This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. The paper assesses the internal validity and precision of these two designs, using the example of the federal Reading First program as implemented in a midwestern state.
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
This paper illustrates how to design an experimental sample for measuring the effects of educational programs when whole schools are randomized to a program group and a control group. It addresses such issues as how many schools should be randomized, how many students per school are needed, and what mix of program and control schools is best.
A Manual for Qualitative Data Management and Analysis
New Directions in Evaluations of American Welfare-to-Work and Employment Initiatives
Methodological Lessons from an Evaluation of Accelerated Schools
The Effects of Program Management and Services, Economic Environment, and Client Characteristics