This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-difference (DD) design. It assesses the internal validity and precision of the two designs, using the example of the federal Reading First program as implemented in a midwestern state.
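As a brief illustration of the contrast between the two designs (a standard formulation, not drawn from the paper itself), the DD estimator compares the pre-post change in mean outcomes for the treated group with the corresponding change for the comparison group,

$$\hat{\tau}_{DD} = \left(\bar{Y}^{T}_{post} - \bar{Y}^{T}_{pre}\right) - \left(\bar{Y}^{C}_{post} - \bar{Y}^{C}_{pre}\right),$$

whereas CITS first fits a baseline trend to each group's pre-intervention time series and estimates the effect as the treated group's post-intervention deviation from its projected trend, net of the comparison group's deviation from its own trend. DD thus requires as little as one pre-intervention observation per group, while CITS uses multiple pre-intervention time points and can control for differing baseline trends.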
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of variation in program effects and helps researchers integrate the study of variation in program effectiveness with the study of program implementation.
In a speech before the Association for Public Policy Analysis and Management Conference on November 7, 2008, Judith M. Gueron, President Emerita and Scholar in Residence at MDRC, accepted the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation.
This MDRC working paper on research methodology explores two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions.
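For context (standard usage rather than a result from the paper), an achievement effect size here is the standardized mean difference between treatment and control group outcomes,

$$ES = \frac{\bar{Y}_{T} - \bar{Y}_{C}}{SD_{pooled}},$$

so empirical benchmarks are needed to judge whether a value such as 0.10 or 0.25 should count as small or large in a given educational context.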
This MDRC working paper on research methodology provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of interventions on children.
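One core quantity in such designs (a standard result, not specific to this paper) is the design effect of group randomization: with groups of n children each and intraclass correlation $\rho$, randomizing groups rather than individuals inflates the variance of the estimated impact by

$$DEFF = 1 + (n - 1)\rho,$$

which is why even a small $\rho$ can substantially increase the number of groups a study needs for adequate statistical power.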
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
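A minimal sketch of the idea (hypothetical data and parameter values, not the paper's analysis): simulate a school-randomized trial using school-level mean scores and compare the standard error of the estimated impact with and without a pretest covariate.

```python
# Illustrative sketch: simulate a school-randomized trial and compare
# the precision of the impact estimate with and without a pretest
# covariate. All values (60 schools, true impact of 0.2, strength of
# the pretest) are hypothetical choices for demonstration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_schools = 60
pretest = rng.normal(0.0, 1.0, n_schools)              # prior-year school mean score
treat = (rng.permutation(n_schools) < n_schools // 2)  # randomize half to treatment
treat = treat.astype(float)

# Posttest school means: strongly related to the pretest, true impact = 0.2
posttest = 0.2 * treat + 0.8 * pretest + rng.normal(0.0, 0.4, n_schools)

# Unadjusted model: posttest regressed on the treatment indicator only
unadjusted = sm.OLS(posttest, sm.add_constant(treat)).fit()

# Adjusted model: same regression plus the pretest covariate
adjusted = sm.OLS(posttest, sm.add_constant(np.column_stack([treat, pretest]))).fit()

print(f"impact SE, no pretest:   {unadjusted.bse[1]:.3f}")
print(f"impact SE, with pretest: {adjusted.bse[1]:.3f}")  # markedly smaller
```

The standard error shrinks roughly in proportion to $\sqrt{1 - R^2}$, where $R^2$ is the share of outcome variance the pretest explains; this is the kind of precision gain the paper examines with empirical data on student achievement.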