This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. It assesses the internal validity and precision of these two designs, using as an example the federal Reading First program as implemented in a midwestern state.
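As a rough illustration of the contrast between the two designs (a textbook sketch, not notation from the paper itself), let $\bar{Y}_{T}$ and $\bar{Y}_{C}$ denote treatment- and comparison-group mean outcomes. The DD estimator differences the pre-post changes of the two groups, while the CITS estimator compares each group's post-period outcomes with the values projected from its baseline trend:

$$\hat{\tau}_{DD} = (\bar{Y}_{T,\text{post}} - \bar{Y}_{T,\text{pre}}) - (\bar{Y}_{C,\text{post}} - \bar{Y}_{C,\text{pre}})$$

$$\hat{\tau}_{CITS} = (\bar{Y}_{T,\text{post}} - \hat{Y}_{T,\text{post}}^{\,\text{trend}}) - (\bar{Y}_{C,\text{post}} - \hat{Y}_{C,\text{post}}^{\,\text{trend}})$$

where $\hat{Y}_{\text{post}}^{\,\text{trend}}$ is the outcome level extrapolated from a group's pre-program time trend. DD thus requires only pre and post means, whereas CITS exploits multiple baseline time points to model trends.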
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
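One standard way to formalize effect variation of this kind (a generic multilevel sketch, not the paper's own model) lets each site $j$ have its own impact:

$$Y_{ij} = \alpha_j + \beta_j T_{ij} + \epsilon_{ij}, \qquad \beta_j = \beta_0 + u_j, \qquad u_j \sim N(0, \tau^2)$$

where $T_{ij}$ indicates program assignment and $\tau^2$ captures cross-site variation in program effects, variation that a framework of this kind would trace to sources such as differences in program implementation, participant characteristics, and local context.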
Strategies for Interpreting and Reporting Intervention Effects on Subgroups
This revised paper examines strategies for interpreting and reporting estimates of intervention effects for subgroups of a study sample. Specifically, it considers why and how subgroup findings matter for applied research, why subgroups should be prespecified before analyses are conducted, and how existing theory and prior research can be used to distinguish subgroups for which study findings are confirmatory from those for which they are exploratory.
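For a single prespecified binary subgroup indicator $S_i$, a standard way to estimate and test a difference in impacts (a generic sketch, not a method attributed to this paper) is an interaction model:

$$Y_i = \alpha + \beta T_i + \gamma S_i + \delta (T_i \times S_i) + \epsilon_i$$

where $\beta$ is the impact for the $S_i = 0$ subgroup, $\beta + \delta$ is the impact for the $S_i = 1$ subgroup, and a test of $\delta = 0$ asks whether the two impacts differ.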
Howard Bloom’s Remarks on Accepting the Peter H. Rossi Award
In a speech before the Association for Public Policy Analysis and Management Conference on November 5, 2010, Howard Bloom, MDRC’s Chief Social Scientist, accepted the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation.
This paper is the first step in a study of using instrumental variables analysis with data from randomized trials to estimate the effects of settings on individuals. The goal of the study is to examine the strengths and weaknesses of the approach and to present them in ways that are broadly accessible to applied quantitative social scientists.
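In its simplest form (a standard result, not a description of the paper's analysis), the approach uses random assignment $Z_i$ as an instrument for a setting-level mediator $D_i$, yielding the Wald estimator:

$$\hat{\tau}_{IV} = \frac{\bar{Y}_{Z=1} - \bar{Y}_{Z=0}}{\bar{D}_{Z=1} - \bar{D}_{Z=0}}$$

the ratio of assignment's effect on the outcome to its effect on the mediator, which identifies the effect of $D$ on $Y$ under the usual relevance and exclusion assumptions.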
In some experimental evaluations of classroom- or school-level interventions, random assignment is conducted at the student level while the program is delivered at a higher level. This paper clarifies the correct causal interpretation of “program impacts” under this study design and discusses its implications and limitations. A real example demonstrates the paper’s key points.
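One way to formalize the interpretive issue (a sketch under the assumption that control students remain in schools where the program operates): the student-level contrast $\bar{Y}_{T} - \bar{Y}_{C}$ estimates $E[Y_i(1) - Y_i(0^{*})]$, where $Y_i(0^{*})$ is a student's outcome when unassigned but enrolled in a school running the program, not $Y_i(0)$ in a program-free school. The two estimands differ whenever the program indirectly affects students who are not assigned to it.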
This paper illustrates how to design an experimental sample for measuring the effects of educational programs when whole schools are randomized to program and control groups. It addresses such issues as how many schools should be randomized, how many students per school are needed, and what mix of program and control schools is best.
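To make these trade-offs concrete, here is a minimal Python sketch of the standard minimum detectable effect size (MDES) formula for a two-level design that randomizes whole schools; it illustrates the general approach, not code or parameter values from the paper.

```python
from scipy.stats import t

def mdes(J, n, P=0.5, rho=0.15, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in standard deviation units)
    for a design randomizing J schools with n students each, a
    proportion P of schools assigned to the program group, and
    intraclass correlation rho (all default values are assumptions)."""
    df = J - 2  # degrees of freedom for the school-level treatment contrast
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    # The variance of the impact estimate has a between-school and a
    # within-school component; rho governs their relative weight.
    variance = rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n)
    return multiplier * variance ** 0.5

# Example: 40 schools, 60 students each, balanced allocation.
print(round(mdes(J=40, n=60), 3))
```

Because the between-school variance component shrinks only as J grows, the number of randomized schools typically matters far more for precision than the number of students per school.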