This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. It assesses the internal validity and precision of the two designs, using the example of the federal Reading First program as implemented in a midwestern state.
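As a rough illustration (not the paper's own analysis), both designs reduce to familiar regressions on a group-by-year panel. All column names, the program start year, and the clustering variable in this sketch are hypothetical:

```python
# Illustrative sketch only; columns (score, treated, year, school_id) and
# the 2003 program start are hypothetical, not from the Reading First study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("school_panel.csv")           # hypothetical school-by-year panel
df["post"] = (df["year"] >= 2003).astype(int)  # hypothetical program start year
df["rel_year"] = df["year"] - 2003             # years relative to program start

# Difference-in-differences: compares pre-post changes in mean outcomes
# between the treated and comparison groups.
dd = smf.ols("score ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(dd.params["treated:post"])  # DD impact estimate

# Comparative interrupted time series: additionally adjusts for each
# group's baseline trend and allows a post-program deviation from it.
cits = smf.ols(
    "score ~ treated + rel_year + treated:rel_year + post"
    " + treated:post + post:rel_year + treated:post:rel_year",
    data=df).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(cits.params["treated:post"])  # CITS impact estimate at program onset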
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
This paper explores the use of instrumental variables analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome.
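One way to picture the idea, under made-up variable names (assigned, dosage, outcome, site), is a two-stage least squares sketch in which random assignment, interacted with site, instruments the mediator:

```python
# Minimal 2SLS sketch; all variable names are hypothetical. Standard errors
# from stage 2 ignore first-stage uncertainty; use an IV package for inference.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("multisite_trial.csv")  # hypothetical person-level file

# Stage 1: predict the mediator from site-specific assignment effects.
stage1 = smf.ols("dosage ~ assigned:C(site) + C(site)", data=df).fit()
df["dosage_hat"] = stage1.fittedvalues

# Stage 2: regress the outcome on the predicted mediator, with site
# fixed effects; the coefficient is the IV estimate of the mediator's effect.
stage2 = smf.ols("outcome ~ dosage_hat + C(site)", data=df).fit()
print(stage2.params["dosage_hat"])
```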
Despite the growing popularity of regression discontinuity analysis, there is little accessible information to guide researchers in implementing the design. This paper provides an overview of the approach and, in easy-to-understand language, offers best practices and general guidance for practitioners.
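In barest outline, the design fits separate regression lines on each side of a cutoff in a rating variable and reads the program effect off the jump at the cutoff. The running variable, cutoff, and bandwidth below are hypothetical, not recommendations from the paper:

```python
# Illustrative local-linear RD sketch; data and tuning values are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rd_sample.csv")
cutoff, bandwidth = 50.0, 10.0                   # hypothetical values
df["centered"] = df["rating"] - cutoff           # center the running variable
df["above"] = (df["centered"] >= 0).astype(int)  # treatment indicator
local = df[df["centered"].abs() <= bandwidth]    # restrict to a window

# Separate slopes on each side of the cutoff; the jump at zero is the effect.
rd = smf.ols("outcome ~ above * centered", data=local).fit()
print(rd.params["above"])  # estimated discontinuity at the cutoff
```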
Using Bayesian inference, an alternative to classical statistics, this paper reanalyzes results from three published studies of interventions to increase employment and reduce welfare dependency. The analysis formally incorporates prior beliefs about the interventions, characterizes the results in terms of the distribution of possible effects, and generally confirms the earlier published findings.
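The core move can be sketched with a conjugate normal-normal update that combines a prior belief about the program effect with a study's estimate; the numbers here are invented, not drawn from the three studies:

```python
# Conjugate normal-normal sketch: posterior for a program effect given a
# normal prior and a normally distributed estimate. All numbers are made up.
import numpy as np
from scipy import stats

prior_mean, prior_sd = 0.0, 200.0  # hypothetical prior on the earnings impact
est, se = 500.0, 150.0             # hypothetical published estimate and its SE

# Precision-weighted posterior mean and variance.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + est / se**2)
post_sd = np.sqrt(post_var)

# Probability, under the posterior, that the true effect is positive.
print(post_mean, post_sd, 1 - stats.norm.cdf(0, post_mean, post_sd))
```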
This MDRC research methodology working paper examines the core analytic elements of randomized experiments for social research. Its goal is to provide a compact discussion of how such experiments are designed and analyzed to measure the impacts of social or educational interventions.
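The central calculation in the simplest such experiment is the difference in mean outcomes between program and control groups, with its standard error; the data in this sketch are simulated:

```python
# Basic impact estimate in a two-arm randomized experiment. Outcomes are
# simulated here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(10.0, 3.0, 400)      # simulated control-group outcomes
program = rng.normal(11.0, 3.0, 400)      # simulated program-group outcomes

impact = program.mean() - control.mean()  # unbiased impact estimate
se = np.sqrt(program.var(ddof=1) / len(program)
             + control.var(ddof=1) / len(control))
print(impact, se, impact / se)            # estimate, standard error, t-statistic
```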