This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. It assesses the internal validity and precision of both designs, using the example of the federal Reading First program as implemented in a midwestern state.
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
In a speech given at a conference sponsored by the French government on the role of experimental studies in reducing poverty, MDRC President Gordon Berlin described how the results of random assignment studies have served as powerful levers for changing social policy in the United States.
No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention's effects. This working paper argues that effect sizes should be interpreted using empirical benchmarks, and it presents three types of benchmarks in the context of education research.
In these remarks, delivered at Speaker Nancy Pelosi’s National Summit on America’s Children on May 22, MDRC President Gordon Berlin summarizes rigorous research evidence showing that supplementing the earnings of parents helps raise families out of poverty and improves the school performance of young children.
In his testimony before the House Ways and Means Subcommittee on Income Security and Family Support, MDRC President Gordon Berlin argues that the most direct way to alleviate poverty is to tackle the legacy of falling wages, particularly for men with less education.