This paper examines the properties of two nonexperimental study designs used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. It assesses the internal validity and precision of both designs, using the federal Reading First program, as implemented in a midwestern state, as its example.
Using Volunteers to Improve the Academic Outcomes of Underserved Students
School-based mentoring programs have been shown to improve students’ academic performance and self-confidence. This study examines what makes the Big Brothers Big Sisters of America school-based mentoring program effective, offering key insights for practitioners. It also contributes a theoretical framework for assessing other randomized evaluations of such programs.
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of such variation and helps researchers integrate the study of variation in program effectiveness with the study of program implementation.
After one year, the Center for Employment Opportunities (CEO) transitional jobs program had generated a large but short-lived increase in employment for ex-prisoners. The program also reduced recidivism among a subgroup of recently released prisoners: they were less likely than control group members to have their parole revoked, to be convicted of a felony, or to be reincarcerated.
No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention’s effects. This working paper argues that effect sizes should be interpreted against empirical benchmarks, and it presents three types of benchmarks in the context of education research.
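As background (this definition is standard practice, not drawn from the paper itself), the most common standardized effect size is the standardized mean difference, the gap between treatment and control group means expressed in standard deviation units:

$$ \mathrm{ES} = \frac{\bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}}{SD_{\text{pooled}}} $$

An effect size of 0.25, for example, means the treatment group outscored the control group by a quarter of a standard deviation; the paper's benchmarks supply empirical reference points for judging whether such a value is meaningfully large in education settings.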
Building Evidence About What Works to Improve Self-Sufficiency
This working paper argues for building a stronger base of evidence in the housing-employment policy arena through an expanded use of randomized controlled trials.