An earlier post in this series discussed considerations for reporting and interpreting cross-site impact variation and for designing studies to investigate it. This post discusses how those ideas were applied to address two broad questions in the Mother and Infant Home Visiting Program Evaluation.
Part I of this two-part post discussed MDRC’s work with practitioners to construct valid and reliable measures of implementation fidelity to an early childhood curriculum. Part II examines how those data can reveal associations between levels of fidelity and gains in children’s academic skills.
Lessons from the Grameen America Evaluation
In any study, there is a tension between research and program needs. Grameen America’s group-based microloan model presented particular challenges for random assignment. Reflections on Methodology looks at how the research design was adapted to allow a fair test of the program’s effectiveness without hampering its ability to operate.
As an alternative to random assignment, a regression discontinuity design takes advantage of situations in which program eligibility is determined by whether a score falls above or below a cutoff. With careful attention to assumptions, analysis, and interpretation, this quasi-experimental design can provide rigorous estimates of program effects. Reflections on Methodology outlines some considerations.
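To make the estimation idea concrete, here is a minimal sketch of a sharp regression discontinuity analysis: it simulates an eligibility score with a cutoff, then fits a local linear regression with separate slopes on each side, so that the jump at the cutoff is the estimated effect. The data, bandwidth, and variable names are illustrative assumptions, not the analysis from the post.

```python
# Hypothetical sketch of sharp regression discontinuity estimation.
# Units scoring at or above the cutoff receive the program; the impact
# is the jump in the outcome at the cutoff. All values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff, true_effect = 2000, 50.0, 4.0

score = rng.uniform(0, 100, n)                  # eligibility score
treated = (score >= cutoff).astype(float)       # sharp assignment rule
outcome = 10 + 0.3 * score + true_effect * treated + rng.normal(0, 5, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing a separate slope on each side.
bandwidth = 10.0
near = np.abs(score - cutoff) <= bandwidth
centered = score[near] - cutoff
X = sm.add_constant(np.column_stack([
    treated[near],                # jump at the cutoff (the RD estimate)
    centered,                     # slope below the cutoff
    treated[near] * centered,     # change in slope above the cutoff
]))
fit = sm.OLS(outcome[near], X).fit()
print(f"Estimated effect at cutoff: {fit.params[1]:.2f}")
```

Fitting separate slopes on each side guards against mistaking curvature in the score-outcome relationship for a program effect, one of the assumptions the post flags as needing careful attention.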
Schools use individual screening tests to identify students at risk of falling behind in reading. Could predictive analytics, incorporating multiple composite and subsection scores from a series of tests over time, do a better job of identifying at-risk students? Reflections on Methodology gives an example of this approach.
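As a minimal, hypothetical illustration of what such a model might look like, the sketch below pools composite and subsection scores from several screening waves into a logistic regression and ranks students by predicted risk. The features, simulated data, and model choice are assumptions for the example, not the approach described in the post.

```python
# Hypothetical sketch: predicting reading risk from multiple screening
# scores over time. Feature layout, data, and model are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Columns stand in for composite and subsection scores from three waves.
X = rng.normal(size=(n, 6))
# Simulated risk: later waves and one subsection weigh most heavily.
logits = -1.0 - 0.8 * X[:, 4] - 0.6 * X[:, 5] - 0.4 * X[:, 2]
at_risk = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(
    X, at_risk, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank students by predicted risk rather than a single test's cut score.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, risk_scores):.2f}")
```

The practical difference from single-test screening is that students are flagged by a probability that weighs all of their scores together, rather than by whether one test falls below a cut score.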
Lessons from the Grameen America Formative Evaluation
Random assignment is prized for its rigor, but it is not always feasible. This Reflections on Methodology post outlines other strong options for studying a program’s effects and illustrates how some key considerations apply in a specific context.
An Empirical Assessment Based on Four Recent Evaluations
This reference report, prepared for the National Center for Education Evaluation and Regional Assistance of the Institute of Education Sciences (IES), uses data from four recent IES-funded experimental studies that measured student achievement with both state tests and a study-administered test.
This paper provides practical guidance for researchers who are designing and analyzing studies that randomize schools to measure intervention effects on student academic outcomes. Such studies involve three levels of clustering (students in classrooms in schools), and the guidance addresses cases where information on the middle level (classrooms) is missing.
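As a rough illustration of why the missing middle level matters for design, the sketch below uses the standard variance formula for a difference in school means in a balanced three-level design. The variance components and sample sizes are made-up values, not figures from the paper.

```python
# Hypothetical sketch: how classroom-level clustering feeds into the
# variance of an impact estimate when schools are randomized. Uses the
# textbook variance of a difference in school means for a balanced
# three-level design; all numbers below are illustrative.
def impact_variance(n_schools, classes_per_school, students_per_class,
                    var_school, var_class, var_student):
    # Variance of one school's mean outcome.
    school_mean_var = (var_school
                       + var_class / classes_per_school
                       + var_student / (classes_per_school * students_per_class))
    # Half of schools treated, half control: Var(diff) = 4 * var / J.
    return 4 * school_mean_var / n_schools

# With classroom data missing, the classroom component cannot be
# estimated separately, but it still inflates the school-mean variance.
with_classrooms = impact_variance(40, 4, 25, 0.15, 0.10, 0.75)
ignoring_them = impact_variance(40, 4, 25, 0.15, 0.00, 0.75)
print(f"{with_classrooms:.4f} vs {ignoring_them:.4f}")
```

Setting the classroom component to zero understates the variance of the impact estimate, which is one reason empirical guidance on the typical size of that component is useful when classroom identifiers are unavailable.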
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
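A small simulation can show the mechanism. The sketch below compares the cluster-robust standard error on the treatment indicator with and without a pretest covariate in a simulated school-randomized study; all data-generating values are assumptions for illustration, not results from the paper.

```python
# Hypothetical sketch: how a baseline pretest tightens the impact
# estimate in a school-randomized study. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, students = 60, 50
school = np.repeat(np.arange(n_schools), students)
treat = np.repeat(rng.permutation([0, 1] * (n_schools // 2)), students)
pretest_school = rng.normal(size=n_schools)          # school-level pretest
pretest = pretest_school[school] + rng.normal(0, 0.5, len(school))
posttest = (0.2 * treat + 0.7 * pretest
            + np.repeat(rng.normal(0, 0.3, n_schools), students)
            + rng.normal(0, 0.7, len(school)))

df = pd.DataFrame(dict(school=school, treat=treat,
                       pretest=pretest, posttest=posttest))
# Fit the impact model with and without the pretest; cluster standard
# errors at the school level, the unit of randomization.
for formula in ["posttest ~ treat", "posttest ~ treat + pretest"]:
    fit = smf.ols(formula, df).fit(cov_type="cluster",
                                   cov_kwds={"groups": df["school"]})
    print(f"{formula}: SE(treat) = {fit.bse['treat']:.3f}")
```

Because the pretest explains outcome variation at both the student and school levels, the adjusted model yields a noticeably smaller standard error for the same sample, which is the precision gain the paper quantifies empirically.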