An earlier post in this series discussed considerations for reporting and interpreting cross-site impact variation and for designing studies to investigate it. This post discusses how those ideas were applied to address two broad questions in the Mother and Infant Home Visiting Program Evaluation.
Part I of this two-part post discussed MDRC’s work with practitioners to construct valid and reliable measures of implementation fidelity to an early childhood curriculum. Part II examines how those data can reveal associations between levels of fidelity and gains in children’s academic skills.
Lessons from the Grameen America Evaluation
In any study, there is a tension between research and program needs. Grameen America’s group-based microloan model presented particular challenges for random assignment. Reflections on Methodology looks at how the research design was adapted to allow a fair test of the program’s effectiveness without hampering its ability to operate.
As an alternative to random assignment, a regression discontinuity design takes advantage of situations where program eligibility is determined by whether a score exceeds a threshold. With careful attention to assumptions, analysis, and interpretation, this quasi-experimental design can provide rigorous estimates of program effects. Reflections on Methodology outlines some considerations.
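To make the mechanics concrete, here is a minimal sketch (not drawn from the post) of a sharp regression discontinuity estimate in Python, using simulated data, a hypothetical eligibility cutoff of 50, and a local linear specification that allows different slopes on each side of the cutoff:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: a score determines eligibility, and the outcome is measured later.
rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(0, 100, n)
cutoff = 50.0                                      # hypothetical eligibility threshold
treated = (score >= cutoff).astype(float)          # sharp design: eligibility jumps at the cutoff
outcome = 10 + 0.05 * score + 2.0 * treated + rng.normal(0, 1, n)

# Local linear regression within a bandwidth around the cutoff,
# with separate slopes above and below it.
bandwidth = 10.0
near = np.abs(score - cutoff) <= bandwidth
centered = score[near] - cutoff
X = sm.add_constant(np.column_stack([treated[near], centered, treated[near] * centered]))
fit = sm.OLS(outcome[near], X).fit(cov_type="HC1")

# The coefficient on the treatment indicator is the estimated jump at the cutoff.
print(f"Estimated effect at the cutoff: {fit.params[1]:.2f}")
```

In practice, the choice of bandwidth and functional form drives the estimate, which is why the post’s emphasis on assumptions, analysis, and interpretation matters.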
Schools use individual screening tests to identify students at risk of falling behind in reading. Could predictive analytics, incorporating multiple composite and subsection scores from a series of tests over time, do a better job of identifying at-risk students? Reflections on Methodology gives an example of this approach.
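As a rough illustration (with invented data, assuming three test waves that each contribute a composite score and two subsection scores), the sketch below pools all nine scores in a simple classifier and checks how well it ranks students by risk:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated scores: 3 test waves x (1 composite + 2 subsection scores) per student.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 9))
# Invented "at risk" label driven by a weighted mix of the scores plus noise.
risk = (X @ rng.normal(size=9) + rng.normal(0, 1, n)) < -1.0

# Pool all scores in one classifier; cross-validated AUC gauges how well
# the model ranks students by risk, versus thresholding a single screener.
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, risk, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")
```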
Lessons from the Grameen America Formative Evaluation
Random assignment is prized for its rigor, but it’s not always feasible to carry out. This Reflections on Methodology post outlines other strong options for studying the effects of a program and illustrates the application of some key considerations in a specific context.
Howard Bloom’s Remarks on Accepting the Peter H. Rossi Award
In a speech before the Association for Public Policy Analysis and Management Conference on November 5, 2010, Howard Bloom, MDRC’s Chief Social Scientist, accepted the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation.
Strategies for Interpreting and Reporting Intervention Effects on Subgroups
This revised paper examines strategies for interpreting and reporting estimates of intervention effects for subgroups of a study sample. Specifically, it considers why and how subgroup findings are important for applied research, why subgroups should be prespecified before analyses are conducted, and how existing theory and prior research can distinguish subgroups for which study findings are confirmatory from those for which they are exploratory.
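As an illustration of what prespecification buys in practice (a sketch with simulated data and one hypothetical subgroup indicator, not an analysis from the paper), differential impacts are typically estimated with a treatment-by-subgroup interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial with one prespecified binary subgroup.
rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "subgroup": rng.integers(0, 2, n),
})
df["y"] = 1.0 + 0.5 * df.treat + 0.3 * df.subgroup \
          + 0.4 * df.treat * df.subgroup + rng.normal(0, 1, n)

# The interaction coefficient tests whether the impact differs across subgroups;
# prespecifying the subgroup keeps this test confirmatory rather than exploratory.
fit = smf.ols("y ~ treat * subgroup", data=df).fit(cov_type="HC1")
print(fit.params[["treat", "treat:subgroup"]])
```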
This paper is the first step in a study of instrumental variables analysis with randomized trials to estimate the effects of settings on individuals. The goal of the study is to examine the strengths and weaknesses of the approach and present them in ways that are broadly accessible to applied quantitative social scientists.
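As a rough sketch of the logic (simulated data, not code from the study), the example below treats random assignment as an instrument for a setting-level feature and recovers its effect on individual outcomes by two-stage least squares:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: random assignment (Z) shifts a setting-level feature (M),
# which in turn affects an individual outcome (Y).
rng = np.random.default_rng(0)
n = 1500
Z = rng.integers(0, 2, n).astype(float)      # randomized assignment (the instrument)
M = 1.0 + 0.8 * Z + rng.normal(0, 1, n)      # setting feature moved by assignment
Y = 2.0 + 1.5 * M + rng.normal(0, 1, n)      # outcome depends on the setting feature

# Two-stage least squares by hand: regress M on Z, then Y on the fitted values.
stage1 = sm.OLS(M, sm.add_constant(Z)).fit()
stage2 = sm.OLS(Y, sm.add_constant(stage1.fittedvalues)).fit()

# Point estimate of the setting effect (note: naive second-stage standard
# errors are not valid; a dedicated IV routine would correct them).
print(f"IV estimate of the setting effect: {stage2.params[1]:.2f}")
```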
In some experimental evaluations of classroom- or school-level interventions, random assignment is conducted at the student level while the program is delivered at the classroom or school level. This paper clarifies the correct causal interpretation of “program impacts” under this design and discusses its implications and limitations. A real-world example demonstrates the paper’s key points.