An earlier post in this series discussed considerations for reporting and interpreting cross-site impact variation and for designing studies to investigate such variation. This post discusses how those ideas were applied to address two broad questions in the Mother and Infant Home Visiting Program Evaluation.
Data from management information systems, direct observations, and the reactions of staff members can help programs understand themselves, identify areas for improvement, and set goals. This infographic presents examples of how programs in the Building Bridges and Bonds study used data from different sources to gain insights.
Lessons from the Los Angeles College Promise Program
The Los Angeles College Promise aims to increase college access and success by offering support services and a scholarship that covers tuition and fees for two years. This brief highlights how it has established a cycle of continual program improvement that uses insights from behavioral science and involves the students themselves.
Part I of this two-part post discussed MDRC’s work with practitioners to construct valid and reliable measures of implementation fidelity to an early childhood curriculum. Part II examines how those data can reveal associations between levels of fidelity and gains in children’s academic skills.
The Every Student Succeeds Act (ESSA) requires states to implement accountability systems built on five measures. Thirty-five states have ESSA accountability systems that include measures of career readiness. Here is one example of how a school district might strengthen students’ career readiness using Career Academies.
Lessons from the Grameen America Evaluation
In any study, there is a tension between research and program needs. This program’s group-based microloan model presented particular challenges for random assignment. Reflections on Methodology looks at how the research design was adapted to allow a fair test of the program’s effectiveness without hampering its ability to operate.
As an alternative to random assignment, a regression discontinuity design takes advantage of situations where program eligibility is determined by whether a score exceeds a threshold. With careful attention to assumptions, analysis, and interpretation, this quasi-experimental design can provide rigorous estimates of program effects. Reflections on Methodology outlines some considerations.
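To make the logic concrete, here is a minimal sketch of a sharp regression discontinuity estimate in Python. The cutoff, bandwidth, and simulated data-generating process are illustrative assumptions, not values from any actual evaluation.

```python
# Sharp regression discontinuity on simulated data: the outcome varies
# smoothly with the eligibility score, plus a jump at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000
score = rng.uniform(0, 100, n)            # hypothetical eligibility score
cutoff = 50.0
treated = (score >= cutoff).astype(float)
outcome = 0.2 * score + 5.0 * treated + rng.normal(0, 3, n)  # true jump = 5

# Local linear regression within a bandwidth of the cutoff, allowing
# the slope to differ on each side of the threshold.
bandwidth = 10.0
window = np.abs(score - cutoff) <= bandwidth
centered = score[window] - cutoff
X = sm.add_constant(np.column_stack([
    treated[window],              # the discontinuity (the program effect)
    centered,                     # slope below the cutoff
    centered * treated[window],   # change in slope above the cutoff
]))
fit = sm.OLS(outcome[window], X).fit()
print(f"estimated effect at the cutoff: {fit.params[1]:.2f}")  # near 5
```

The key assumption this sketch embodies is that, absent the program, the outcome would vary smoothly across the threshold, so any jump at the cutoff can be attributed to the program.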
Schools use individual screening tests to identify students at risk of falling behind in reading. Could predictive analytics, incorporating multiple composite and subsection scores from a series of tests over time, do a better job of identifying at-risk students? Reflections on Methodology gives an example of this approach.
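As a concrete illustration, here is a minimal sketch that pools simulated composite and subsection scores from three test administrations into one feature matrix and cross-validates a risk classifier. The features, the “at risk” label, and the choice of logistic regression are all illustrative assumptions, not details from the post.

```python
# Predict risk status from many scores at once, rather than a single
# screening cutoff, and evaluate with cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
# Simulated scores: a composite and two subsection scores at each of
# three time points (nine features per student).
X = rng.normal(100, 15, size=(n, 9))
# Simulated "fell behind" label, loosely tied to the earliest composite.
at_risk = (X[:, 0] + rng.normal(0, 10, n) < 90).astype(int)

model = LogisticRegression(max_iter=1_000)
auc = cross_val_score(model, X, at_risk, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```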
Lessons from the Grameen America Formative Evaluation
Random assignment is prized for its rigor, but it’s not always feasible to carry out. This Reflections on Methodology post outlines other strong options for studying the effects of a program and illustrates the application of some key considerations in a specific context.
This paper explores the use of instrumental variables analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome.
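One common version of this design uses treatment assignment interacted with site as the instrument set for the mediator, since compliance (and hence the mediator) shifts by different amounts across sites. The sketch below walks through that two-stage least squares logic on simulated data; the data-generating process and the true effect size are illustrative assumptions, not results from the paper.

```python
# Multisite IV via two-stage least squares: randomized assignment (z),
# interacted with site, instruments for an endogenous mediator (m).
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_per = 20, 100
site = np.repeat(np.arange(n_sites), n_per)
z = rng.integers(0, 2, n_sites * n_per).astype(float)  # random assignment

# Compliance varies by site, so the z-by-site interactions carry
# identifying information about the mediator.
push = rng.normal(1.0, 0.5, n_sites)[site]
mediator = push * z + rng.normal(0, 1, len(z))
outcome = 2.0 * mediator + rng.normal(0, 1, len(z))    # true effect = 2.0

site_dummies = (site[:, None] == np.arange(n_sites)).astype(float)
instruments = site_dummies * z[:, None]                # z-by-site
exog = np.column_stack([np.ones(len(z)), site_dummies[:, 1:]])  # site FEs

# Stage 1: project the mediator onto instruments plus exogenous controls.
stage1 = np.column_stack([exog, instruments])
m_hat = stage1 @ np.linalg.lstsq(stage1, mediator, rcond=None)[0]

# Stage 2: regress the outcome on the fitted mediator and controls.
stage2 = np.column_stack([exog, m_hat])
beta = np.linalg.lstsq(stage2, outcome, rcond=None)[0]
print(f"2SLS estimate of the mediator effect: {beta[-1]:.2f}")  # near 2
```

Note that standard errors from a naive second-stage regression would be wrong; a production analysis would use a dedicated IV routine that accounts for the estimated first stage.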