Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Adjustment procedures that address this risk can reduce power, the probability of detecting effects that do exist. This post in MDRC’s Reflections on Methodology series discusses how to estimate power when making such adjustments, as well as alternative definitions of power.
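To make the trade-off concrete, here is a minimal sketch (not from the post itself) using a Bonferroni adjustment, one common multiplicity procedure, with a two-sided z-test. The standardized effect size `delta` (true effect divided by its standard error) and the count of five outcomes are illustrative assumptions, not values from the publication.

```python
from statistics import NormalDist

ND = NormalDist()

def power_two_sided_z(delta: float, alpha: float) -> float:
    """Power of a two-sided z-test at level alpha, for a standardized
    effect delta (true effect divided by its standard error)."""
    z_crit = ND.inv_cdf(1 - alpha / 2)
    # Probability the test statistic falls in either rejection region.
    return ND.cdf(delta - z_crit) + ND.cdf(-delta - z_crit)

# Illustrative values: delta = 2.8 yields roughly 80% power at alpha = 0.05.
alpha, n_outcomes, delta = 0.05, 5, 2.8

unadjusted = power_two_sided_z(delta, alpha)
bonferroni = power_two_sided_z(delta, alpha / n_outcomes)

print(f"power, unadjusted test:     {unadjusted:.2f}")
print(f"power, Bonferroni-adjusted: {bonferroni:.2f}")
```

With five outcomes, testing each at α/5 guards against false positives but, in this example, cuts power from about 0.80 to about 0.59, which is the trade-off motivating the alternative approaches the post discusses.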
To improve outcomes among borrowers who rely on high-interest loans, policymakers need to understand what drives their use of such loans. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
A Literature Review
Examining the scholarly literature published since a seminal review in 2000, this working paper discusses the principles that underlie project-based learning, how it has been used in K-12 settings, the challenges teachers have confronted in implementing it, and what is known about its effectiveness in improving students’ learning outcomes.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
Results from a Performance-Based Scholarship Experiment
This random assignment study examines the long-term impacts of a program at The University of New Mexico offering low-income first-year students enhanced academic advising and financial aid that is contingent on academic performance. It finds that the program increased credit hour accumulation during the first two years and graduation rates after five years.
Beyond measuring average program impacts, it is important to understand how impacts vary. This paper gives a broad overview of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from variation in program effects across individuals, across subgroups of individuals, and across program sites.
This random assignment study examines the long-term impacts of a community college program offering financial aid that is contingent on academic performance. Focusing on low-income parents, mostly mothers, it finds that the program decreased the time it took students to earn a degree but did not increase employment or earnings.
Using data from the Head Start Impact Study, this paper examines variation in Head Start effects across individual children, policy-relevant subgroups of children, and Head Start centers. It finds that past estimates of the average effect of Head Start programs mask a wide range of relative program effectiveness.