Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Procedures that adjust for this risk can reduce power, or the probability of detecting effects that do exist. This post in MDRC’s Reflections on Methodology series discusses how to estimate power when such adjustments are made, as well as alternative definitions of power.
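A minimal sketch of that tradeoff, assuming a two-tailed z-test and a Bonferroni adjustment; the effect size, standard error, and number of outcomes below are hypothetical and are not taken from the post:

```python
# Illustrative sketch: how a multiple-comparisons adjustment lowers power.
# Assumes a two-tailed z-test with a known standard error; the effect size,
# standard error, and number of outcomes are hypothetical.
from scipy.stats import norm

def power(effect, se, alpha):
    """Approximate power of a two-sided z-test for a true effect of `effect`."""
    z_crit = norm.ppf(1 - alpha / 2)
    mu = effect / se
    return norm.sf(z_crit - mu) + norm.cdf(-z_crit - mu)

effect, se, n_outcomes = 0.20, 0.07, 5
print(round(power(effect, se, alpha=0.05), 2))               # unadjusted: ~0.81
print(round(power(effect, se, alpha=0.05 / n_outcomes), 2))  # Bonferroni-adjusted: ~0.61
```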
To improve outcomes among high-interest borrowers, policymakers need to understand what drives the use of these loans. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
This paper illustrates how to design an experimental sample for measuring the effects of educational programs when whole schools are randomized to program and control groups. It addresses such issues as how many schools should be randomized, how many students are needed per school, and what mix of program and control schools works best.
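A minimal sketch of the design calculations these two papers address, using the standard two-level variance formula for school-randomized designs; the intraclass correlation, R-squared values, and sample sizes below are hypothetical and are not drawn from either paper:

```python
# Illustrative sketch: minimum detectable effect size (MDES) for a design that
# randomizes whole schools, using the standard two-level variance formula.
# The intraclass correlation, R-squared values, and sample sizes are hypothetical.
from math import sqrt

def mdes(n_schools, students_per_school, icc, r2_between=0.0, r2_within=0.0,
         p_treated=0.5, multiplier=2.8):
    """Approximate MDES in effect-size units; a multiplier of about 2.8
    corresponds to 80 percent power at a 5 percent two-tailed significance level."""
    pq = p_treated * (1 - p_treated)
    between = icc * (1 - r2_between) / (pq * n_schools)
    within = (1 - icc) * (1 - r2_within) / (pq * n_schools * students_per_school)
    return multiplier * sqrt(between + within)

# Precision gain from a school-level pretest (hypothetical R-squared of 0.64):
print(mdes(40, 60, icc=0.20))                    # no covariates: ~0.41
print(mdes(40, 60, icc=0.20, r2_between=0.64))   # with a pretest: ~0.26

# Doubling schools shrinks the MDES more than doubling students per school:
print(mdes(80, 60, icc=0.20, r2_between=0.64))   # ~0.18
print(mdes(40, 120, icc=0.20, r2_between=0.64))  # ~0.25
```

In this sketch, a school-level pretest and additional randomized schools both reduce the minimum detectable effect noticeably more than adding students within schools does.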
A Manual for Qualitative Data Management and Analysis
Planning for the Jobs-Plus Demonstration