Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Procedures that adjust for this risk, however, can reduce statistical power: the probability of detecting effects that do exist. This post in MDRC’s Reflections on Methodology series discusses how to estimate power when making such adjustments, as well as alternative definitions of power.
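The trade-off the post describes can be seen in a small simulation. This is an illustrative sketch, not MDRC’s procedure; the sample size, effect size, and number of outcomes (K) are all hypothetical, and the adjustment shown is a simple Bonferroni correction.

```python
import math
import numpy as np

# Monte Carlo estimate of the power to detect a true effect on one outcome
# when K outcomes are tested, with and without a Bonferroni adjustment.
rng = np.random.default_rng(0)
K, n, effect, alpha, reps = 5, 200, 0.25, 0.05, 2000

hits_raw = hits_adj = 0
for _ in range(reps):
    treat = rng.normal(effect, 1, n)   # treatment group on one outcome
    ctrl = rng.normal(0, 1, n)         # control group
    z = (treat.mean() - ctrl.mean()) / math.sqrt(2 / n)  # two-sample z-stat
    p = math.erfc(abs(z) / math.sqrt(2))                 # two-sided p-value
    hits_raw += p < alpha          # unadjusted test
    hits_adj += p < alpha / K      # Bonferroni-adjusted threshold

power_raw, power_adj = hits_raw / reps, hits_adj / reps
print(f"power, unadjusted: {power_raw:.2f}; Bonferroni-adjusted: {power_adj:.2f}")
```

Dividing alpha by K protects against false positives across the K tests, but the simulated power drops noticeably, which is the cost the post asks researchers to plan for.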
To improve outcomes among high-interest borrowers, policymakers need to understand what is driving borrowers’ use of these loans. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
This paper explores the use of instrumental variables analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome.
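The core idea can be sketched with simulated data. This is a hedged illustration of the basic instrumental variables logic, not the paper’s estimator: random assignment serves as an instrument for the mediator, and all variables and coefficients below are hypothetical (the true effect of the mediator on the outcome is set to 2.0).

```python
import numpy as np

# Random assignment Z instruments for a mediator M when estimating M's
# effect on outcome Y, because an unobserved confounder u biases OLS.
rng = np.random.default_rng(1)
n = 10_000
z = rng.integers(0, 2, n)                      # random assignment (instrument)
u = rng.normal(0, 1, n)                        # unobserved confounder of M and Y
m = 0.5 * z + 0.8 * u + rng.normal(0, 1, n)    # mediator, moved by assignment
y = 2.0 * m + 1.5 * u + rng.normal(0, 1, n)    # outcome; true effect of M is 2.0

# Naive OLS of Y on M is biased upward by the confounder u.
ols = np.cov(m, y)[0, 1] / np.var(m)

# Wald/IV estimator: ratio of intent-to-treat effects on Y and on M.
iv = (y[z == 1].mean() - y[z == 0].mean()) / (m[z == 1].mean() - m[z == 0].mean())

print(f"OLS (biased): {ols:.2f}   IV: {iv:.2f}   truth: 2.00")
```

A multisite trial extends this idea by using site-by-assignment interactions as multiple instruments; the single-instrument Wald ratio above shows only the simplest case.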
Despite the growing popularity of regression discontinuity analysis, there is little accessible guidance for researchers implementing this research design. This paper provides an overview of the approach and, in easy-to-understand language, offers best practices and general guidance for practitioners.
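The basic design can be illustrated with simulated data. This is a minimal sketch of the standard local linear approach, not the paper’s specific recommendations; the cutoff, bandwidth, and data-generating process are all hypothetical (the true jump at the cutoff is set to 1.5).

```python
import numpy as np

# Units at or above a cutoff on a running variable receive the program;
# the impact is the jump in the outcome regression at the cutoff.
rng = np.random.default_rng(2)
n, cutoff, h = 5000, 0.0, 0.5          # h is a hypothetical bandwidth
x = rng.uniform(-1, 1, n)              # running variable (e.g., a test score)
treated = x >= cutoff
y = 0.8 * x + 1.5 * treated + rng.normal(0, 0.5, n)

# Local linear fit on each side of the cutoff, within the bandwidth.
left = (x < cutoff) & (x > cutoff - h)
right = (x >= cutoff) & (x < cutoff + h)
b_left = np.polyfit(x[left], y[left], 1)
b_right = np.polyfit(x[right], y[right], 1)

# Impact estimate: difference of the two fitted values at the cutoff.
impact = np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)
print(f"estimated jump at cutoff: {impact:.2f} (truth: 1.50)")
```

In practice, bandwidth choice, functional form, and checks for manipulation of the running variable are exactly the implementation decisions that guidance of this kind addresses.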
Using Bayesian methods, an alternative to classical statistics, this paper reanalyzes results from three published studies of interventions to increase employment and reduce welfare dependency. The analysis formally incorporates prior beliefs about the interventions, characterizing the results in terms of the distribution of possible effects, and generally confirms the earlier published findings.
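The mechanics of combining a prior belief with a study’s estimate can be sketched with a conjugate normal-normal model. This is not the paper’s model, and every number below is hypothetical; it only shows how a prior and an estimated impact yield a posterior distribution of possible effects.

```python
import math

# Combine a skeptical prior about an effect with a study's point estimate.
prior_mean, prior_sd = 0.0, 2.0        # prior centered at no effect
est, se = 3.0, 1.5                     # study's estimate and standard error

# Precision-weighted combination (precision = 1 / variance).
w_prior, w_data = 1 / prior_sd**2, 1 / se**2
post_var = 1 / (w_prior + w_data)
post_mean = post_var * (w_prior * prior_mean + w_data * est)
post_sd = math.sqrt(post_var)

# Probability the true effect is positive, under the posterior.
p_positive = 0.5 * (1 + math.erf(post_mean / (post_sd * math.sqrt(2))))
print(f"posterior: {post_mean:.2f} +/- {post_sd:.2f}; P(effect > 0) = {p_positive:.2f}")
```

The posterior mean is pulled toward the prior, and instead of a binary significance verdict, the output is a probability statement about the range of possible effects.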
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
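The precision gain at stake can be shown with a back-of-the-envelope calculation using the standard two-level design formula for a school-randomized trial. The numbers of schools, students, the intraclass correlation, and the pretest R² below are hypothetical, not the paper’s estimates.

```python
import math

# Minimum detectable effect size (MDES) for a two-level school-randomized
# design: impact variance is proportional to rho*(1 - R^2) + (1 - rho)/n
# per cluster, where rho is the intraclass correlation (ICC).
J, n, rho = 40, 60, 0.15               # schools, students per school, ICC
M = 2.8                                 # approx. multiplier for 80% power, alpha = .05

def mdes(r2_between):
    # r2_between: share of school-level variance explained by the pretest.
    var = (4 / J) * (rho * (1 - r2_between) + (1 - rho) / n)
    return M * math.sqrt(var)

print(f"MDES, no covariate:        {mdes(0.0):.3f}")
print(f"MDES, pretest R^2 = 0.75:  {mdes(0.75):.3f}")
```

A school-level pretest that explains much of the between-school variance shrinks the MDES substantially, which is why the paper focuses on empirical values of these R² terms for achievement outcomes.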