New Journal Article Examines the Likely Generalizability of Educational Treatment-Effect Estimates from Regression Discontinuity Designs

The forthcoming issue of the Journal of Research on Educational Effectiveness features an article, “Using Data from Randomized Trials to Assess the Likely Generalizability of Educational Treatment-Effect Estimates from Regression Discontinuity Designs,” by MDRC’s Howard S. Bloom, Andrew Bell, and Kayla Reiman.

In recent years, the regression discontinuity design (RDD) has gained widespread recognition as a quasi-experimental method that can produce valid causal estimates of treatment effects in a variety of fields. Although various, sometimes complex, estimation methods have been used to implement RDDs, their logic is straightforward and intuitively appealing, and there is often good reason to expect a well-implemented RDD to approximate a localized randomized trial. (See here for a brief description of RDDs by MDRC’s Pei Zhu.) However, there is widespread concern about generalizing treatment-effect estimates from RDDs to broader, more policy-relevant subpopulations. This is because RDDs apply to situations in which treatment assignment is based solely on whether candidates exceed (or fall below) a specified threshold value of a quantitative rating (for instance, an academic pretest score in education). Consequently, in theory, RDD treatment-effect estimates apply only to candidates with ratings at the RDD threshold value, and to date there is limited guidance for making judgments in practice about how far from this threshold one can generalize RDD findings.
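To make the assignment rule and the localized nature of the estimate concrete, here is a minimal sketch in Python of the basic RDD setup just described. The data, cutoff, and bandwidth are all hypothetical, and the local linear estimator shown is one common choice for illustration, not the estimation method used in the article:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: students with a pretest score below a cutoff
# receive the treatment, mirroring the assignment rule described above.
rng = np.random.default_rng(0)
n, cutoff, bandwidth = 2000, 50.0, 10.0

pretest = rng.uniform(0, 100, n)
treated = (pretest < cutoff).astype(float)   # deterministic assignment rule
posttest = 0.5 * pretest + 4.0 * treated + rng.normal(0, 5, n)  # true effect = 4

# Local linear regression within a bandwidth around the cutoff,
# with separate slopes on each side of the threshold.
window = np.abs(pretest - cutoff) <= bandwidth
centered = pretest[window] - cutoff
X = sm.add_constant(
    np.column_stack([treated[window], centered, treated[window] * centered])
)
fit = sm.OLS(posttest[window], X).fit()
print(f"Estimated effect at the cutoff: {fit.params[1]:.2f}")
```

Because only observations near the cutoff identify the effect, the estimate formally applies to students with ratings at the threshold, which is exactly the generalizability question the article takes up.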

To help inform these judgments for education research in particular, and for research in other policy areas more generally, the new article explores how treatment effects for students in randomized trials of six major educational interventions — ranging from preschool to high school — vary systematically with students’ pretest scores. The article: (1) reflects on the factors that can theoretically limit RDD generalizability, (2) reviews recent approaches for assessing and enhancing this generalizability, and (3) presents and interprets empirical findings about the likely generalizability of RDD treatment-effect estimates for educational interventions to which students are assigned based on an academic pretest score. These findings suggest that, in practice, RDD treatment-effect estimates can be considerably more generalizable than their theoretical properties might suggest.
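The empirical strategy rests on a simple idea: in a randomized trial, one can estimate directly how treatment effects vary with the pretest score. The following sketch is purely illustrative (the data and the modest linear drift in the effect are invented, not taken from the article); it shows how a treatment-by-pretest interaction in an ordinary regression captures such variation:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical randomized trial in which the treatment effect
# drifts modestly with the pretest score.
rng = np.random.default_rng(1)
n = 2000
pretest = rng.uniform(0, 100, n)
treated = rng.integers(0, 2, n).astype(float)        # random assignment

posttest = (
    0.5 * pretest
    + (3.0 + 0.02 * (pretest - 50)) * treated        # effect varies with pretest
    + rng.normal(0, 5, n)
)

# Regress the outcome on treatment, centered pretest, and their interaction;
# a near-zero interaction would indicate effects that generalize away
# from any given pretest value.
centered = pretest - pretest.mean()
X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
fit = sm.OLS(posttest, X).fit()
print(f"Average effect: {fit.params[1]:.2f}")
print(f"Effect change per pretest point: {fit.params[3]:.3f} "
      f"(p = {fit.pvalues[3]:.3f})")
```

If effects estimated this way vary only weakly with pretest scores across real trials, that is evidence that RDD estimates anchored at a pretest threshold may travel farther from the cutoff than theory alone guarantees.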

The editorial staff of the Journal of Research on Educational Effectiveness (which is published by the Society for Research on Educational Effectiveness) has generously made the article open access until May 31.