Estimating Causal Effects of Education Interventions Using a Two-Rating Regression Discontinuity Design

Lessons from a Simulation Study


By Kristin Porter, Sean F. Reardon, Fatih Unlu, Howard Bloom, and Joseph P. Robinson-Cimpian

A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the “surface” method, the “frontier” method, the “binding-score” method, and the “fuzzy instrumental variables” method. This paper uses a series of simulations to evaluate the relative performance of these four methods under a variety of data-generating models. Focusing on a two-rating RDD (2RRDD), we compare the methods in terms of their bias, precision, and mean squared error when implemented as they most likely would be in practice, using optimal bandwidth selection. We also apply the lessons learned from the simulations to a real-world example that uses data from a study of an English learner reclassification policy. Overall, this paper contributes to the literature on MRRDDs by making concrete recommendations for choosing among MRRDD estimation methods, for implementing any chosen method using local linear regression, and for providing accurate statistical inferences.
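As a rough illustration of one of the approaches discussed in the paper, the sketch below shows how the binding-score method might be implemented for a 2RRDD using local linear regression. It assumes treatment is assigned when both centered ratings are at or above their cutoffs, so the binding score is the minimum of the two centered ratings; the function name, the triangular kernel, and the fixed bandwidth are illustrative choices rather than the authors' implementation (in practice, the bandwidth would come from an optimal bandwidth selector).

```python
import numpy as np

def binding_score_2rrdd(r1, r2, c1, c2, y, bandwidth):
    """Illustrative sketch of the binding-score method for a 2RRDD.

    Assumes treatment requires BOTH ratings to meet their cutoffs, so the
    binding score is the minimum of the two centered ratings. Estimates the
    effect at the binding-score cutoff with a triangular-kernel local linear
    regression, as in a single-rating RDD.
    """
    # Collapse the two ratings into a single "binding" score.
    score = np.minimum(r1 - c1, r2 - c2)
    treated = (score >= 0).astype(float)

    # Keep observations within the (assumed) bandwidth of the cutoff at zero.
    in_bw = np.abs(score) <= bandwidth
    s, t, yy = score[in_bw], treated[in_bw], y[in_bw]

    # Triangular kernel weights.
    w = 1.0 - np.abs(s) / bandwidth

    # Weighted local linear regression: y ~ 1 + T + score + T*score.
    X = np.column_stack([np.ones_like(s), t, s, t * s])
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ yy)

    # The coefficient on the treatment indicator is the effect at the cutoff.
    return beta[1]

# Illustrative use with simulated data (true effect at the frontier = 0.4).
rng = np.random.default_rng(0)
n = 5000
r1, r2 = rng.normal(size=n), rng.normal(size=n)
treated = np.minimum(r1, r2) >= 0
y = 0.5 * r1 + 0.5 * r2 + 0.4 * treated + rng.normal(scale=0.5, size=n)
print(binding_score_2rrdd(r1, r2, 0.0, 0.0, y, bandwidth=0.5))
```

The other three methods differ mainly in how they use the two ratings: the surface and frontier methods model the outcome over both ratings rather than collapsing them, and the fuzzy instrumental variables method treats one frontier's assignment as an instrument; this sketch covers only the binding-score case.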

Document Details

Publication Type
Methodological Publication
Date
November 2014
Citation
Porter, Kristin, Sean Reardon, Fatih Unlu, Howard Bloom, and Joseph Robinson-Cimpian. 2014. “Estimating Causal Effects of Education Interventions Using a Two-Rating Regression Discontinuity Design: Lessons from a Simulation Study.” New York: MDRC.