MDRC Researchers Publish on Variation in Multisite Trials

Over the last two decades, randomized controlled trials of educational and workforce development programs have provided a wealth of information on the average effectiveness of these programs. During that time, researchers have also come to appreciate that program effects may vary from one program location to another.

But how much do program effects actually vary? In an article in the Journal of Research on Educational Effectiveness, Mike Weiss, Howard Bloom, Himani Gupta, and Daniel Cullinan from MDRC and Natalya Verbitsky-Savitz and Alma Vigil from Mathematica Policy Research set out to answer that question. The article draws on data from 16 of the largest randomized controlled trials in education and workforce development to examine how much the effects of interventions vary across sites (schools, for example). In roughly a third of the cases, the answer is “not much at all”; in another third, the variation is substantial, including one case in which being part of the intervention appears advantageous at some sites and not at others. The article also explores hypotheses about which features of interventions and studies are associated with high and low variability in program effects. The article can be downloaded from the journal’s website.
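For readers curious about how cross-site variation in program effects is typically quantified, a common approach is a multilevel model with a site-level random coefficient on the treatment indicator; the standard deviation of that coefficient summarizes how much impacts differ across sites. The sketch below is a generic illustration with simulated data and hypothetical column names (`site`, `treatment`, `outcome`), not the estimation procedure used in the article.

```python
# Minimal sketch: estimating cross-site variation in program impacts with a
# multilevel (mixed-effects) model. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a multisite trial: 40 sites, 100 people per site, a true average
# impact of 0.20, and a cross-site impact standard deviation of 0.15.
n_sites, n_per_site = 40, 100
site_impact = rng.normal(0.20, 0.15, size=n_sites)
rows = []
for s in range(n_sites):
    treatment = rng.integers(0, 2, size=n_per_site)
    outcome = 0.5 + site_impact[s] * treatment + rng.normal(0, 1, size=n_per_site)
    rows.append(pd.DataFrame({"site": s, "treatment": treatment, "outcome": outcome}))
data = pd.concat(rows, ignore_index=True)

# Random intercept and random treatment slope by site: the estimated variance
# of the treatment slope captures how much impacts vary from site to site.
model = smf.mixedlm("outcome ~ treatment", data, groups=data["site"],
                    re_formula="~treatment")
result = model.fit()

avg_impact = result.fe_params["treatment"]
# cov_re holds the random-effects covariance matrix; the second diagonal
# element is the variance of the site-level treatment slope.
impact_sd = np.sqrt(result.cov_re.values[1, 1])
print(f"Estimated average impact: {avg_impact:.3f}")
print(f"Estimated cross-site impact SD: {impact_sd:.3f}")
```

An estimated cross-site impact standard deviation near zero corresponds to the “not much at all” cases described above, while a large one (relative to the average impact) can mean the program helps at some sites and not at others.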