Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Procedures that adjust for this risk, however, can reduce statistical power: the probability of detecting effects that do exist. This post in MDRC’s Reflections on Methodology series discusses how to estimate power when making such adjustments, as well as alternative definitions of power.
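To make the trade-off concrete, the simulation below (a minimal sketch, not the paper's method; the sample size, effect size, and number of outcomes are all assumptions) compares power to detect a true effect on one outcome when testing at a conventional level versus applying a Bonferroni adjustment across five outcomes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, effect, reps = 100, 5, 0.3, 2000  # per-arm n, outcomes, true effect (SD units), replications

unadjusted_hits = 0
adjusted_hits = 0
for _ in range(reps):
    treat = rng.normal(effect, 1, (n, k))       # k outcomes, each with a true effect
    control = rng.normal(0, 1, (n, k))
    p = stats.ttest_ind(treat, control).pvalue  # one p-value per outcome
    unadjusted_hits += p[0] < 0.05              # detect the first outcome at alpha = .05
    adjusted_hits += p[0] < 0.05 / k            # Bonferroni: test each outcome at alpha / k

unadjusted_power = unadjusted_hits / reps
bonferroni_power = adjusted_hits / reps
print(f"unadjusted power: {unadjusted_power:.2f}")
print(f"Bonferroni power: {bonferroni_power:.2f}")
```

The adjusted test requires a much smaller p-value per outcome, so its power is noticeably lower for the same sample size and effect.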
To improve outcomes among high-interest borrowers, policymakers need to understand what drives their borrowing. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
Lessons from a Simulation Study
This paper makes valuable contributions to the literature on multiple-rating regression discontinuity designs (MRRDDs). It makes concrete recommendations for choosing among existing MRRDD estimation methods, for implementing any chosen method using local linear regression, and for providing accurate statistical inferences.
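The local linear step that these estimators build on can be sketched for the simpler single-rating case (a hedged illustration, not the paper's simulation; the cutoff, bandwidth, and data-generating process are assumptions): fit a line on each side of the cutoff within a bandwidth, then take the gap between the two fitted values at the cutoff as the impact estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, cutoff, bandwidth, effect = 2000, 0.0, 0.5, 0.4

rating = rng.uniform(-1, 1, n)
treated = rating >= cutoff              # sharp RD: treatment determined by the rating
y = 1.0 + 0.8 * rating + effect * treated + rng.normal(0, 0.3, n)

def fit_at_cutoff(mask):
    """Fit a line to the observations in `mask` and predict y at the cutoff."""
    slope, intercept = np.polyfit(rating[mask], y[mask], 1)
    return intercept + slope * cutoff

below = (rating < cutoff) & (rating > cutoff - bandwidth)
above = (rating >= cutoff) & (rating < cutoff + bandwidth)
impact = fit_at_cutoff(above) - fit_at_cutoff(below)
print(f"estimated impact at the cutoff: {impact:.2f}")
```

MRRDD methods extend this idea to two or more rating variables, which is where the choice among estimation approaches becomes consequential.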
Design Options for an Evaluation of Head Start Coaching
Using a study of coaching in Head Start as an example, this report reviews potential experimental design options that get inside the “black box” of social interventions by estimating the effects of individual components. It concludes that factorial designs are usually most appropriate.
This report provides recommendations for an evaluation of coaching that may affect teacher and classroom practices in Head Start and other early childhood settings — including recommendations about the research questions; the design of the impact study, implementation research, and cost analysis; and the logistical challenges of carrying out the design.
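The appeal of a factorial design is that each component's effect can be estimated from the full sample. The sketch below (an illustration under assumed parameters, not the report's design; the component names and effect sizes are hypothetical) simulates a 2x2 factorial experiment and recovers each component's main effect by averaging over the other factor:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000

# 2x2 factorial: each participant is independently assigned to each component.
coaching = rng.integers(0, 2, n)
training = rng.integers(0, 2, n)
y = 0.2 * coaching + 0.1 * training + rng.normal(0, 1, n)

# Main effect of each component: compare outcomes with vs. without it,
# averaging over the other factor.
coaching_effect = y[coaching == 1].mean() - y[coaching == 0].mean()
training_effect = y[training == 1].mean() - y[training == 0].mean()
print(f"coaching main effect: {coaching_effect:.2f}")  # true value: 0.2
print(f"training main effect: {training_effect:.2f}")  # true value: 0.1
```

Because every participant contributes to both contrasts, each main effect is estimated with the precision of a full two-arm trial, which is why factorial designs are often the efficient choice for getting inside the "black box."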
In many evaluations, individuals are randomly assigned to experimental arms and then grouped to receive services. In this situation, accounting for the grouping may be necessary when estimating the standard error of the impact estimate. This paper demonstrates that nonrandom sorting of individuals into groups can bias the standard errors reported by common estimation approaches.
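The basic problem can be illustrated with a small simulation (a sketch under assumed parameters, not the paper's analysis): when members of a service group share a common "group effect," a standard error that ignores the grouping understates the true variability of the impact estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, group_size, reps = 20, 10, 2000
group_sd = 0.5  # SD of the shared group effect (assumed)

estimates, naive_ses = [], []
for _ in range(reps):
    # Treated individuals are served in groups; everyone in a group
    # shares a common group effect. Controls receive no group services.
    group_effects = np.repeat(rng.normal(0, group_sd, n_groups), group_size)
    y_treat = group_effects + rng.normal(0, 1, n_groups * group_size)
    y_control = rng.normal(0, 1, n_groups * group_size)
    estimates.append(y_treat.mean() - y_control.mean())
    # Naive standard error treats all observations as independent.
    naive_ses.append(np.sqrt(y_treat.var(ddof=1) / len(y_treat)
                             + y_control.var(ddof=1) / len(y_control)))

true_sd = np.std(estimates)       # actual variability of the impact estimate
avg_naive_se = np.mean(naive_ses)
print(f"true SD of estimates: {true_sd:.3f}")
print(f"average naive SE:     {avg_naive_se:.3f}")
```

The naive standard error is too small, so tests based on it would overstate the statistical significance of the estimated impact.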
In a speech before the Association for Public Policy Analysis and Management Conference on November 7, 2008, Judith M. Gueron, President Emerita and Scholar in Residence at MDRC, accepted the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation.
This MDRC working paper on research methodology explores two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions.
This MDRC working paper on research methodology provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of interventions on children.