Assessing an intervention’s effects on multiple outcomes increases the risk of false positives. Procedures that adjust for this risk can reduce statistical power, the probability of detecting effects that do exist. This post in MDRC’s Reflections on Methodology series discusses how to estimate power when such adjustments are made, as well as alternative definitions of power.
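As a rough illustration of that trade-off (not drawn from the post itself), the sketch below uses a Bonferroni correction, the simplest multiplicity adjustment, with a normal approximation to show how testing five outcomes lowers the power of each individual test. The effect size and number of outcomes are invented for the example.

```python
from statistics import NormalDist

def power_two_sided(effect_size_se, alpha):
    """Approximate power of a two-sided z-test for a true effect of
    `effect_size_se` standard errors, ignoring the negligible chance
    of rejecting in the wrong direction."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - effect_size_se)

effect = 2.8      # hypothetical true effect, in standard-error units
alpha = 0.05      # nominal significance level for a single test
n_outcomes = 5    # number of outcomes being tested

unadjusted = power_two_sided(effect, alpha)
# Bonferroni: divide the significance level across the five tests.
bonferroni = power_two_sided(effect, alpha / n_outcomes)

print(f"power at alpha = 0.05:     {unadjusted:.2f}")   # about 0.80
print(f"power at alpha = 0.05/5:   {bonferroni:.2f}")   # about 0.59
```

The same true effect that a single test would detect about 80 percent of the time is detected only about 59 percent of the time once the significance level is split across five outcomes, which is the power loss the post addresses.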
To improve outcomes among high-interest borrowers, policymakers need to understand what drives their use of such loans. This second post in MDRC’s Reflections on Methodology series discusses how a data discovery process revealed clusters of borrowers who differed greatly in the kinds of loans and lenders they used and in their loan outcomes.
The SIMPLER framework was developed for the Behavioral Interventions to Advance Self-Sufficiency (BIAS) project, the first major effort to apply behavioral insights to human services programs in the United States. SIMPLER summarizes several key behavioral concepts that can guide practitioners interested in using behavioral insights to enhance service delivery.
Machine learning algorithms, when combined with the contextual knowledge of researchers and practitioners, offer service providers nuanced estimates of risk and opportunities to refine their efforts. The first post of a new series, Reflections on Methodology, discusses how MDRC helps organizations make the most of predictive modeling tools.
Encouraging Additional Summer Enrollment (EASE) aims to increase summer enrollment rates among low-income community college students using insights from behavioral science. This infographic describes some of the benefits of summer enrollment, reasons why students may not enroll in summer, and interventions the EASE team designed to address low enrollment rates.
MDRC launches the first of a five-part web series from the Chicago Community Networks study — a mixed-methods initiative that combines formal social network analysis with in-depth field surveys of community practitioners. It measures how community organizations collaborate on local improvement projects and how they come together to shape public policy.
How a District Might Find a Program That Meets Local Needs
For school districts striving to meet both ESSA requirements and specific educational needs, this infographic shows how evidence can guide decisions. The evaluation of Reading Partners, a one-on-one volunteer tutoring program, serves as an example.
This paper examines the properties of two nonexperimental study designs that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-differences (DD) design. The paper looks at the internal validity and precision of these two designs, using the example of the federal Reading First program as implemented in a midwestern state.
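To illustrate the mechanics of the two designs (this is not the paper's analysis), the sketch below computes both estimates on hypothetical yearly outcome data; all numbers are invented. CITS compares each group's post-period outcome with its own projected baseline trend, while a simple DD compares only pre-to-post level changes.

```python
def linear_trend(xs, ys):
    """Least-squares slope and intercept for a baseline trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def cits_deviation(pre_years, pre_outcomes, post_year, post_outcome):
    """CITS building block: deviation of the observed post-period outcome
    from the group's own baseline trend projected forward."""
    slope, intercept = linear_trend(pre_years, pre_outcomes)
    return post_outcome - (slope * post_year + intercept)

# Hypothetical yearly mean outcomes (all numbers invented).
years_pre = [0, 1, 2, 3]
treated_pre = [40.0, 41.0, 42.0, 43.0]      # baseline trend: +1.0 per year
comparison_pre = [41.0, 41.5, 42.0, 42.5]   # baseline trend: +0.5 per year
treated_post, comparison_post = 49.0, 44.0  # observed in year 4

# CITS: difference the two groups' deviations from their projected trends.
cits_estimate = (cits_deviation(years_pre, treated_pre, 4, treated_post)
                 - cits_deviation(years_pre, comparison_pre, 4, comparison_post))

# DD: difference the two groups' simple level changes, here using
# only the final pre-period year as the baseline.
dd_estimate = ((treated_post - treated_pre[-1])
               - (comparison_post - comparison_pre[-1]))

print(cits_estimate, dd_estimate)  # 4.0 4.5
```

The two estimates differ here because the groups had different baseline trends, which CITS accounts for and a two-period DD does not; comparing how the designs behave under such conditions is the kind of question the paper examines.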
This paper presents a conceptual framework for designing and interpreting research on variation in program effects. The framework categorizes the sources of program effect variation and helps researchers integrate the study of variation in program effectiveness and program implementation.
This paper provides a detailed discussion of the theory and practice of modern regression discontinuity. It describes how regression discontinuity analysis can provide valid and reliable estimates both of causal effects in general and of a particular treatment's effects on particular persons or groups.
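As a hedged sketch of the core idea (not the paper's method), a sharp regression discontinuity estimate can be formed by fitting a separate line on each side of the assignment cutoff and taking the jump between the two predictions at the cutoff. The data below are invented and noiseless so the fitted jump equals the built-in treatment effect.

```python
def ols_fit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def rd_estimate(scores, outcomes, cutoff, bandwidth):
    """Sharp RD: fit a line on each side of the cutoff within the
    bandwidth, then take the jump between predictions at the cutoff."""
    left = [(s, y) for s, y in zip(scores, outcomes) if cutoff - bandwidth <= s < cutoff]
    right = [(s, y) for s, y in zip(scores, outcomes) if cutoff <= s <= cutoff + bandwidth]
    b_l, a_l = ols_fit([s for s, _ in left], [y for _, y in left])
    b_r, a_r = ols_fit([s for s, _ in right], [y for _, y in right])
    return (b_r * cutoff + a_r) - (b_l * cutoff + a_l)

# Hypothetical data: units scoring below 0 are untreated, units at or
# above 0 are treated, and treatment shifts the outcome up by 3.
scores = [i / 2 for i in range(-10, 11)]
outcomes = [2.0 + 0.5 * s + (3.0 if s >= 0 else 0.0) for s in scores]

effect = rd_estimate(scores, outcomes, cutoff=0.0, bandwidth=5.0)
print(effect)  # recovers the jump of 3.0 at the cutoff
```

In practice the outcome is noisy and the bandwidth choice matters; those complications, and the conditions under which the jump identifies a causal effect, are the subject of the paper.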