The Center for Applied Behavioral Science (CABS) combines MDRC’s decades of experience tackling social policy issues with insights from behavioral science. This graphic explains CABS’s approach to solving problems.
The SIMPLER framework was developed for the Behavioral Interventions to Advance Self-Sufficiency (BIAS) project — the first major effort to apply behavioral insights to human services programs in the United States. SIMPLER summarizes several key behavioral concepts that can guide practitioners seeking to use those insights to enhance service delivery.
MDRC launches the first of a five-part web series from the Chicago Community Networks study — a mixed-methods initiative that combines formal social network analysis with in-depth field surveys of community practitioners. It measures how community organizations collaborate on local improvement projects and how they come together to shape public policy.
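The study’s own network measures and data are not reproduced here, but a minimal sketch of the kind of social network analysis described above, using the networkx library and an entirely hypothetical set of collaboration ties, might look like the following.

import networkx as nx

# Hypothetical collaboration ties: each pair records that two neighborhood
# organizations worked together on a local improvement project.
ties = [
    ("Housing Corp", "Health Center"),
    ("Housing Corp", "Youth Services"),
    ("Health Center", "Youth Services"),
    ("Youth Services", "Arts Council"),
]

G = nx.Graph(ties)

# Two common descriptive measures: how densely connected the network is,
# and which organizations are most central to it.
print("Density:", nx.density(G))
print("Degree centrality:", nx.degree_centrality(G))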
WorkAdvance connects low-income job seekers to high-demand sectors that offer quality jobs with strong career pathways. This infographic describes the program model and its implementation in four locations and presents encouraging evidence of WorkAdvance’s positive effects on earnings.
Jobs-Plus – a “place-based” workforce development model proven to help public housing residents find employment – is about to be replicated across the country. This infographic depicts the program model, its effects on earnings, and the history of its development over the past 20 years.
Using Bayesian methods, an alternative to classical statistics, this paper reanalyzes results from three published studies of interventions to increase employment and reduce welfare dependency. The analysis formally incorporates prior beliefs about the interventions, characterizes the results as a distribution of possible effects, and generally confirms the earlier published findings.
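The paper’s own models and data are not reproduced here; the sketch below shows only the general idea, a conjugate normal-normal update in Python with purely illustrative numbers, in which a prior belief about an intervention’s effect is combined with a hypothetical impact estimate and its standard error.

from scipy import stats

# Prior belief about the program's impact on annual earnings (in dollars):
# centered on zero effect, with wide uncertainty. Illustrative values only.
prior_mean, prior_sd = 0.0, 500.0

# Hypothetical impact estimate and standard error from an evaluation.
impact_est, impact_se = 300.0, 150.0

# Conjugate normal-normal update: the posterior mean is a precision-weighted
# average of the prior mean and the estimated impact.
prior_prec = 1.0 / prior_sd ** 2
data_prec = 1.0 / impact_se ** 2
post_var = 1.0 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * impact_est)

# Characterize the result as a distribution of possible effects.
posterior = stats.norm(post_mean, post_var ** 0.5)
print(f"Posterior mean effect: {post_mean:.0f}")
print(f"Probability the effect is positive: {posterior.sf(0):.2f}")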
An Empirical Assessment Based on Four Recent Evaluations
This reference report, prepared for the National Center for Education Evaluation and Regional Assistance of the Institute of Education Sciences (IES), uses data from four recent IES-funded experimental design studies that measured student achievement using both state tests and a study-administered test.
In some experimental evaluations of classroom- or school-level interventions, random assignment is conducted at the student level while the program is delivered at the classroom or school level. This paper clarifies the correct causal interpretation of “program impacts” under this design and discusses its implications and limitations. A real example illustrates the paper’s key points.
This paper illustrates how to design an experimental sample for measuring the effects of educational programs when whole schools are randomized to program and control groups. It addresses such questions as how many schools should be randomized, how many students per school are needed, and what mix of program and control schools is best.
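As a rough illustration of the sample-design tradeoffs the paper addresses (and not the paper’s own formulas or parameter values), the following Python sketch applies a standard minimum-detectable-effect-size approximation for a two-level design in which whole schools are randomized; the intraclass correlation, allocation share, and school counts are all assumed values.

import math

def mdes(n_schools, students_per_school, rho=0.15, p_treated=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size (in standard deviation units)
    for a two-level design with no covariates. The multiplier of roughly 2.8
    corresponds to 80 percent power with a two-tailed 5 percent test when the
    number of schools is not too small."""
    j, n, p = n_schools, students_per_school, p_treated
    variance = rho / (p * (1 - p) * j) + (1 - rho) / (p * (1 - p) * j * n)
    return multiplier * math.sqrt(variance)

# How precision improves as more schools are randomized, holding the number
# of students per school fixed (all parameter values are assumptions).
for schools in (20, 40, 60):
    print(schools, "schools:", round(mdes(schools, students_per_school=60), 3))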