No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention's effects. This working paper argues that effect sizes should be interpreted using empirical benchmarks, and it presents three types of benchmarks in the context of education research.
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
Final Report on the Center for Employment Training Replication Sites
The Center for Employment Training (CET) in San Jose, California, produced large, positive employment and earnings effects for out-of-school youth in the late 1980s. However, in this replication study, even the highest-fidelity sites did not increase employment or earnings for youth over the 54-month follow-up period, despite short-term positive effects for women.
Evidence from a Sample of Recent CET Applicants
This working paper examines employment and earnings over a four-year period for a group of disadvantaged out-of-school youth who entered the Evaluation of the Center for Employment Training (CET) Replication Sites between 1995 and 1999. It assesses the importance of three key factors as barriers to employment: lack of a high school diploma, having children, and having an arrest record.
Statistical Implications for the Evaluation of Education Programs
Final Report on Ohio’s Welfare Initiative to Improve School Attendance Among Teenage Parents
Final Report on a Program for School Dropouts
This report, which completes the JOBSTART Demonstration, addresses issues closely linked to the nation's ongoing debate about how best to improve the employment and earnings prospects of low-skilled, economically disadvantaged young people who would otherwise remain outside the economic mainstream.