Beyond measuring average program impacts, it is important to understand how impacts vary. This paper gives a broad overview of the conceptual and statistical issues involved in using multisite randomized trials to learn about and from variation in program effects across individuals, across subgroups of individuals, and across program sites.
No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention’s effects. This working paper argues that effect sizes should be interpreted using empirical benchmarks, and it presents three types of benchmarks in the context of education research.
Planning for the Jobs-Plus Demonstration
Statistical Implications for the Evaluation of Education Programs