Semistructured interviews involve an interviewer asking a set of prespecified, open-ended questions, with follow-up questions based on the interviewee's responses. This Reflections on Methodology post describes a semistructured interview protocol recently used to explore how children who experience poverty perceive their situations, their economic status, and public benefit programs.
A Primer for Researchers Working with Education Data
Predictive modeling estimates individuals’ probabilities of future outcomes by building and testing a model using data on similar individuals whose outcomes are already known. The method offers benefits for continuous improvement efforts and efficient allocation of resources. This paper explains MDRC’s framework for using predictive modeling in education.
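As a rough illustration of the approach described above, the sketch below fits a model on a simulated cohort of students whose outcomes are already known and then scores a new student whose outcome is not. The features, data, and choice of logistic regression are hypothetical and are not drawn from MDRC's framework.

```python
# Minimal sketch of predictive modeling: fit a model on past students whose
# outcomes are known, check it on held-out data, then score a new student.
# Feature names and data are hypothetical, not from the MDRC framework.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical historical cohort: attendance rate, GPA, prior suspensions.
n = 2000
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),   # attendance rate
    rng.uniform(0.0, 4.0, n),   # GPA
    rng.poisson(0.3, n),        # prior suspensions
])
# Known outcome for the historical cohort: 1 = did not graduate on time.
logit = 3.0 - 2.5 * X[:, 0] - 0.8 * X[:, 1] + 0.6 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Hold out data to test the model before using it prospectively.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score a current student whose outcome is not yet known.
new_student = [[0.82, 2.4, 1]]
print("predicted risk:", model.predict_proba(new_student)[0, 1])
```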
This report provides recommendations for an evaluation of coaching that may affect teacher and classroom practices in Head Start and other early childhood settings. It covers the research questions; the design of the impact study, implementation research, and cost analysis; and the logistical challenges of carrying out the design.
Design Options for an Evaluation of Head Start Coaching
Using a study of coaching in Head Start as an example, this report reviews potential experimental design options that get inside the “black box” of social interventions by estimating the effects of individual components. It concludes that factorial designs are usually most appropriate.
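To make the logic concrete, here is a minimal sketch of a two-by-two factorial design of the kind the report discusses: units are independently randomized to each of two program components, so both components' main effects can be estimated from a single sample. The components and effect sizes below are invented for illustration only.

```python
# Sketch of a 2x2 factorial experiment: each classroom is randomized to every
# combination of two program components, so each component's main effect can
# be estimated from the same sample. Components and effects are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Independent random assignment to component A (e.g., coaching) and
# component B (e.g., a curriculum supplement): four cells in all.
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)

# Simulated outcome: component A adds 0.20 SD, component B adds 0.10 SD.
y = 0.20 * a + 0.10 * b + rng.normal(0, 1, n)

# Main effect of each component, averaging over levels of the other.
print("main effect of A:", y[a == 1].mean() - y[a == 0].mean())
print("main effect of B:", y[b == 1].mean() - y[b == 0].mean())
```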
An Empirical Assessment Based on Four Recent Evaluations
This reference report, prepared for the National Center for Education Evaluation and Regional Assistance of the Institute of Education Sciences (IES), uses data from four recent IES-funded experimental studies that measured student achievement with both state tests and a study-administered test.
This paper provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of educational interventions.
A related MDRC working paper on research methodology offers similar practical guidance for studies that randomize groups to measure the impacts of interventions on children.
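A standard calculation from the group-randomization literature illustrates why such guidance matters: randomizing intact groups rather than individuals inflates the variance of impact estimates by a design effect that depends on group size and the intraclass correlation. The numbers in the sketch below are illustrative, not figures from either paper.

```python
# Randomizing groups inflates the variance of impact estimates by the
# design effect 1 + (m - 1) * rho, where m is group size and rho is the
# intraclass correlation (ICC). Numbers here are illustrative only.

def design_effect(m: int, rho: float) -> float:
    """Variance inflation from randomizing groups of size m with ICC rho."""
    return 1 + (m - 1) * rho

def effective_sample_size(n_groups: int, m: int, rho: float) -> float:
    """Number of independently randomized individuals the design is worth."""
    return n_groups * m / design_effect(m, rho)

# 40 schools of 60 students each, with an ICC of 0.15:
print(design_effect(60, 0.15))              # 9.85
print(effective_sample_size(40, 60, 0.15))  # ~244 of the 2,400 students
```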
No universal guideline exists for judging the practical importance of a standardized effect size, a measure of the magnitude of an intervention’s effects. This working paper argues that effect sizes should be interpreted using empirical benchmarks — and presents three types in the context of education research.
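For concreteness, a standardized effect size is the program-control difference in mean outcomes expressed in standard deviation units, as in the illustrative calculation below (the numbers are made up, not taken from the paper):

```python
# The standardized effect size mentioned above: the program-control
# difference in mean outcomes divided by the outcome's standard deviation.
# Values below are invented for illustration.

def effect_size(mean_treatment: float, mean_control: float, sd: float) -> float:
    """Standardized mean difference (an effect size in SD units)."""
    return (mean_treatment - mean_control) / sd

# A 5-point gain on a test with a 25-point standard deviation is 0.20 SD;
# the paper argues such a value should be judged against empirical
# benchmarks (e.g., typical year-to-year growth), not a fixed rule of thumb.
print(effect_size(505, 500, 25))  # 0.2
```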
Empirical Guidance for Studies That Randomize Schools to Measure the Impacts of Educational Interventions
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement.
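A back-of-the-envelope sketch shows why pretests help: if a baseline covariate explains a share R^2 of the variation in outcomes, the standard error of the impact estimate shrinks by roughly a factor of sqrt(1 - R^2). The R^2 values below are illustrative, not estimates from the paper.

```python
# Rough illustration of the precision gain from a baseline covariate:
# residual variance falls to (1 - R^2) of its original value, so the
# standard error shrinks by about sqrt(1 - R^2). R^2 values are made up.
import math

def se_multiplier(r_squared: float) -> float:
    """Approximate factor by which a covariate with R^2 shrinks the SE."""
    return math.sqrt(1 - r_squared)

for r2 in (0.0, 0.5, 0.8):
    print(f"R^2 = {r2:.1f}: standard error x {se_multiplier(r2):.2f}")

# A pretest explaining 80% of the variation cuts the standard error by
# more than half, which can greatly reduce the number of schools required.
```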
Relying on 427 classroom observations conducted over a three-year period, this study traces changes in teachers’ instructional practices in the First Things First schools.