How Is Random Assignment Like a Frying Pan?

This post is one in a series highlighting MDRC’s methodological work. Contributors discuss the refinement and practical use of research methods being employed across our organization.

Andrew Leigh’s irresistibly readable new book Randomistas takes readers on a rollicking tour of disciplines in which randomized controlled trials (RCTs) have revolutionized the way we build knowledge. From medicine to social policy to crime control, RCTs have helped to debunk myths and improve the lives of millions. We were proud to see that MDRC and its former president, Judith Gueron, figure prominently in the chapter on “Pioneers of Randomisation.”

Leigh takes on — and mostly demolishes — the most commonly repeated myths about random assignment: It’s unethical! (It totally depends on the situation and the specific way the study is designed.) It’s too expensive! (The costs of studies are mostly driven by the type of data that is collected, not by the research design.) We hear these arguments all the time.

The fact is that, in situations in which random assignment is appropriate, there is no better way to assess what difference a program makes. But random assignment is a research design — no more, no less. It is an incredibly powerful tool for answering certain kinds of questions, and less useful for answering others.
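To see that logic in miniature, here is a hypothetical sketch in Python (not MDRC analysis code, and with entirely made-up numbers) of why random assignment yields such a clean answer: because a coin flip decides who enters the program, the program and control groups are comparable on average, so the simple difference in their average outcomes estimates the program’s effect.

```python
import random
import statistics

# Hypothetical illustration only: randomly assign people to a program group or
# a control group, then estimate the program's impact as the difference in
# average outcomes between the two groups. All numbers are invented.
random.seed(42)

def simulate_outcome(in_program: bool) -> float:
    # Assume a baseline outcome (say, annual earnings in thousands of dollars)
    # plus an assumed "true" program effect of 2.0, chosen for illustration.
    baseline = random.gauss(30.0, 5.0)
    return baseline + (2.0 if in_program else 0.0)

assignments = [random.random() < 0.5 for _ in range(1000)]  # coin-flip assignment
outcomes = [simulate_outcome(a) for a in assignments]

program = [y for y, a in zip(outcomes, assignments) if a]
control = [y for y, a in zip(outcomes, assignments) if not a]

# Because assignment was random, the two groups are alike on average, so the
# difference in mean outcomes is an unbiased estimate of the program's impact.
impact = statistics.mean(program) - statistics.mean(control)
print(f"Estimated impact: {impact:.2f} (true effect in this simulation: 2.00)")
```

The point of the sketch is simply that randomization, not any statistical wizardry afterward, is what makes the comparison fair.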

When a good cook sets out to make dinner for friends, he doesn’t usually start by saying, “I want to make something that requires a nine-inch skillet.” Rather, he decides what he and his friends will want to eat and then uses whatever pots, pans, and appliances are best suited to the recipe.

Similarly, at MDRC we don’t scan the world looking for opportunities to do RCTs. We see an issue affecting the well-being of large numbers of low-income people, and then look for promising programs and policies to help alleviate the problem. We apply whatever research designs and methods make the most sense to learn whether, how, and why those programs are furthering their goals. We are very good at conducting RCTs in the real world, but we also use lots of other methods to build knowledge. We use alternative research designs to measure the impacts of programs and system reforms when random assignment is not appropriate. We combine qualitative and quantitative methods to describe the problem or to study program implementation and operations. And we use tools like predictive analytics and behavioral science to help nonprofit organizations and public systems learn about and improve their own performance.

We’re also experimenting with new ways to learn from random assignment studies. We sometimes conduct large-scale RCTs that require several years to complete, but along the way we provide other kinds of research findings and often help the implementing organizations build new capabilities that will outlast the study itself. We also use RCTs to test the short-term impacts of different operational strategies inside a program or system, and those studies may be completed in a matter of months.

To us, debating the pros and cons of random assignment in the abstract is a waste of time. There are many kinds of RCTs — and many other useful strategies for building knowledge. Our mission is to use evidence to improve the lives of low-income people, and we use whatever pots, pans, and appliances will help us do that. In an era of sluggish wages, rapid technological change, and staggering inequality, the need for creative solutions and flexible research methods is greater than ever.