How Early Implementation Research Can Inform Program Scale-Up Efforts

By Michelle S. Manno, Jennifer Miller Gaubert

Implementation research in program evaluations plays a critical role in helping researchers and practitioners understand how programs operate, why programs did or did not produce impacts, what factors influenced the staff’s ability to operate the intervention, and how staff members and participants view the program. Often, implementation research does not directly address scale-up questions — whether, when, and how effective programs can be expanded — until decision makers and evaluators are on the cusp of considering this step. Yet implementation research from the early stages of evidence building can be harnessed to inform program scale-up later on.

Building evidence of an effective program often occurs in steps: developing the model, refining and adapting it, and conducting efficacy trials in the field (see, for example, Bangser 2014). As programs appear promising, momentum builds for program expansion. Across policy domains, many terms are used to describe activities aimed at expanding the reach of evidence-based programs and practices: “replication,” “scale-up” (sometimes “horizontal” or “vertical”), “going to scale,” “dissemination,” and “scaling impact,” to name just a few. Hartmann and Linn (2008), for example, define scale-up as “expanding, replicating, adapting, and sustaining successful policies, programs or projects in geographic space and over time to reach a greater number of people.” Three dimensions are evident in many definitions of scale-up:

  • Reach. In scale-up, the goal is to increase the number of people served by a program, service, or technology. The goals for reach could be relatively small in scope (for example, increasing families served from 100 to 500) or more ambitious (expanding a program from two schools to a whole district).
  • Organization. Scale-up may require that additional service providers be trained. The providers could be organizations similar to those already providing the service, or they might be different ones with little experience in the field.
  • Geography. Scale-up may also aim to reach new geographic areas, for example by delivering services in a similar organizational setting in a new city, county, or state.

Even in evaluations of programs that are operating as pilots, demonstrations, or early efficacy tests, implementation research can lay important groundwork for informing future scale-up. Regardless of whether the program had impacts on the specified outcomes, implementation research in these early stages can help researchers

  • develop hypotheses about why the program had impacts (or why it didn’t);

  • learn what might make the program stronger if future scale-up efforts are undertaken, or whether the program should even be continued; and

  • learn what adaptations might be needed to run the program more broadly, in other contexts with other populations.

To illustrate these points, we list some relevant implementation research questions in Table 1, grouped by the categories described by Weiss, Bloom, and Brock (2013, 2014) and discussed below. Many of these questions will be familiar to program evaluators. We suggest that evaluators begin thinking about them in earlier stages of program development, with an explicit eye toward how the data could be applied in future scale-up efforts.

Treatment as planned, offered, and received. Typically, implementation research in the early stages focuses on describing rather than evaluating programs, including a program’s fidelity to the intervention or treatment, or how closely the treatment received adheres to the model (Cordray and Pion 2006). This research can inform the development of meaningful metrics for assessing fidelity in future scale-up efforts (although standards of fidelity may change to accommodate program adaptation). Questions at this stage can also help us understand which elements of the program model are most essential, a frequent concern in expansion because of cost considerations.
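As a purely illustrative sketch (our addition, not part of any study cited here), the snippet below shows one simple way a research team might turn a list of planned model components into a fidelity metric: the share of planned components each site actually delivered. The component names, sites, and data are hypothetical assumptions.

```python
# Illustrative sketch of a simple fidelity metric: the share of planned model
# components that each site actually delivered. Component names, sites, and
# data are hypothetical assumptions for illustration only.

planned_components = {
    "intake_assessment",
    "weekly_coaching",
    "referral_followup",
    "group_workshop",
}

# Hypothetical records (e.g., from staff logs or structured observations)
# of which components each site delivered.
delivered_by_site = {
    "site_a": {"intake_assessment", "weekly_coaching", "group_workshop"},
    "site_b": {"intake_assessment", "weekly_coaching", "referral_followup", "group_workshop"},
}

for site, delivered in delivered_by_site.items():
    # Fidelity here is simply the fraction of planned components observed.
    fidelity = len(delivered & planned_components) / len(planned_components)
    print(f"{site}: {fidelity:.0%} of planned components delivered")
```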

Implementation plan. In outlining how organizations will operate the program, this plan covers changes needed in staffing, training, and coaching, as well as other necessary support, such as partnerships with other organizations. Implementation research questions are likely to focus not only on these topics but also on plans for outreach and recruitment of participants and on the degree to which the program is flexible or standardized. To maximize learning for future scale-up, program evaluators should examine fidelity to the implementation plan itself (as noted in Weiss, Bloom, and Brock), how the plan was enacted, and why things seemed to work — or not (Dunst, Trivette, Masiello, and McInerney 2008).

Client characteristics. In early studies, primary questions concern the demographic characteristics and other risk factors of the populations that are recruited, enrolled, and served, and how those populations may differ from the intended target populations. As a way to inform potential scale-up efforts, research teams can also explore how the population characteristics influenced modifications to the program model or implementation plan.

External context. Investigating factors external to the implementing organization — such as funding, public policies, or social demographics — can provide insight into what systems or structures appear to support or inhibit program success. This information could be used in future iterations of the program to inform decisions about the feasibility of expansion in particular locations or to identify key policy changes that would be needed for scale-up to occur.

Organizational factors. Implementation studies can identify staffing, management, and other organizational factors that may affect program success. Staff characteristics include academic qualifications and work experience. Management and organizational factors include the presence of strong leadership, a supportive organizational environment, and organizational resources. If collected in a systematic way, this information can help highlight early operational lessons and may suggest factors to explore in future hypothesis tests.

Service contrast. Service contrast is the difference between the services received by the program group and those received by the control group. Understanding the service contrast is a key goal of implementation studies. That work can take various forms, including examining other services in the community that the control group may have access to or explicitly measuring the services received by the control group and comparing them with what the program group received. Taken together with information from the other categories, service contrast helps inform the story about why certain findings emerge. (Our colleagues Gayle Hamilton and Susan Scrivener are currently writing a framework paper that discusses these issues in more detail.)
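To make the "explicit measurement" approach concrete, the sketch below (ours, and purely hypothetical) tabulates service receipt rates for program and control group members and takes their difference. The data, service measures, and column names are assumptions introduced for illustration, not results from any study.

```python
# Illustrative sketch: estimating service contrast as the difference between
# program-group and control-group service receipt rates. Data and column
# names are hypothetical assumptions for illustration only.
import pandas as pd

# Hypothetical participant-level survey data (1 = received the service).
df = pd.DataFrame({
    "group": ["program", "program", "program", "control", "control", "control"],
    "case_management": [1, 1, 0, 1, 0, 0],
    "job_training":    [1, 0, 1, 0, 0, 1],
})

services = ["case_management", "job_training"]

# Share of each group that received each service.
receipt_rates = df.groupby("group")[services].mean()

# Service contrast: program-group rate minus control-group rate, per service.
service_contrast = receipt_rates.loc["program"] - receipt_rates.loc["control"]

print(receipt_rates)
print(service_contrast)
```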

While decisions about whether a program merits expansion are generally made once impact analyses are completed, a team can use implementation research questions to structure its research agenda to inform the scale-up phase, should the results warrant it.

Suggested citation for this post:

Manno, Michelle S., and Jennifer Miller Gaubert. 2018. “How Early Implementation Research Can Inform Program Scale-Up Efforts.” Implementation Research Incubator (blog), January. https://www.mdrc.org/publication/how-early-implementation-research-can-inform-program-scale-efforts.