Using Theories of Change to Systematically Take Stock of Program Changes During COVID-19


The disproportionately negative impact of the coronavirus pandemic on the well-being of people of color and people with low incomes makes the programs and policies MDRC studies more urgent than ever. Many of our nonprofit and government partners are rethinking core components of their program models or how services are delivered because of the pandemic (see here and here for examples). Implementation researchers at MDRC are asking: How can we help?

In some cases, we can help by acknowledging that research efforts need to be paused or altered so that our partners have the bandwidth to manage the current crisis and so that program participants aren’t burdened with additional data collection.

In other cases, however, nonprofits and government agencies serving people most vulnerable to the health and economic consequences of the pandemic are looking to external researchers to help them learn from this moment. Learning now can help us understand and address the inequities exposed by or arising from this crisis and make our programs and policies stronger for the future. But decisions about whether and how to stop, change, or restart evaluations can be difficult to navigate.

At MDRC, we’re turning for guidance to programs’ theories of change, which we often use to guide our evaluation plans and to support our partners’ program improvement efforts (see here and here for examples). A strong theory of change identifies whom the program intends to serve (target population), distills the core components of the program (inputs and the activities they result in), and articulates the mechanisms (mediators) through which the program intends to achieve its goals (outcomes). In the context of an impact evaluation, a theory of change also provides a lens for understanding how the program’s core components differ from what would be available to the target population in the program’s absence (counterfactual condition or treatment contrast).

Theories of change can be a compass for navigating the uncharted waters that we all now find ourselves in as program operators and evaluators. By systematically considering how COVID-19 is affecting each aspect of the theory of change, program operators and evaluators can better assess whether and how learning agendas and evaluation plans need to change. Below are some examples of how a social program’s work might change because of the pandemic and what the implications might be for its theory of change. We also offer guiding questions that evaluators and program partners can use to document what about the program has changed, whether and how to proceed with evaluation activities, and where program operators might want to go next.
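For research teams that want to capture the answers to the guiding questions below in a consistent, shareable form, here is one minimal sketch (in Python) of how the stock-taking notes could be structured. The class and field names are our own illustration, not part of any MDRC instrument, and a simple spreadsheet with the same columns would serve equally well.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: one record per theory-of-change component,
# capturing what changed, why and how, and what it means for the evaluation.
@dataclass
class ComponentReview:
    component: str               # e.g., "Target population", "Mediators"
    what_changed: str            # COVID-19-related change to this component
    why_and_how: str             # reasons for and mechanics of the change
    evaluation_implication: str  # effect, if any, on the evaluation plan
    open_questions: List[str] = field(default_factory=list)

# Hypothetical entry, loosely based on the workforce training example below
target_population_review = ComponentReview(
    component="Target population",
    what_changed="Rapid rise in UI claims expanded the population beyond capacity",
    why_and_how="Intake questionnaire added to triage needs; referral partner engaged",
    evaluation_implication="Revisit eligibility definition and baseline measures",
    open_questions=["Do prospective participants have technology for virtual workshops?"],
)
```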

Target Population

Example: A local workforce training program for young men without a high school diploma offers weekly group training sessions to people who have filed for Unemployment Insurance (UI). Because of the rapid rise in UI claims, the target population has expanded well beyond the program’s capacity, and participants’ needs have grown. Workshops are also now virtual, meaning that participants must have access to appropriate technology. Given the expanding size and changing characteristics of its target population, the program is rethinking its recruitment and intake procedures to help it target and differentiate its limited resources. Its recruitment procedures, for example, now include a questionnaire to assess services beyond training that a prospective participant might need (for instance, food and housing assistance or healthcare). The training program has partnered with another community-based organization to connect prospective participants to those resources even if they cannot be enrolled in the training program immediately.

Questions to document and inform changes in evaluation plans:

  • Has the definition of the target population changed? If so, why and how?
  • Has the size of the target population changed? If so, why and how?
  • Have the needs of the target population changed? If so, why and how?

Inputs and Expected Activities

Example: A daily afterschool tutoring and STEM enrichment program for middle school students retains the core content of its academic curriculum but has shifted expectations for student activities. Teachers now deliver the curriculum’s content via daily videos, but the hands-on application projects have become optional activities that children can do at any time during the week. Daily homework help sessions have shifted to online meetups where students can discuss the videos and ask staff for help with their school assignments. The program has noticed a steady decline in participation in the afternoon online meetups over the course of the school closures. It is therefore considering adjusting its model: staff would reach out to all participating students individually throughout the day to support their schoolwork, and the afternoon virtual meetup would become a time for students to work on enrichment activities with their peers.

Questions to document and inform changes in evaluation plans:

  • Which program components are being continued in their original form? Are the intended content, quantity/dosage, quality, and conveyance/mode all the same as before COVID-19?
  • Which program components have been intentionally paused because of COVID-19? Why were they paused?
  • Which program components (and which aspects of them) have been adapted because of COVID-19? Why and how were components adapted? Does the program aim to buffer participants from any of the negative consequences of the pandemic?
  • Do program components vary depending on person, location, time, or other aspects?

Mediators

Example: An early childhood program aims to improve child outcomes through weekly parent workshops about child development and through daily text messages to parents with tips about how they can support their children’s learning at home. During the school and day care closures of COVID-19, the weekly in-person workshops have become virtual sessions and the text messages continue. With children spending more time at home with their parents, the parent behaviors that serve as mediators in the program’s theory of change now shape a larger share of each child’s weekly learning opportunities.

Questions to document and inform changes in evaluation plans:

  • Are the mediators still relevant and achievable?
  • Will the mediators still be able to affect the target population in the ways assumed by the theory of change?
  • Have any mediators become more or less important to achieving the program’s outcomes because of COVID-19?

Program Outcomes

Example: A program that provides work supports to employees in low-wage jobs treats job stability over a 12-month period as a short-term outcome and increased earnings over five years as a longer-term outcome. The unprecedented unemployment resulting from COVID-19 is severely undermining the program’s capacity to achieve its short-term outcome of job stability. However, program participants may be better equipped with skills and social networks to re-enter the labor force as the economy recovers. As a result, the program is keeping its long-term earnings goal intact but considering adding new short-term measures to its evaluation plans. Specifically, it wants to assess other meaningful outcomes, such as connection to the community and self-efficacy, which could be affected in the near term and also contribute to the achievement of the long-term outcomes.

Questions to document and inform changes in evaluation plans:

  • Are the program’s outcomes still relevant and achievable?
  • If the program components changed, how do those changes affect the likelihood of achieving the target short- and longer-term outcomes?
  • Are COVID-19 shocks and consequences likely to affect short- or longer-term outcomes of program participants and nonparticipants in the same way?

Counterfactual Condition or Treatment Contrast

Example: A nonprofit organization works with a select group of a district’s elementary schools to provide instructional materials and on-site coaching supports to teachers. The nonprofit’s program is being evaluated using an experimental design. With the switch to remote learning, the organization has made its instructional materials freely available online to everyone. Coaching support for participating schools has been paused and replaced by optional webinars, open to all teachers, about how to implement the materials in a remote learning environment. Before the shift, teachers in the counterfactual condition did not have access to the instructional materials, while all teachers in the participating schools used the materials and received coaching in person. With the shift to remote learning and the accompanying changes in the program’s offerings, the treatment contrast in the impact evaluation has been diluted.

Questions to document and inform changes in evaluation plans:

  • Have services available to or taken up by the control group changed in terms of content, quantity, quality, or conveyance? Why and how?

Conclusion: Taking Stock

Systematically moving through a program’s theory of change to consider whether and how COVID-19 is affecting each aspect allows implementation researchers and program partners to decide how to shift evaluation plans. In some cases, this exercise may suggest that a program’s theory of change has been altered so dramatically that an evaluation needs to be paused completely or replanned so that research questions and data sources align with the new reality. The professional development organization in the last example above, for instance, has stopped a core component of its model and introduced new services to new populations. This organization would likely want to pause evaluation activities and consider developing a new theory of change and an associated learning agenda for its new program offerings.

In other cases, this exercise may suggest that an evaluation can continue as planned and that there are new opportunities for learning as well. For example, the evaluation of the early childhood program described above could likely continue, since the core program components, hypothesized mediators, and target population have not been fundamentally altered. However, implementation researchers would want to pay attention to how the children’s overall learning context has changed, how changes in the mode of delivery affect program quality, and how planned content was adapted to meet the particular needs of parents during the pandemic.

Implementation researchers can be good partners to program operators at this difficult time, both by being sensitive to the new constraints that programs face and by using their expertise to help programs document and learn from the adaptations they are being forced to make in response to the pandemic.