Reflections on the Evidence-Building Movement

Virginia Knox and Naomi Goldstein

In this episode, Leigh Parise talks with MDRC President Virginia (Ginger) Knox and Naomi Goldstein, the former Deputy Assistant Secretary at the Office of Planning, Research, and Evaluation (OPRE) at the Administration for Children and Families in the U.S. Department of Health and Human Services. Goldstein is also a member of the MDRC Board of Directors. They reflect on their experiences in evaluating programs and policies, the growth of the evidence-building movement, and future considerations for the field.

Leigh Parise:  Policymakers talk about solutions, but which ones really work? Welcome to Evidence First, a podcast from MDRC that explores the best evidence available on what works to improve the lives of people with low incomes. I'm your host, Leigh Parise. In this episode, I'm joined by MDRC President Ginger Knox and Naomi Goldstein, the former deputy assistant secretary at the Office of Planning, Research, and Evaluation (or OPRE) at the Administration for Children and Families in the U.S. Department of Health and Human Services—where Naomi served for over 20 years. MDRC and OPRE have worked together for decades, with OPRE contracting with MDRC for evaluation and technical assistance aimed at improving outcomes for individuals and families that the Administration for Children and Families serves.

I will say, we’re so pleased that Naomi also recently joined the board of directors at MDRC. Ginger, Naomi, welcome. It's really a pleasure to have you both on Evidence First, and I'm super excited about our conversation today. I'm really hoping that we're going to be able to take advantage of both of your experience in evaluation to reflect on the growth of the evidence-building movement over the past three decades. Let's just start by asking you to talk a little bit about your own personal paths into the evaluation field. What really got you interested in doing this kind of work? What were your experiences before you dove into evaluation? Naomi, let's start with you—and then, Ginger, we can hear from you after that.

Naomi Goldstein: Well, I've moved around a lot and had a lot of different jobs. I have a degree in public policy, and I will say my public policy training included very little about evaluation. I worked briefly at HHS in the '90s. I worked at a private firm. I did a study of workplace violence in the postal service. And then I arrived at ACF and OPRE and I stayed. I really found the work continually challenging and stimulating and satisfying. It incorporates several different goals, each one of which is compelling. There's the helping mission of ACF: helping people who are low income or vulnerable in other ways. And then there's the learning mission of OPRE: learning how those services can be more effective. I just also found my colleagues wonderful to work with, and so I stayed for 20 years.

Leigh Parise:  Great. Thank you. Ginger, what about you?

Ginger Knox:  Well, from a pretty young age, I was interested in public policy and how it can be deployed to improve people's lives, especially people who had fewer resources or fewer opportunities. I actually majored in public policy in college, and I came to MDRC as a research assistant. Like Naomi, I've been in this field and in similar roles for a long time, and I was just really fascinated by everything that went into program evaluation, right from the beginning. I got an MPP and a PhD in social policy. I worked for a couple of years in a public agency in New York City. But then I've spent most of my career, post-PhD, at MDRC.

I think it's really kept me interested throughout my whole career, the same way as Naomi mentioned. Tapping so many different kinds of skills, [like] creativity, when you're designing a new study or a new intervention to solve a problem. The kind of analytic thinking that it taps—interpretation of what you're learning—and collaborating with really thoughtful, smart people who you're just constantly learning new things from. And then the idea that we could also be directly working with people who are delivering services and making a difference in their local communities, it was just such an amazing combination of activities that we get to be part of. We're always learning and pushing in new directions and contributing to change at a large scale. All of those things have just really kept me in this kind of work for my whole career.

Leigh Parise:  I love it and I will say, Ginger, I've been here now for 12 years and I would second all the things that you mentioned too. Naomi, let's go back to you. For a really long time, OPRE has been a leader among federal agencies for building learning agendas and showing how evaluation and research can actually inform policy and practice. I thought it was interesting that you didn't start with evaluation. You said, like, "Well, my training really wasn't there. It was really more on the policy and practice side." So I think it makes a lot of sense to me that you were bringing that lens. And in fact, during your time at OPRE, I know you developed an influential set of principles governing the agency's evaluation work. Can you say a little bit about those?

Naomi Goldstein: Happy to. ACF established an evaluation policy in 2012 and it set out five principles to govern our work. Those principles were rigor, relevance, transparency, independence, and ethics. A few years ago, OMB established evaluation standards with a similar set of five principles. Since the Evidence Act requires federal cabinet [agencies] to establish evaluation policies, OMB requires those agencies to incorporate those principles. So they're pretty widespread at this point. And rigor: It really means getting as close to the truth as possible within financial and practical constraints. Sometimes, people use rigor as kind of a code for randomized controlled trials, but really rigor applies to all types of evaluation: descriptive evaluation, qualitative evaluation. It's important to be rigorous no matter what kind of methods you're using.

Relevance is just as important as rigor. There's no point in designing an elegant study if it's not useful to policymakers and practitioners. Transparency is a value in its own right, but it also kind of runs defense for the principle of independence, and also supports the principle of relevance.

The policy at ACF states that under transparency ACF will release results regardless of findings and will present comprehensive results, including favorable, unfavorable, mixed, null, and confusing [results], whatever they are. It also states that ACF will release results in a timely fashion. I'm emotionally attached to all five principles, but the next one, the principle of independence, is the one that makes federal evaluators sit up and pay attention. Because being responsible for producing objective research often means providing unpopular, unwelcome information. Maybe somebody's pet project doesn't have any effects, or maybe a service costs more than expected or participation is very low. And these unwelcome results might become available just when Congress is considering reauthorizing the law. So objective research can be quite controversial, and that's why independence and transparency are so important.

Then the last principle—of ethics—is critical for any kind of research involving humans. Sure, it's also true for research involving animals, but that's not my field. Under the principle of ethics, ACF's policy states that the agency is committed to both the letter and the principle of all of the rules governing human subjects research.

Leigh Parise:  Great, thank you. Alright, so I have to ask—for people who are listening and think, "Well, yeah, of course, those principles make so much sense"—wait, are people doing work that isn't adhering to those principles? Say a little bit about when these were developed, what was the impetus for that?

Naomi Goldstein: That's a great question. ACF was not the first federal agency to establish an evaluation policy. The U.S. Agency for International Development had a policy that predates ACF's policy. The policy drew quite a bit on the federal evaluation roadmap of the American Evaluation Association. So the principles were not new ideas or groundbreaking in any way. I think one contribution is that the ACF policy puts its principles forward in a pretty pithy way, at least by government and academic standards. And the process of articulating an agency's principles is useful in itself. It forced us to really crystallize the core values that drove our work. Over time, it turned out to be really useful to have them articulated in a document that we could use to orient new staff, orient new leadership, and develop a set of shared values across the agency. It's been useful in many ways.

Leigh Parise:  Great, that's really helpful. I could see how [it’s useful] to have that shared understanding in a document that people can see. And I will say, I did go to the website and watch a minute-and-a-half video about this too, which is cool. I applaud having things like that that are so accessible. Alright, so, Ginger, thinking about MDRC: In your experience, what really resonates with you about these principles and how they fit with the work that we do here?

Ginger Knox:  MDRC actually has a set of research principles posted on our website as well. People who've just heard Naomi's explanation would see that there are a lot of parallels between the way we think about the principles underlying our work and how OPRE thinks about it. I think that makes sense because I really think about research as operating under a social compact in which we've all—[the] people involved, whether it's funders or researchers, other academics, practitioners—we're all kind of agreeing to some rules of the road, including criteria for how we're going to decide to answer the questions that we want to answer and how we'll know if something's effective or not. There are statistical rules of the road as well.

The principles we're talking about here are a broader compact about how we're going to approach the work and how we're going to work together, given that we're all living under somewhat different incentive structures. Having our principles written down gives us a common starting point, so that we're all operating in the same enterprise, basically, in the broadest sense. I remember when OPRE's principles came out, I really appreciated how they thought about rigor, for example, as broader than just doing sort of a causal impact study in the highest quality way. Because rigor and high-quality work are important to MDRC, no matter what kind of work we're doing—whether it's a descriptive study or an implementation study. It helped us think about rigor, I think, in a broader way, and I've always appreciated that.

Leigh Parise:  And when you think about the role that MDRC can play in helping federal agencies achieve their goals when it comes to evidence building—what would you say about that?

Ginger Knox:  I think the way we think about our role in helping federal agencies achieve their learning goals is that research and evaluation can really have two different purposes. One is accountability for whether a policy or a practice is achieving the goals that the agency set out to achieve. And in that case, we might be playing a kind of independent evaluator role, kind of an honest broker of what we've learned and holding an agency accountable for its own goals. A second purpose of research, though, is about answering the kinds of questions that an agency needs answered to achieve its mission. So more of a broad learning agenda about the nature of the problems that their participants might be facing, or how implementation is going along the way.

So it's not always an accountability kind of impact question, but sometimes a broader learning agenda or even helping the agency use its own data well to answer questions that their own staff want to be answering.

Leigh Parise:  Alright, so an important milestone—admittedly, this group on the podcast today is a little bit biased—but an important milestone was the passage of the Evidence Act in 2018. That grew out of recommendations from the Commission on Evidence-Based Policymaking, and the provisions of the act seem to align really well with OPRE's mission. Naomi, it'd be great to hear from you: What do you think are some of the most important features of that legislation and the planning that has come after it?

Naomi Goldstein: I think one really critical point is just that the act itself elevated the status of evaluation activities and highlighted expectations for the use of evidence in decision making. It also required cabinet agencies to establish evaluation policies and annual evaluation plans and long-term learning agendas. All of those requirements really focus not just attention but real thinking and planning around evaluation activities. I think those are all very important. The act also establishes a governmentwide council of chief evaluation officers, which allows agencies to learn from each other—and, again, gives these activities status. That council is very similar to a council that has existed for many years for statistical activities.

It builds the infrastructure for evaluation activities. I also want to say another word…you asked whether anyone would object to the five principles in the ACF evaluation policy. The reason that independence and transparency are such important principles is because yes, some people—particularly, but not exclusively, political appointees—have at times tried to suppress research findings or change the description of those findings. Sometimes that is not exactly intentional. People sitting in a particular seat, with a particular role, may have unconscious biases, and that's why it's so important that the evaluation activities be carried out by independent evaluation offices. There's some protection in the contracting function as well, because the contractors are independent. And then sometimes it is intentional: "You know, we don't like this. We don't want this to see the light of day."

Leigh Parise:  Well, right. Hopefully, the greater focus on evidence in the Evidence Act will also be helpful with that too. From your time at OPRE or your broader experience really thinking about the role of evidence, what are some of the challenges that remain for being able to actually make the principles that are behind the Evidence Act real and things that really are driving what's happening in government?

Naomi Goldstein: Well, the challenges are perennial. And, somewhat to my surprise, a lot of the challenges have to do with human relations. Evaluation—and science more broadly—is a human enterprise and does depend on relationships. Program officials [and] policymakers have different worldviews, operate on different paradigms, operate on different timelines, are subject to different pressures, compared with the evaluation staff. Building and maintaining those relationships, building trust and a sense of common mission, is an effort that requires serious investment and is always worth focusing on. I also want to mention that the Evidence Act includes many provisions related to data transparency and data quality. Those are not so much my strengths, so I want to toss it over to Ginger to comment on those.

Ginger Knox:  I think that is one of the big challenges that the legislation, and the work that's followed the legislation, is really taking head-on: how to make federal data more accessible and usable, whether by researchers or others doing analyses, and in ways that obviously would protect people's privacy.

When I think about the work we do, collecting primary data to learn about people's outcomes is the most expensive part of the work that we do. And [drawing] on administrative data from agencies—by having secure data systems like the one that's being piloted right now at the National Science Foundation—is going to make an enormous difference in streamlining the kinds of questions we can ask. Being able to match data across agencies so we can look at people's outcomes across different domains instead of being siloed within a particular existing data set. All of those things are really…the Act actually speaks to a lot of that and it's an important part of the work that's currently going on.

Leigh Parise:  Yeah, I'm really glad that you highlighted that. It feels like issues around secure data and being able to ask—but even more importantly answer—relevant questions using data in a secure fashion are going to continue to be really critical. It's good that there's additional attention being paid to that now.

I want to turn us a little bit, because obviously, everything that we're talking about is about actually being able to improve policies and programs to better serve people and make a difference in their lives. Ginger, I know that MDRC, like OPRE, is really interested in building bodies of evidence rather than just conducting one-off studies, because it feels like our ability to actually inform policy and improve on-the-ground program performance is much better when we're able to do that. Could you describe an example of how that works?

Ginger Knox:  Sure. Building bodies of evidence really has meant a couple of different things at MDRC. In some cases, it means designing each study to address unanswered questions from a study that came before. It can also mean trying out an intervention, first in one or two places, and then—if things are working well—replicating it across multiple settings to see if it can be adapted and if the findings can be replicated, so that we're really confident in the results we got. That's another way of building a body of evidence. Another way I think about building bodies of evidence is—as we increasingly realize that implementation infrastructure is really important to scaling up what we're learning—making available the tools and approaches we used to learn whether something works and how it works, so that other people can use them to support implementation along the way.

I think we increasingly think of that as building the field and building the body of evidence-based work that we're aiming for. An example near and dear to my heart is our early childhood work. I think about 15 years ago was when we started doing studies in preschool about how to build high-quality teacher practice and support children's development, because we realized that there was a big expansion coming in preschool funding, pre-K funding. And we thought people were really going to need some answers about what they should be doing in those classrooms as the capacity of the pre-K system expanded.

We actually started by talking to pre-K teachers, preschool teachers, and administrators about what their biggest challenges were. They often talked about classroom management and about how children's behavior could get in the way of other kinds of learning. Our first study was what's called Foundations of Learning. It was in the area of classroom management and helping build children's social-emotional learning. It was also theoretically driven, because Pamela Morris, who led that work—she and her developmental colleagues really felt that social-emotional learning was an important foundation for any other kinds of learning that go on in early childhood. So that's another reason that we started with social-emotional learning.

We learned a lot in a two-site study in New Jersey and Chicago called Foundations of Learning. Soon after that, OPRE wanted to look at different curricula that can be brought to bear in preschool and in Head Start. And so we went from a small-scale starting study to a multisite national study called Head Start CARES, [which] did a really interesting job, I thought, of testing three different curricula—or three different approaches in the classroom—that all represented different theories about how children learn at an early age. We weren't just testing specific curricula, we were also testing different developmental theories about what might really support children's development in preschool. That's another example of building a body of evidence; [it’s] going beyond testing just an individual curriculum or approach to really trying to think of designs that can get at theoretical questions that help build the field and build the basic literature at the same time.

Leigh Parise:  Great, thank you. I love that example. Knowing how difficult it is to be in those classrooms and collect that kind of information, figuring out how to get one study to build on another is, I think, also really important.

Ginger Knox:  Actually, building on what you just said, Leigh—how hard it is to collect the data needed in preschool classrooms—there really is not an infrastructure in place for good measurement of early development in the United States. Another example of field building is we're right now working with the Gates Foundation to consult with teachers, parents, and other community members to develop some equitable approaches to assessing how children are doing in early childhood that will address this need for really culturally competent ways of measuring children's early development. A side benefit is that studies like the ones we try to do should hopefully have better, less expensive ways of measuring—less onerous ways of measuring—how it's going within the particular intervention we're testing.

Leigh Parise:  Right, yeah. I love that you started with 15 years ago, but immediately, just brought us to today and some work toward the future. So certainly building a body [of evidence] over time there. Naomi, any thoughts about that example, or is there one of your own that you want to just say a bit about?

Naomi Goldstein: I would just reinforce some of the things that Ginger said. There's really no other area of learning where anybody would expect a one-off study to provide the answers. Nobody expects [that] a study is going to provide all the answers about cancer. It stands to reason that we need to build portfolios of evidence. Learning is incremental, and I think Ginger's comments really highlighted the point that there are different kinds of learning needed in a portfolio. We need to constantly be improving our measures and understanding how programs are implemented. And then because evaluation of human services programs is a human endeavor, time marches on and society changes and context changes. So studies become out of date and you want to find out if what we learned 15 or 20 or 30 years ago still holds in the current context.

Leigh Parise:  Naomi, I appreciate that note on how contexts are constantly changing, and I’m going to take us into the next topic I think we want to talk about. Contexts are changing; we need to be out in the world, both sharing what we're learning [and] hearing the kinds of questions that people have, partly to figure out if what we have learned still applies or if there are new questions that come up. I know that both OPRE and MDRC work really hard to disseminate findings from their work. It would be great if you could both talk about how MDRC, Ginger, and Naomi, in your experience while you were at OPRE—how have you approached this goal of communicating what can be pretty complex findings in ways that are useful and relevant and timely for the audiences that we're trying to reach?

Naomi Goldstein: Well, the first thing I would say is that it's important to take this seriously and bring the same level of effort and creativity and rigor to dissemination as we bring to developing studies. It's important to think about your intended audiences from the very beginning in planning the study. It can influence what questions you include in your study, and in planning the products that you'll develop from your study. It's important to produce the traditional long report with all the details about methods. That's important for transparency and accountability, and so that experts in the field can review and comment on the methods and the findings and build on them. But it's also important to develop other ways of sharing the findings, tailored to specific audiences.

At OPRE, we worked to understand: What are the ways in which different audiences prefer to receive information? Who are the influential intermediaries that it would be important to make sure know about a given study? We tried different types of communication, like podcasts, or blogs, or videos, or short reports, or interactive online content. It's an area with a lot of scope and possibility, and it’s incredibly important. And it fits under the heading of the principle of relevance in the ACF evaluation policy.

Leigh Parise:  Nice connection back to where we started. I love it. Ginger, you want to say a little bit about how MDRC thinks about this?

Ginger Knox:  Yes, and I appreciated what Naomi said. I think we think of dissemination really broadly as helping achieve the part of our mission [that] is making sure our work is used to make a difference in the world. So we totally agree with Naomi that we need to think about different audiences and what kinds of short documents or podcasts or infographics or videos might be important supplements to a more traditional written set of findings. But I also think…I just think conceptually that strong dissemination also includes things like making sure that people are learning the lessons we're learning along the way, and that we don't wait until the end of a study to reveal what we've learned.

That actually goes in two directions. It means the people we're working with in a system or in a program should be getting some feedback from us along the way, so that they're learning alongside us during a study. And it means letting outside audiences have a peek into what we're learning along the way. Because, as Naomi said earlier, the timing of the final findings from a study doesn't always coincide with when a decision needs to be made. People are making decisions all the time. I think we've really learned over the years [that] people appreciate hearing as much as they can about what we're learning as it unfolds. And even when there's more to come, people understand that but appreciate learning along the way with us.

Then a final way I think about dissemination—which is, again, thinking very differently because of the context we're in—decisionmakers often want information interpreted for them as they're actually changing the system that they're part of. So that might look like convening administrators who are changing a community college system and helping them know how our findings might inform what they're trying to do in their setting—or even developing a research partnership, so that we're helping people use their own data over several years to use evidence to inform what they're doing, so that our skills and knowledge can help them answer the questions they want to answer.

That's another form of disseminating evidence and disseminating what we've learned in a completely different format than what you might think of traditionally as dissemination.

Leigh Parise:  Right, because it really could be that you have this information and it’s really critical, but maybe it’s not going to get used. 

Naomi Goldstein: I think evaluation of social programs is an area where, if you build it, they might not come. From an evaluator's perspective, we produce information and put it out in the world and hope people use it: "Here's a finding. Go use it." But from a decisionmaker's perspective, the world looks very different. And from a decisionmaker's perspective, it's more like, "Well, here's a decision. How can you help me with information?" And those two—providing of information and the demand for information—don't always line up. It is really important for the producers of information to try to think from the perspective of the intended audiences and provide information in ways that are genuinely useful and usable.

Ginger Knox:  And I would just add to that—I mean, it's really great there's this growing field of research on research use that your successor, Lauren Supplee, has been part of. The W.T. Grant Foundation, Pew, and others are really trying to build a real grounding in, "How is research used in the real world and how should that shape the way that we approach research, so that people really can use it to inform their decisions?"—given that, as Naomi said, decisionmakers are thinking about how they use information quite differently than learning from an individual study.

Leigh Parise:  Yeah, I love it. I think this idea of really putting yourself in the shoes of the decisionmakers is really key. And then, Ginger, this recognition that these decisionmakers are having to make decisions all the time and so let’s…of course we don't want to share our findings prematurely, but giving them the best information along the way, as we have it, feels really important.

Research use, and really thinking about how to make sure that what we’re sharing is relevant and timely for people who need to make decisions, is one of the things I think we’ve talked a lot about in positive ways in recent years. I think another thing in recent years that we've been talking more about is [that] the policy evaluation field and government agencies have been looking to focus much more explicitly on equity in the building and use of evidence. Engaging members of affected communities is one of the key elements of this effort. It would be great to hear your thoughts on this kind of work.

Naomi Goldstein: I think this is a really important and positive trend. There is a power imbalance between the people carrying out studies and the people who are studied, and it's incumbent on those who hold the power to make the effort to be inclusive—more in terms of carrying out research with rather than research on. Community-engaged research is a very important approach, and my former office convened a meeting on the topic and has initiated some studies specifically on how better to carry out community-engaged research. There are also many technical considerations in carrying out research with a focus on equity.

There was an executive order on using the federal government to advance equity in 2021, and there was a memo on scientific integrity in the government in the same year. It's interesting to see that the executive order on equity explicitly acknowledges the importance of data in equity. You need to collect data on different groups if you want to understand how different groups are affected by government policies, or the needs and strengths of different groups. The memo on scientific integrity clearly recognizes the importance of engagement in promoting scientific integrity. They're mutually reinforcing.

Ginger Knox:  At MDRC, the way I think about this is, we've always been aware of the importance of understanding the perspectives of staff and participants in the systems that we're studying—or what they see as the problems in the systems that they're part of—before we start designing a new intervention or a new solution. But as Naomi said, I think we're increasingly aware that there's a lot more to it than that. If we really want to be informed by members of affected communities, we have to think about the power imbalance between researchers and the people who are involved in a program or in a system.

If we want policy to really improve people's well-being, we need to know how they define well-being, and not how a researcher defines well-being—or what questions they think we should be asking when we design the study. So giving folks input, not just into the intervention itself or the system they're part of, but really into the study design and what it's going to accomplish. We've been doing some really interesting things, moving in that direction. One is that a set of our staff are piloting what we're calling a council of lived experience advisors for our criminal justice work, as a starting point. That's going to give some community members who have experience with the justice system an opportunity to get to know our work in that area, and to weigh in on what they think will be most useful for us to pursue.

If that goes well, we might try that out in some other domains of our work. I think it's going to be [a] really interesting and important way to live this principle of bringing in more participant voice. We're trying out different types of participatory methods in our work where folks who are affected by the system we’re studying might actually participate in the data collection and really…it's an interesting way of building their own capacity to appreciate and learn about science and evidence by being directly part of something that might affect their lives.

As I mentioned earlier, we're doing this work with the Gates Foundation—[an] equity-oriented approach to building measurement in early childhood, so that we're consulting broadly with lots of people who would be affected by, and have opinions about, what it means to have healthy early child development. And hopefully giving those community members the tools, in the end, that they can use to learn how children in their community are doing.

Leigh Parise:  I love those specific examples and I think, in what you’ve both just said, there's some good pieces of advice or things to be thinking about for people doing the work. I want to actually end on another topic that I think is something people will wonder about, and where you can really give some insights here. A lot of studies—even most studies, maybe—take a really long time to carry out. Building a portfolio of studies certainly takes even longer. How do you think the field can really ensure that studies are designed in a way that they're going to address questions of enduring interest and continue to really be meaningful and important for people?

Ginger Knox:  I'll give just one call back to something I said earlier, which I think is really important, and that is designing studies so that they're answering questions about how and why something works, or how and why something affects its participants. So we're learning about the underlying mechanisms or the underlying theory of how someone's behavior is changing or how a policy is affecting them. That gives the work an enduring, lasting effect in the field, because people can take that information in lots of different directions, depending on how the world evolves in the future. You're not just testing one particular program model with a yes-or-no answer; you're really learning about the underlying ways that a policy might affect people, which can have a more lasting effect.

Naomi Goldstein: I totally agree, and I would add that building studies with a rich set of questions, the way Ginger described, helps to ensure that the study produces valuable information regardless of how the results turn out. You want to be sure that after you've invested a lot of money and a lot of time in a study—supposing the intervention you're so excited about turns out not to have impacts—you still have learned something and [moved] the field forward. I also wanted to say there's often quite a degree of consensus about what are the important next questions in a given policy area. Engaging affected communities, and consulting broadly in developing evaluation priorities, can help to ensure that the questions being addressed are important and are broadly supported as useful questions to answer. And the requirements in the Evidence Act—to develop annual research plans and long-term learning agendas—really reinforce this.

Leigh Parise:  Right, right—and I think a lot of the points that you’re hitting on feel like just the right ones to reinforce. You are ending us on an optimistic note: that the Evidence Act is going to be one of the things that’s really going to help us get there, and it’ll be exciting to see, over time, how the plans really develop.

Naomi, Ginger, thank you so much for joining this conversation today. It's really been great.

Naomi Goldstein: Thank you. It's been a pleasure.

Ginger Knox:  Thanks for having us.

Leigh Parise:  It was really nice to get to step back and think about how far the evidence movement has come, and I think it’s really exciting to think about what’s ahead as the Evidence Act continues to shape research work moving forward.

Did you enjoy this episode? Subscribe to the Evidence First podcast for more.

About Evidence First

Policymakers talk about solutions, but which ones really work? MDRC’s Evidence First podcast features experts—program administrators, policymakers, and researchers—talking about the best evidence available on education and social programs that serve people with low incomes.

About Leigh Parise

Evidence First host Leigh Parise plays a lead role in MDRC’s education-focused program-development efforts and conducts mixed-methods education research.