How Can Data Science Tools Better Represent Participant Experiences? A Conversation with Ahmed Whitt and Alissa Stover

Ahmed Whitt and Alissa Stover

The Center for Employment Opportunities (CEO) provides wrap-around support and employment services to people returning home from incarceration. While participant feedback was always important to their work, CEO was looking to better understand the range of their participants’ experiences and use it to improve the services they provide. MDRC’s Center for Data Insights (CDI) partnered with CEO and used data science tools and qualitative research to better utilize the feedback CEO was receiving from their participants.  

In this episode, Leigh Parise first talks with Alissa Stover, a former research analyst at MDRC and CDI. Alissa describes CDI’s partnership with CEO, the importance of big-picture thinking in data science, and what that approach made possible. Ahmed Whitt, the director of learning and impact at CEO, then explains the critical lessons that were learned.

Leigh Parise: Policymakers talk about solutions, but which ones really work? Welcome to Evidence First, a podcast from MDRC that explores the best evidence available on what works to improve the lives of people with low incomes. I’m your host, Leigh Parise.

In this episode we’ll talk about a research partnership MDRC has with the Center for Employment Opportunities (or CEO), which provides employment and other services to individuals who recently returned home from incarceration. CEO and the MDRC Center for Data Insights (CDI) work together to create data science tools that can more fully capture participants’ lived experiences in order to improve the services that CEO provides.

Today I’m joined by Ahmed Whitt, director of learning and impact at the Center for Employment Opportunities, and Alissa Stover, former research analyst at MDRC, to talk about the details of the partnership and what they learned. All right, Alissa, thank you so much for joining me. It’s really great to have you on.

Alissa Stover: It’s great to be here, Leigh. Thanks for the invite.

Leigh Parise: Why don’t you start by telling us about the approach that you took to the work with CEO?

Alissa Stover: I think the biggest question that we were trying to answer in the beginning was, What was the relationship between people’s feedback about the program and their later employment outcomes? Does that have any quantitative relationship that you can measure [with] whether they’re employed after the program?

To answer that question, we used data on which text messages people were sent and how they responded (and so forth), along with their employment outcomes. We measured the relationship between people’s response patterns (whether they responded to the text at all, and whether they said something positive or negative) and whether they were employed 90 days, 180 days, and 365 days after the program ended.
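A minimal sketch of the kind of correlational analysis Alissa describes here, relating text-survey response patterns to later employment. The file name, column names, and model choice below are illustrative assumptions, not the project’s actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table of feedback and outcomes; all column names are illustrative.
# responded: 1 if the participant answered the text survey, 0 otherwise
# sentiment: 1 if the response was positive, -1 if negative, 0 if neutral or no response
# employed_90 / employed_180 / employed_365: 1 if employed that many days after the program
df = pd.read_csv("feedback_and_outcomes.csv")

for outcome in ["employed_90", "employed_180", "employed_365"]:
    # Logistic regression of employment on response patterns (correlational, not causal)
    model = smf.logit(f"{outcome} ~ responded + sentiment", data=df).fit(disp=0)
    print(outcome)
    print(model.params)
```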

With that more classic data work, we learned a lot—I think we learned a lot of important lessons for the field generally. But we sort of hit a wall, where we realized that the quantitative data wasn’t going to answer everything. I think this is a testament to the unique way in which the Center for Data Insights works. You might think [in] a data science job, all they do is work with data in the classic sense of data, but I think CDI really pushes further and thinks about the bigger picture. What information from program staff can you weave into your analysis, along with information from the participants directly, to use for some sense-making?

So we did push into other types of data—more qualitative data—to answer the questions that were brought up from the first quantitative phase. We not only did one-on-one interviews with CEO participants to hear their points of view, but we also asked a Participant Advisory Council (or PAC) at CEO—composed of both alumni and current participants in the program—how to conduct those interviews and how to think about the results of the qualitative interviews (and then also the quantitative data). We were really looking at the full picture.

Leigh Parise: That’s great, thank you. It sounds to me like this is not always the typical approach that data analysts—especially those who are more quantitatively focused—might take. It’d be good to have you say a little bit more about, Why go to participants for feedback? Why think about involving their Participant Advisory Councils?

Alissa Stover: There are multiple levels to that question. One, [involve participants] as a data source. I mean, if you want to understand how someone feels about a program or if they like it, you have to ask them. That’s where text message surveys come in. You ask, “How do you rate the program? Do you like the program? How can it be better?”

I think there’s another dimension of asking people: how you think you should do the research. When we were considering how to conduct the interviews, there were questions around, How should we ask specific interview questions? Is the wording right? Or questions around, How do you show up to these interviews to create a space where people are really willing to give honest feedback?

One of the things that we were trying to get at with this project was to understand why people weren’t giving feedback to CEO that was more negative. We were seeing really overwhelmingly positive responses to the text messages. Yes, CEO’s an amazing program, but we did want to push a bit further and wonder, Are they hearing from people who might have a less positive experience?

It wasn’t really obvious to say, “Okay, we’re going to go interview people—who are already reluctant to share any negative feelings about the program—and then ask them again for these feelings.” We tried to put a lot of thought into how to approach that issue. I think asking participants themselves how to create that space is really critical.

Then, finally, I think there’s an ethical component. An important element to consider, in the work that we do at MDRC and CDI, is that a lot of the people in the programs that we’re looking at may have had poor experiences with other researchers in the past, and that has resulted in a lack of trust in researchers more generally. So maybe you show up in a room, people feel a great distance between themselves and you, and that will affect how they’re going to show up in those interviews.

The Participant Advisory Council really stressed that relational component a lot to us. They gave us really specific feedback: Show up to the interviews; don’t just read from a script; don’t be like a robot, be a human. Show that you have confidence in yourself; show that you’re showing up as a human being and not trying to extract information from us. Share a little bit about yourself. If you want us to share about ourselves, okay; even the playing field, share about you.

So I think just really being cognizant of that trust issue and not trying to get at it by making assumptions about what would create trust, but asking people themselves what would create trust for them. Because someone in CEO, they’ve come from a very particular experience of being incarcerated, and that has created some very specific experiences with trauma that I don’t think a lot of researchers ... at least myself, I haven’t had direct experience with that, so I don’t really know how to be responsive to that immediately. I need to ask people how to create that space.

Leigh Parise: You talked a bit about how you learned a lot from the data work—lessons for the field generally. Bigger picture, it would be great to hear you talk a bit about what you learned through the collaboration with CEO—so whether that’s the survey data or the advisory council or the interviews, I think people will be really interested to hear that.

Alissa Stover: Sure. I’ll start on the quantitative side. Again, we looked at that relationship between feedback and outcomes. I think another unique approach in this project was to not only look at the content of responses, but just whether someone responded at all. We found that if someone responded to a text message, it was associated with a 5 to 15 percent increased likelihood of them being employed 90 days out and 180 days out. I think it’s safe to say that people’s feedback about a program can be used to predict their outcomes in many cases.

I think that’s pretty major, because, for example—with a program like CEO—you might have someone toward the beginning of the program who, you send them a text, they don’t respond. It could be a signal that they’re disengaged with the program, they’re not having a great time. It means that that feedback data could be used potentially for an earlier intervention to then reach out to that person and say, “How can we do better? How can we support you more in your goals?” And in that way actively improve their outcomes.
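As a rough illustration of how feedback data could flag someone for earlier outreach, here is a minimal hypothetical sketch. The record fields and the threshold are assumptions for illustration, not CEO’s actual rules or systems.

```python
from datetime import date, timedelta

# Hypothetical participant records: date of last survey response (None = never responded)
participants = [
    {"id": "A12", "last_response": date(2024, 1, 25)},
    {"id": "B34", "last_response": None},
]

def needs_outreach(record, today=date(2024, 2, 1), max_silent_days=14):
    """Flag a participant for a check-in if they have not responded recently."""
    last = record["last_response"]
    return last is None or (today - last) > timedelta(days=max_silent_days)

to_contact = [p["id"] for p in participants if needs_outreach(p)]
print(to_contact)  # ['B34']: staff could follow up and ask how to support this person
```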

I think that this feedback data is not only an important signal, an important way of measuring something like engagement, but it can be used operationally for decision-making in real time. Which again, I think is something that CDI is really about. It’s about not just taking data from a couple years ago and providing insights about this retrospective data, but maybe something more in real time, more adjacent to the decision-making point that the program needs to make.

The last thing I want to say is it’s not causal. We’re not saying anything causal about whether feedback causes outcomes; it was all correlational, predictive. It was looking in that way. I think that’s another hallmark, in some ways, of CDI: We’re, in some cases, trying to predict more, and that speaks to the real-time, decision-making-type nature of the insights that we try to provide as CDI.

Generally, with these texts, anyone who responded gave CEO a 10 out of 10, flying colors. They love CEO; nothing’s wrong. Again, CEO’s a wonderful program. We heard that in the text, we heard that with the Participant Advisory Council, we heard that with the interviewees. But every organization, every person always has room to grow, and it’s always helpful to get some sort of constructive or negative feedback. That’s where we felt like the text messages didn’t contain much of that constructive feedback. So we had to go directly to people. That’s where the Participant Advisory Council and interviews came in.

We really wanted to dig into the question of, How can CEO fully leverage the value of these data? We know the feedback data’s really valuable, it could be predictive, but how do we increase the variability? How do we hear from people who are not giving it a 10 out of 10—but maybe a 0 out of 10, or a 5 out of 10—and might have really important nuggets of information to share with CEO about how they can do better (maybe more specifically across interviews and also the PAC)?

A theme we heard over and over again is “I’m busy. I’m going through a lot, please don’t waste my time. Don’t ask me for feedback, try to get me to give you feedback, and then just sit on it or don’t respond.” People also gave CEO a lot of grace. I remember one participant being like, “Listen, I gave this feedback around”—it was [about] a specific piece of equipment that this person needed at the job site. They were like, “I saw CEO trying. They didn’t really get it right, but I saw they responded to my feedback. That meant a lot to me and then I continued giving feedback.” We heard other people say, “I shared this feedback with CEO, and I literally never heard back. I was like, well, then why did I even share this with you?”

There was a strong expectation that if someone gave this negative feedback, there would be not only individual follow-up—where CEO would go to that person individually and say, “Hey, what’s up? How can we be better?”—but then also on aggregate. They wanted CEO to look at—on a macro scale, across all of this feedback—what were the trends in what people were saying? So I think it was this very sophisticated understanding and expectation around how an organization would make decisions based on feedback.

I hope that people listening to this aren’t saying to themselves, “Oh, then I’m not going to ask for feedback. If I have to act on it, then shoot, I have to hold myself accountable to that.” You do, but I saw so much of this relational work that organizations can do, so you have to start somewhere.

You’re going to mess up while you’re building this muscle of a general listening culture. You start there, but it is ultimately going to make the program better. I would bet on that with all my money; I don’t have that much, but everything I have. I really think that if organizations start on this path in a really genuine, humble way and are really listening to people and just really trying and putting an effort in—people do see that, people appreciate it, and you both get better together. The organization gets better at what they do, people make progress toward their goals—at CEO, people are trying to find employment.

Leigh Parise: Alissa, say more about how the Center for Data Insights works with other nonprofit organizations to improve their practices and programs—either the way that CDI approaches the work or the type of work that they’re engaged in with the programs themselves.

Alissa Stover: It is a data science shop, so CDI definitely approaches this work with a lot of technical expertise. We’ll—especially in the beginning of a project—primarily come in on that side of things. So maybe we’re working with administrative data, survey data, whatever. We’re looking at the data; we’re generating insights. But I think the special sauce of CDI is that deep history MDRC has with the programs. Not just on the quantitative side, but also the qualitative.

So we can come in and we can work closely with the staff. We understand something about implementation research; we know what it takes to turn insight into action. Also, the piece around ethics more generally. CDI has had a focus on doing data science ethically, and I think that’s super important in this space: all the privacy practices that MDRC and CDI have, but then also the general care around data management and the diversity on your team. Are people coming at this from different perspectives to keep an eye out for those privacy concerns or those human elements? I think all of those pieces together make CDI really special in this space, in my opinion.

Leigh Parise: Alissa, can you say a little bit ... okay, big picture, stepping back: The kind of work that we did with CEO, what do you think this adds to the field? What should be some of the bigger picture takeaways for people who are listening, who are either researchers, or programs, or people who are engaged in either similar types of work or working on similar content areas? What are some of the lessons or the additions for the field?

Alissa Stover: For programs, the top line is that this feedback data is really worth it. There have been a lot of arguments on the ethical front. I think some organizations are like, “Yeah, this is the right thing to do,” but there’s been some hesitation around, like, “Oh, but what’s the relationship to these sorts of outcomes that we really care about or want to achieve?” So I think this just goes to show that you can really use these data for strengthening your program performance.

For funders, similarly: Fund it. This work is really promising. I think it’s very cutting edge, and it signals a real shift in how we’ll run nonprofits in the future, so don’t be late to this game. We’re all working to help people achieve goals in these individual programs. The more we can listen, the more we can work alongside people, the better we’re going to be able to achieve that, so don’t be afraid to put money in that direction.

For researchers: I’d say lead with humility. I’m a data scientist, and I think sometimes that word really throws people off. I can very easily hide behind, “Oh, I’m technical and blah, blah, blah,” and I can drop jargon and show up in rooms like that. I think this work underscores that if you do that, your work’s not going to be as powerful; you’re not going to effect real-life change. You have to show up humble, you have to be constantly growing as an individual human being, being vulnerable, relating to people differently and challenging yourself on your assumptions—just being really committed to learn.

That’s what we are supposed to be doing as researchers. We’re trying to find truth, whatever that is. It can live in places that you don’t expect; it can show up in ways that you have not even been able to dream about. That’s the beauty of the research process: Expect the unexpected. So just fully committing to that, and just being willing to say you’ve messed up or you don’t know, is key.

Leigh Parise: All right, Alissa, thank you so much for joining me today. This has been a really great conversation.

Alissa Stover: Yeah, it’s been wonderful. I love this work. I think it’s a really special project in CDI and exemplifies what CDI does, so really happy to share about it.

Leigh Parise: I also spoke with Ahmed Whitt. Here’s that conversation now.

Ahmed, thanks so much for joining me today. I’m really excited to have this conversation with you.

Ahmed Whitt: Thank you. Yeah, I’m very excited about it, Leigh.

Leigh Parise: Great. Do you want to just start by telling people a little bit about who you are and who CEO is?

Ahmed Whitt: Sure. My name is Ahmed Whitt. I am the director of learning and impact at the Center for Employment Opportunities. At CEO, we provide immediate, effective, and comprehensive employment services exclusively to individuals who have recently returned home from incarceration. Right now, we work in 32 cities across 12 states in the U.S.

Leigh Parise: Great. Thanks so much. All right. Let’s start with, first, a big picture question for people. This work that you’re partnering on with MDRC, say a little bit about the big picture of the “why” behind asking participants for their feedback using surveys at CEO?

Ahmed Whitt: Sure. I think, big picture for us, everything goes back to humility in our services. We have expertise, certainly, in running our program—its effectiveness, understanding what works well for whom—but we certainly can never be the experts of individuals’ journeys back home. So we wanted to build a system that was flexible enough to support immediate needs or even put out immediate fires, but also provide enough robust data for us to change course on our program model or any big-picture things that we’re doing organization-wide that could better support the needs of returning citizens.

Leigh Parise: Great. And I’m wondering, was there a way that you were trying to get some of this feedback in the past? Or is this something that you were trying to implement as a new approach, to see what you could learn?

Ahmed Whitt: Our larger feedback system is something we’ve dubbed “Constituent Voice.” It’s a mix of SMS surveys, some anonymous surveys, and some other data collection done at work sites or in our offices. And I think it’s more of an evolution over time, where we began about 15, 20 years ago with a pretty straightforward survey—very much centered around a net promoter score. Then, over time, we’ve added other elements, including our Participant Advisory Council. So we jumped into feedback early, and we’ve been tweaking it and fine-tuning it ever since.
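For context, a net promoter score is commonly computed from 0-to-10 ratings as the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 to 6). A minimal sketch with made-up ratings:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 ratings: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 10, 9, 8, 7, 6, 3]))  # about 14.3
```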

Leigh Parise: That’s great. I feel like that approach is probably something that a lot of different organizations can learn from, so thanks for sharing a little bit about that.

All right. Let’s get into what you feel like you’ve learned. What did you learn through the collaboration with MDRC on the survey data, the advisory council, and the interviews?

Ahmed Whitt: When we think about the actual quality of the data we’re getting with feedback, there were a lot of lessons—mainly from the advisory council, also from the interviews, to a lesser extent. [There was] kind of a mismatch between how we perceive questions or how we perceive the intent of questions that we ask participants, and how they perceive both the questions and the program at different stages.

I would say, in summary, it is a matter of participants seeing the program as a bit more disjointed than we do as administrators or program staff. So when we were asking questions about particular aspects of the program (their introduction, their initial training, their time on work sites, and eventually their unsubsidized job placement), they actually engage with those different parts of the program differently. When we ask a broad question—“How do you feel about CEO?” or “How do you feel about staff?”—really, they’re only looking at where they are in the moment.

When we think of the validity of that feedback or our ability to extend the lessons from that feedback across the entire program, we needed to learn to be a bit more, let’s say, strategic about how we ask for feedback. And then also get a better sense of exactly what participants are looking for from us at these particular stages.

Leigh Parise: Great. That’s really helpful. Thank you. All right, so tell us a little bit more. How has what you’ve learned affected the way that you are making decisions or any actions that you’re taking at CEO?

Ahmed Whitt: I think one of the biggest changes that we’re hoping to make—we haven’t perfected it yet—is fully integrating feedback into what’s already been a pretty strong data-centered culture. Historically, feedback has been seen as something certainly of import, but maybe running on a separate track than our evaluation work or some of our program improvement activities. And I think there are really creative ways to interlock the two. From our Participant Advisory Councils, as part of this project, I think we found a nice system to get high-quality qualitative data on specific aspects of the program, as opposed to just the daily goings-on at a particular office or some of the really successful Participant Advisory Councils that have been centered around policy initiatives that we’re pushing nationally. 

Previous to this project, if there was a question of, “Hey, should participants be eligible for more transitional employment days or should we think differently about retention?,” we weren’t necessarily pulling together a specific PAC for that purpose. I think in the long run, this project has taught us a new way of having multilayered data on decision-making across the organization.

Leigh Parise: I have to say, that’s super impressive, because that’s really hard to do and clearly you’ve been thinking about this for a really long time. Hopefully you felt like you were well positioned to be able to make that kind of integration. I know that CEO has been a data- and evaluation-focused organization for a long time. What else is on your list when you think about “Here are the additional things that we still want to learn,” or “Here are some of the questions that we’re still asking”? What are the types of things that you’re thinking about?

Ahmed Whitt: Certainly, from the feedback specifically, there are opportunities to improve our system. The data—both the quantitative and qualitative data from this project—has opened up opportunities to better streamline both our collection mechanism and also how our staff interact with our feedback data from participants. On a larger scale, we’re really trying to get a better sense of how to balance the advantages of getting anonymous feedback from participants about sensitive areas—some of which are maybe beyond the scope of what CEO is able to do in a particular region—as we deal with private employers and regional policy.

So anonymous feedback, which we know is rich in certain respects, versus data that we can link with program participation and program outcomes. Because there is value in seeing exactly what the experience is beyond a pain point of this participant in this office, and in understanding how that informs what percentage of the change is internal versus external, or what percentage of the change can come from better supporting our participant versus better training or broadening the perspective of our staff members. So that balance between data quality and the practicality of data is something that we’re also experimenting with. Feedback is such a big part of our identity, but we don’t want it to obscure best practices or what’s going on day-to-day at sites or with the relationship building that our program staff are doing with participants.

Leigh Parise: You’ve talked a bit about the advisory council and I think it would be interesting to hear a little bit more about that. Tell us, how did that come to be, and who sits on that, and how does that actually work to get some feedback from folks who are on that council?

Ahmed Whitt: Sure. Generally, we stand up advisory councils across particular regions or in particular offices. Prior to this project, they were very much centered on getting perspectives—of either CEO alumni or participants that have reached a certain stage of the program—about opportunities to improve the office; improve what we’re doing regionally (with regard to policy initiatives); or what we’re doing in terms of targeting particular employers or particular industries for new opportunities, either for full-time employment or apprenticeships.

So things that were pretty program specific or policy specific. And here, what we did was use that same mechanism to talk specifically about information that we’re gleaning from feedback data. With the MDRC team doing the extensive data analysis of all of our historic feedback data, what we wanted to do is stand up a participatory research model using the PAC, where we’re talking about, “All right—from a researcher perspective or an outsider perspective—here’s what we’re seeing in terms of trends of the data, here’s what we’re understanding about why people are or are not participating in our constituent voice initiative, and also here is how the data connects to the day-to-day goings-on of this particular office or this particular region.”

We really made a concentrated effort to use the PAC for a very specific purpose, as opposed to what we usually do, which is having this running dialogue with participants for three, five, seven months at a time about what’s happening just generally at CEO.

Leigh Parise: Great. Having now done that for a little bit, are there things that you are thinking about doing differently or things that you think, This is a place where we got some especially interesting insights; we were even more excited about being able to have those kinds of conversations than we anticipated?

Ahmed Whitt: For us it’s certainly a doubling down on the idea of a Participant Advisory Council and not taking it for granted that there’s going to be a group of super-engaged CEO participants who are going to give us their honest feedback on everything. I think we learned that, with more thoughtful and intentional recruitment, individuals who don’t naturally have the loudest voices or the highest participation rates still would like to be a part of it, if given the opportunity. We’ve learned a bit more about how to recruit for a diverse set of perspectives and impacts, but I also think there are opportunities for participants to have more of a say on things that we haven’t thought about, maybe opening [them] up for opinion: things such as priorities when selecting a new board member or thinking through candidates for a new board position.

For some of the changes that we want to make, around advanced training for example, sometimes we go straight from idea to pilot and then get feedback. I think there’s an opportunity to start with ideation, simultaneously doing some more thoughtful Participant Advisory Councils and more qualitative data collection—and then going [to] the pilot stage. There’s a lot of opportunity, I think, for experimentation with the PAC model in general.

Leigh Parise: Great. That’s super interesting to get to hear about and makes a lot of sense, I think. All right, so one question: Thinking about people who might be listening to this, what advice might you give to other organizations who are interested in being able to use feedback to inform the decisions that they’re making?

Ahmed Whitt: [A] few bits of advice: One, I would say if you’re not currently collecting feedback, just jump in and don’t worry about perfecting the mechanism for integrating the data into evaluation or even how to match the data with your CRM. I would start collecting the data as soon as possible and then start fine-tuning both your processes and the questions along the way. I think we’ve benefited a lot from data that we had collected, let’s say, 10, 12 years ago, [and] we know the questions weren’t asked in the best way and we had double-barreled questions and all that type of stuff, but the scale of our data helped a lot in fine-tuning the larger project research goals. The other piece I would recommend is really connecting with the larger community of organizations that are doing feedback work.

We’ve been funded by the Fund for Shared Insight and been a part of the Listen for Good initiative for years now, and each cohort of new organizations has brought a new kind of perspective, new insight, new energy to this larger initiative across the social sector around feedback. And I think there are aspects of the MDRC report and aspects of some of the presentations that we’ve done about this project that I can connect directly to another organization—and these are organizations of different sizes and working in different substantive areas. But really it has been a team effort in advancing this work. So as much as possible, I would connect with other organizations—even if they’re not in your area, even if they’re not in your geographic area or your project area—and really just learn from listening to other organizations.

Leigh Parise: I think that’s good advice and some of what I hear in there is “Don’t be afraid; just go out. If this is a thing you think you want to do, it’s okay to just jump in.” But also the questions that you have or some of the challenges that you’re facing, there are probably other people, other organizations out there who have some of those same questions and same challenges, even if they’re not located down the block from you. So being able to connect with those organizations—you can stand to learn a lot from one another.

Ahmed Whitt: Certainly.

Leigh Parise: Do you have any examples of something that you learned that was surprising or that you weren’t necessarily going in looking for, but actually was an insight that you got from the advisory council that now you’re thinking about how that might affect programming or might affect some decisions that you make as an organization?

Ahmed Whitt: Yeah. We learned that we, program-wide, have been underutilizing alumni voices. There were a lot of times—in the interviews with participants or our Participant Advisory Councils—where participants mentioned that, though they appreciated their job coach or their staff member, there were aspects of their [reacclimatization] post incarceration [where] they think they would’ve connected better with either the message or been a bit more motivated to follow through if they had more stories of people who’ve gone through the process (whether it’s the CEO process, whether it’s the job hunt). We previously hadn’t thought about new opportunities to utilize the expertise of former participants outside of our PACs (or Participant Advisory Councils). We thought we had this great model and it was working well—and it has been, but there are so many other ways or other stages of the program that we could also partner with alumni better.

Leigh Parise: Okay. Ahmed, I know that CEO has for years been focused on getting what feedback it can—doing surveys, as you mentioned. I’d love to hear you say a little bit more about why this approach, now. It feels to me like there’s a much more intentional effort to really make sure you’re getting genuine feedback and input and advice from participants in your program. Can you say a little bit more about why now, and whether you’re seeing this across other organizations or across the field more broadly?

Ahmed Whitt: Sure. I do think we’re really in a moment right now for collecting feedback and integrating feedback into decision-making, both from the evaluation and impact perspective but also [from] our day-to-day running of the program or running of programs. I think it comes from two sources. I think participants across programs have been empowered—their voice has been needed and absent for years—and now they’re really leveraging their power and just saying what they want. It would be foolish not to be responsive to that.

The other end, I guess from a more macro perspective: Funders now are really helping to lead the charge, where we’re hearing that impacts aren’t just yea or nay in reaching milestones, but also the experience that participants have during the program. And I think without funders requesting that data or requesting those insights from organizations…I don’t know if organizations would feel like they can invest resources in not just collecting the data but using the data.

So I think we’ve been fortunate, again—from the larger coalitions that we’ve been a part of, many of which [were] led by Fund for Shared Insight—that funders as well as programs are starting to think more seriously about the quality of engagement or the quality of experience of participants through programs, not just their larger outcomes or their longer-term outcomes. And I think it’s kind of a chicken or egg thing; I don’t know if the participants made the funders change or if the programs are catalyzing participants to have louder voices. But all in all, it’s a net positive for [the] social sector and for end users.

Leigh Parise: Yeah, I think it sounds like a change that’s headed in the right direction. I hope that you continue to feel that way and that you continue to get support for that. Ahmed, thank you so much for joining me today. It was really great to have you here and have this conversation.

Ahmed Whitt: Thank you. It’s been great.

Leigh Parise: Thanks so much to Ahmed and Alissa for joining me. Alissa coauthored a brief on putting lived experiences at the center of data science, which summarizes the lessons from this partnership. To learn more about this work, visit mdrc.org.

Did you enjoy this episode? Subscribe to the Evidence First podcast for more.

About Evidence First

Policymakers talk about solutions, but which ones really work? MDRC’s Evidence First podcast features experts—program administrators, policymakers, and researchers—talking about the best evidence available on education and social programs that serve people with low incomes.

About Leigh Parise

Evidence First host Leigh Parise plays a lead role in MDRC’s education-focused program-development efforts and conducts mixed-methods education research.