Using Data-Driven Strategies for Program Improvement


Government agencies work hard to help the people they serve, whether by helping individuals find jobs or improving family well-being. But despite best efforts, some participants still don't succeed. What are some ways government agencies can improve services and ensure participants remain on the right track?

In this episode, Kate Gualtieri, MDRC’s Director of Strategy, talks with MDRC Senior Fellow Melissa Wavelet, the former director of the Office of Performance and Strategic Outcomes in the Colorado Department of Human Services, about her experience implementing a variety of data-driven strategies that help city and state government agencies meet their goals and improve the lives of the people they serve. They also discuss Melissa’s work at MDRC on the TANF Data Collaborative, a new initiative sponsored by the Office of Family Assistance and the Office of Planning, Research, and Evaluation in the federal Administration for Children and Families, created in an effort to accelerate the use of Temporary Assistance for Needy Families (TANF) data for program improvement and evidence-building at the federal, state, and local levels.

Kate Gualtieri, Host: How can we make government programs more effective and efficient?

In this episode, I talk with Melissa Wavelet, the former director of the Office of Performance and Strategic Outcomes in the Colorado Department of Human Services, who is now a Senior Fellow at MDRC.

Melissa and I discuss her experience implementing a variety of data-driven strategies that help city and state government agencies meet their goals — whether it be figuring out how to get more food or assistance to the people who need it most, or more kids ready for kindergarten.

These strategies are typically referred to as “performance management” or “performance monitoring” or even “outcomes tracking,” but they’re really just fancy words for a very simple idea — how to use data to show whether a program or service is doing well. 

Melissa, thanks so much for talking with me today.

Okay, so first we want to talk a little bit about government improvement. And I'm wondering, Melissa, at a time when government is really under scrutiny and so many of us are wondering what we even get from our city, county, state, or federal governments, can we make it better?

Melissa Wavelet, MDRC Senior Fellow: Most definitely, yes. I am a firm believer that there are many people like me working in government who are committed to making it more efficient and more effective. And, in fact, if you look around, there's a wide variety of strategies that government professionals are using to make it better.

One that's increasingly common is the use of performance management, or performance monitoring, or outcome tracking — there are a lot of different ways to describe it. But at its essence, what it looks like depends on a variety of things, and, I would argue, most importantly on who's leading the organization, how they lead, and the culture of learning and curiosity they're helping to create.

So, in some states that means a dashboard, or regular performance meetings, or some type of report card or scorecard, while others use performance-based budgeting and contracting. I would argue there's just no one way for a leader or manager to make their organization better. So, it can mean a lot of things, like getting more money, or getting a clean audit, or getting policies approved. But what's most important, I think, for all of us is that we stay focused on getting better outcomes for the customers and the families that we're serving.

Gualtieri: It's really refreshing to hear that not everyone thinks the government is beyond repair — so thank you for that.

I was wondering, when you think about your career, how have you approached improving government?

Wavelet: Sure. I've had the privilege of working in New York City government under Mayor Michael Bloomberg, and then in Wisconsin and Colorado state governments. My primary job in each of those agencies was to figure out how to get more people to work, or more people the food and cash assistance they so desperately needed, or more kids ready for kindergarten and safe in their homes.

To do that, I was on teams that believed firmly in data as the path forward — data serving as a kind of flashlight to illuminate dark corners or dark rooms.

While I worked at New York City Small Business Services, we developed a data system that collected what mattered most to job seekers, both unemployed and underemployed New Yorkers, as well as to employers, and used that data to create strategies that increased the number of New Yorkers going to work.

In Wisconsin, it was a bit different. It was in the Department of Children and Families, which had just been created in 2008 by then-Governor Doyle. Executive leadership at the time adapted a strategy that historically was used mostly in law enforcement, called CompStat. I was asked to create a softer, gentler version of CompStat, and we actually called it KidStat. Imagine, instead of precinct commanders behind a podium asking their sergeants about crime statistics, we had our executive director — who's a social worker — asking his managers about child welfare and child care statistics.

The common thread through these conversations was that they were based in data and tried to show whether the services or the programs were performing well, regardless of who was delivering them. So, it could be a state employee or a cop, right? Or a nonprofit, or a county. We'd ask a lot of questions month to month, with performance shown on a line graph or a bar chart, and then we'd say, "Okay, what's going on here? Why is it getting better? Why is it getting worse?"

And then I had the opportunity to move to Colorado — again in the Department of Human Services, with many of those same programs. They were already operating a stat approach they called C-Stat, and it had been running for two years by the time I arrived. My goal was to strengthen that foundation, and one of the things I was able to do was incorporate research and evaluation evidence into the stat process. And, in fact, I was a consumer of MDRC studies and a user of the evidence it had generated over the last few years.

Gualtieri: So that's really great. If you had to break it down for us, what are the essentials or, say, the key components from all these experiences?

Wavelet: To be honest, if you're building or refining a performance stat approach, or choosing a stat approach as the path you want to walk down, I strongly recommend a book written by Dr. Behn at Harvard University. And this is a shameless plug, but it remains the book that I go back to over and over again when someone asks, "What are the key principles or the common principles in doing this right?" He has compiled those key strategies, starting with the New York City CompStat example and then bringing it up to date. And I'll give you the shorthand version of those common principles. There are about four of them. One is figuring out what matters most to you, your governor, your mayor, whomever your customers are, and defining success for your program. Then, second, collecting the data that enables you to measure that success and understand why you may or may not be achieving it.

So, one, what matters most. Two, how do I know whether I'm achieving what matters most. Three, talking about the data, the measures and those targets or goals — if you set some, am I getting closer to those goals or targets? And if not, why not? And fourth, you just keep doing it, and keep doing it. And continuous improvement implies exactly that — continuous — and you may hit your target, and you may even then adjust your target upward or downward if you still have hope for more improvement.

Stepping back, the way that I think about these activities I've just described — collecting data, analyzing data, making decisions based on data, and then acting on that — is that they're routines. And typically we want to institutionalize routines if we think they're valuable, if we think they're effective. Institutionalizing these routines is possible with some discipline, perseverance, and probably a little bit of luck — and they may continue over time and may even become sustainable for your organization.

Gualtieri: Perfect, thank you. So, tell me: When you think across all of that, do you have key lessons from the work that you've been doing?

Wavelet: Do I have lessons, Kate? How much time do you have? Of course, of course….

Here are a few, from my experiences in these different settings over the last 15 years. First and foremost, I've learned that government, especially those services for our most vulnerable families, can indeed get more efficient and more effective. As I wrote in the C-Stat case study that IBM published, there were over 203 C-Stat measures across seven years of operating C-Stat, and about 130 of them demonstrated some sort of progress over that time period.

So that's about a 64 percent improvement rate. And I really have no reason to believe that other organizations can't do the same thing with their data as they try to learn how to deliver better services. So that's one lesson: government can get better. There are other lessons I could tell you, but why don't I shift and tell a story to illustrate some of them? I'll pick just one, about child care, since there are many families out there where moms and dads are going to school and going to work, and they want their children to be in safe environments — and they'd also like them to learn something while they're there.

State governments can use Temporary Assistance for Needy Families funds, the cash assistance in the federal welfare program. So, the good news is that money can go toward child care subsidies, to help offset the costs that moms and dads have to pay. The research around child care tells us that kids who receive child care in higher-quality settings are more likely to be ready for kindergarten. So, the goal here, most obviously, should be more kids with high-quality providers — so that there are more high-quality providers and more kids going to them.

Gualtieri: Totally makes sense. So, tell me: What did the data tell you?

Wavelet: Well, a few things. The first was that we didn't have enough high-quality child care. This is not unique to Wisconsin or Colorado. The second thing we learned was that in some places we didn't have enough licensed care — forget about quality — just care meeting minimum safety compliance expectations. And third, in some places we didn't have any care at all.

So, we started referring to those places as child care deserts. Some of you may have heard of food deserts — it's a similar concept. That data discovery was then confirmed when we talked to families and our stakeholders in those locations throughout Colorado — indeed, there was no care at all.

Gualtieri: All right, so I got it. So, you knew what you wanted and then you looked at the data to see what it would tell you, and then what did you guys decide to do?

Wavelet: Well, this is where it gets fun. You could decide to increase the number of providers, or licensed providers, or high-quality licensed providers, or all three. And, in fact, we went after getting more licensed, and preferably high-quality, providers developed in these areas. So, it was a "wildly important goal" for several years. If any of you are fans of the Four Disciplines of Execution, that's where that concept comes from. And while it was a goal, we tried a lot of different things; not all of them worked, but the data was our guide for understanding whether or not we were getting more kids into high-quality care.

From 2014 to 2018, the number of kids in high-quality care nearly doubled. We were grateful at the time because we didn't do this alone from state government; we did this in partnership with our 64 counties in Colorado, which administered the child care subsidy, and with the early childhood councils, which were responsible for developing more providers, and high-quality providers at that. So think of it as both the supply side and the demand side of the market.

Gualtieri: Any other lessons you could share with us?

Wavelet: Sure. Obviously, making a difference in our clients' or our customers' lives is why most of us get up in the morning. It's the most satisfying part of this work, but I also witnessed some organizational benefits to doing this. There are at least three dynamics that I was part of and saw taking place in the organization. The first is better data. If you use your data, it will get your attention, it will get your partners' attention, and it can often result in better-quality data: better reliability and better accuracy. Now, this doesn't happen naturally; it takes work. But when you pay attention to the data, there's more interest in its being accurate.

Which leads me to the second dynamic that I witnessed, which is increased accountability. If you get people in a room, or on the phone, or on video and talk about whether progress was made towards the goal or target each month, or why the progress wasn't made, that naturally creates a sense of ownership and urgency and even a sense of pride among staff.

And the third dynamic is that a culture committed to continuous learning and improvement can naturally result. When an organization, across all of its staff levels, wants to do a better job, that means that everyone in some way is willing to learn. And getting better inevitably requires change, and lots of trying, and some failing, and more trying, and an ongoing willingness to ask the curious and usually hard-to-answer questions. But if you follow the data, it can help you figure out what you know and what you don't, and it can help inform what to do next.

Gualtieri: All right. So, you're now here at MDRC and you're working on the TANF Data Collaborative, or the TDC. It's an effort to assist states and territories in using their TANF and other administrative data to make more informed decisions about how to serve their clients.

Wavelet: Yes, all true. I'm really energized by this opportunity. The TDC clearly builds on my work in state government and gives me the chance to keep working on improving government — just from the outside. And this project brings together some of the smartest thinkers on this topic. Our partners are Chapin Hall at the University of Chicago, the CUSP Institute at New York University, and Actionable Intelligence for Social Policy at the University of Pennsylvania — so several very high-powered schools. And it's funded by our federal partners at the Administration for Children and Families so that any state or territory across the country can benefit.

And the last shameless plug or shout-out I'll give is to our brand-new website: www.tanfdata.org is a great place to learn more about this specific effort.

Gualtieri: So I am imagining that each state TANF agency must be so different. How do you figure out what they all need?

Wavelet: That is very true. If you think about how different each of the 50-plus states and territories is, you can imagine that although they follow the same federal rules, there are different populations, different employers, schools, and colleges, and different levels of poverty. This means that any TANF data shop's capacity to use, analyze, and translate its own data also varies. So to understand the needs across this kind of variety, our partners at Chapin Hall at the University of Chicago conducted a needs assessment survey of all states and territories.

Gualtieri: Ah, we love a good survey.

Tell me, what did the survey tell you about the challenges that most agencies are facing when they try and harness all that data?

Wavelet: According to the survey respondents, the top four barriers to conducting data analysis are: number one, staff time — not enough of it.

Number two, availability of technologies and tools. The third is staff skills for technology and analytics. And the fourth is availability of financial resources.

Speaking from my professional experience, I agree with all of those answers. We didn't uncover anything surprising in the survey about what states and territories need. What is surprising, though, perhaps, is what's missing from the list: a lack of data. No one said the barrier is that there isn't enough data. Now, it might not be the right data, it might not be organized in the right way or complete, but there are oodles of it. Yes, I did say oodles, and that is a technical term I learned here at MDRC.

Gualtieri: So, you talked a little bit about the TDC and MDRC and our partners. Hearing all of these challenges, what insights do we bring?

Wavelet: This is really one of the best parts of this initiative. I mentioned the partners before and they are all really good at what they do, including MDRC. But let me just make some of these connections more concrete.

Let's say you work in a TANF shop. Pick any state that you like. Imagine that you're in that TANF shop and you're working on how to integrate the TANF data with the state's employment data. Our partner at the University of Pennsylvania, Actionable Intelligence for Social Policy, brings real-world experience from many jurisdictions in setting up the structures and processes for sharing data while protecting privacy and confidentiality — so that TANF staff have access to both employment data and TANF data.
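To make that first example a bit more concrete, here is a minimal sketch of one common privacy-protecting linkage pattern: matching the two files on a salted, one-way hash of a shared identifier rather than on the raw identifier itself. Everything in it (the file names, the field names, the salt, and the pandas-based approach) is a hypothetical illustration under assumed data layouts, not the actual AISP, state, or TDC process.

```python
# Illustrative sketch only: link TANF and wage records on a salted hash of a
# shared identifier so the analysis file never carries the raw identifier.
# File and field names are hypothetical; real state systems will differ.
import hashlib
import pandas as pd

SALT = "value-agreed-under-a-data-use-agreement"  # hypothetical shared salt

def hash_id(raw_id: str) -> str:
    """Return a one-way hash of the identifier plus the shared salt."""
    return hashlib.sha256((SALT + raw_id).encode("utf-8")).hexdigest()

tanf = pd.read_csv("tanf_cases.csv", dtype=str)        # hypothetical extract
wages = pd.read_csv("ui_wage_records.csv", dtype=str)  # hypothetical extract

for df in (tanf, wages):
    df["hashed_id"] = df["ssn"].map(hash_id)  # hash the shared identifier
    df.drop(columns=["ssn"], inplace=True)    # drop the raw identifier early

# Keep every TANF recipient and attach any matching quarterly wage records.
linked = tanf.merge(wages, on="hashed_id", how="left")
```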

A second example: once those data are integrated — or even just looking at your TANF data — you probably realize it's not ready for analysis. What I mean is, having the raw data is like having the raw ingredients in your kitchen cupboard; you may still need a recipe to make the dish. Similarly, our Chapin Hall partner has developed a TANF data model — think of that as the recipe that states can use to organize their data so that it's ready for analysis.
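As a hypothetical illustration of what "organizing the data so it's ready for analysis" can involve: raw extracts often arrive as one row per case per month, with duplicate and corrected records mixed in, while many analyses want one clean row per person per month. The sketch below is not the Chapin Hall TANF data model; the column names and recodes are made up simply to show the kind of cleaning and reshaping such a recipe standardizes.

```python
# Illustrative sketch only: turn a raw case-month extract into a tidy
# person-month analysis table. Column names are hypothetical.
import pandas as pd

raw = pd.read_csv("raw_tanf_extract.csv", dtype=str)  # hypothetical extract

# Standardize types and derive a simple receipt flag.
raw["benefit_month"] = pd.to_datetime(raw["benefit_month"], format="%Y%m")
raw["received_tanf"] = raw["payment_amount"].astype(float) > 0

# Keep only the latest correction for each person-month, then tidy up.
person_month = (
    raw.sort_values("extract_date")
       .drop_duplicates(subset=["person_id", "benefit_month"], keep="last")
       .loc[:, ["person_id", "case_id", "benefit_month",
                "received_tanf", "county_code"]]
       .sort_values(["person_id", "benefit_month"])
       .reset_index(drop=True)
)
```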

Okay, so third. Having the data isn't enough — it is a necessary step but it's insufficient for learning — you only learn from it if you're able to analyze it and then apply it to decisions. And that brings me to what MDRC can offer, which is 40-plus years of experience analyzing these large, complex data sets and figuring out how to use the data to inform decisions that improve services.

Gualtieri: Thanks to Melissa for joining me today. To learn more about the TANF Data Collaborative, visit tanfdata.org.

Did you enjoy this episode? Please subscribe to the Evidence First podcast for more.

About Evidence First

Policymakers talk about solutions, but which ones really work? MDRC’s Evidence First podcast features experts—program administrators, policymakers, and researchers—talking about the best evidence available on education and social programs that serve people with low incomes.

About Leigh Parise

Evidence First host Leigh Parise plays a lead role in MDRC's education-focused program-development efforts and conducts mixed-methods education research.