By DUANE M. ELLING
Evaluation activities traditionally have been used by the country’s nonprofit sector to show that resources are being used wisely. But they also can be powerful tools for helping practitioners, funders and policymakers better understand the impact of specific strategies on social concerns. That knowledge can spark innovative thinking about new approaches to effectively and efficiently addressing the nation’s most pressing problems.
Nadya K. Shmavonian is president of Public/Private Ventures, a nonprofit social research and policy organization based in Philadelphia, and a longtime Mott Foundation grantee.
In this Mott Conversation, Shmavonian discusses the critical role of evaluation in designing, testing and sharing high-quality solutions that help low-income communities create meaningful and lasting change.
[Editor's note: This is an edited transcript of a video conversation with Nadya K. Shmavonian, president of Public/Private Ventures. Click to view the video.]
Mott: How can the nonprofit sector best approach evaluation activities to ensure that they lead to genuine learning and innovation?

Shmavonian (NS):
Evaluation is not a thing so much as it is a way of thinking. Frankly, I believe that if you are not asking what you want to learn from a given intervention or program at the beginning, then it isn't structured as strategically or thoughtfully as it could be. And if evaluation is not a shared value with grantees - who are genuinely trying to advance their own knowledge and best practices - then the attention paid to evaluative information, and the use made of it, can be limited, and there is no imagination for the next time.
In a place where you have continuity of programs, where you know that you’re going to be in it long term, where you’re field-building and advancing, there’s a sense of history and of how you can work evaluation in at the front end, but also at the back end. I think there are too many situations where foundations and their grantees have not reaped the collective benefit at the back end to reinforce that.
But, certainly, if you don't do evaluation, then enormous resources are wasted. You never systematically build an evidence base. What I also think is quite important is that in a good evaluation you can pull apart the component elements - and the data will surprise you in many instances - and then ask the next-generation questions: "Okay, so why did that work?" From there, you can think about the next intervention to advance that knowledge.
Particularly in these times, which are so resource-constrained, we should be focusing on these issues and getting smarter about what it means to evaluate, grow and potentially even replicate programs. But I worry that in the press for resources, we are actually limiting our targeting of thoughtful evaluative inquiry.

Mott: What are some current trends in research and replication of programs designed to help low-income and marginalized people?

NS:
I think at this moment in time we have a real opportunity to advance replication science and to learn from data through good evaluative thinking. There is a strong discussion and discourse currently, both in the academic environment and certainly in the federal government, about how we can identify programs that can move up the continuum of evidence: from promising to moderate to strong, and then to actual demonstrated impact.
The concern is that, yes, ideally everyone can move through that innovation cycle, and eventually there will be some interventions whose full impact can be measured and studied. But there is a whole continuum of organizations that are doing good work, that have good data systems and that are doing some innovation around their own evaluative thinking, and they may get missed in this. There may also be a drive toward a one-size-fits-all approach.
We want to look creatively not only at how we can contribute to the full "gold standard" of evaluative design, but also at how you identify the measures of organizations that are successfully delivering their programs and targeting key populations. We want to work with them to make their programs more effective and to inform refinements to strategies based on implementation knowledge and actual program experience.

Mott: In describing P/PV's approach to evaluation, you've noted the "importance of combining the power of scientific inquiry with the nuance of art." Could you elaborate on that point?

NS:
A good evaluation professional working within a foundation or in partnership - we at P/PV are external evaluators - is going to stay rigorous and truthful about the data and research. You can never compromise that, and we make sure that we have done the hard work and designed the right intervention to provide truly valuable results with integrity. And we have to be willing to share those findings, no matter what they show, even if there is an unfortunate failure of a given part of a program or of an entire investment.
But how you make that data "live" and how you make it accessible to people who can then work with it is a really important challenge. Some people treat evaluation like a report card, as opposed to ... approaching it as the only way you are going to succeed. You always have your eye on the prize, in terms of what you are trying to achieve and what you need to learn to do it best. You are always thinking, "What are the next-generation questions that are going to help us refine this or get even smarter about a particular intervention?"
And you need the patience to do that. The evaluators - often a team of people - can help to advance and create. That, to me, is the dance.
I think foundations bring a tremendous hunger to learn and to structure evaluation where appropriate, all the way up to randomized controlled trials. But they also understand that there are places where you just feel and touch, and where you work with people who can help you think about the next place to advance the work.