Sharing What We’ve Learned: Evaluation Spending

“We are conducting a scan of foundation practices so that we can inform our own efforts about… Would you be willing to talk with us about your foundation practices?” Fill in the blank. It might be about foundation strategy development, due diligence practices, grant monitoring, grantee relationships, board materials, evaluation practices, organizational learning approaches, and the list goes on. Several times each week, I receive this kind of request from other foundation colleagues or from the consultants they hire. I have also made these calls myself and commissioned many such scans from consultants. The results of these efforts can be useful and informative. They can give us new ideas and useful benchmarks.

The problem is that these scans are rarely shared. There are lots of reasons given for this: “it was just a quick scan,” “it was just for internal purposes,” “it would take too much time to verify all of the information and we just wanted to get a directional sense of the field.” And so forth. All of these are real reasons. I’ve even used a few of them myself over the years. But I’ve come to think that it is a bad habit we have developed in the foundation world and that we all lose out because of it. We lose the opportunity to accumulate knowledge, to benchmark practices, and to catalyze dialogue about how foundations work and why.

I am trying to break the habit. I am going to try to share the information I gather in the scans I conduct or commission at the Hewlett Foundation, beginning with this brief scan we conducted to benchmark spending on evaluation. Last year, our Board asked how much we should be spending on evaluation. It was a reasonable question. As I was preparing to answer it, I wanted to draw on the latest benchmarking data for evaluation spending. Only there was none. The last published spending benchmark was several years old, published by the Evaluation Roundtable in 2010 using data from 2009. And the evaluation world was changing rapidly. Many more foundations were building evaluation functions, and I wanted more recent data.

So I conducted my own brief scan, contacting colleagues who lead strong evaluation functions and asking them about their spending levels. I incorporated those benchmarks as points of comparison, folded them into additional analysis that we conducted with the Hewlett Foundation’s own data, and prepared a memo to answer our Board’s question. At our November 2013 board meeting, we discussed how much we should be spending on evaluation, and the Board endorsed our recommendations.

I am sharing a distillation of this memo. The memo is not perfect. The scan is not comprehensive. My colleagues offered all sorts of caveats about the information they provided. But it was useful for us. And I share it now in case it is useful for others.


About the author(s)

Director of the Effective Philanthropy Group
William and Flora Hewlett Foundation