Now I’m worried – who decides what is effective and who should be funded?
I went back to www.GivingMarketPlaces.org today to see if they had answered my question about the data used in the report I mentioned a few days ago, The Nonprofit Marketplace: Bridging the Information Gap in Philanthropy. In the way the web works, I found myself on the TacticalPhilanthropy Blog, which mentioned that the William and Flora Hewlett Foundation, which partnered with McKinsey & Co. to produce this report, had funded an organization called GiveWell.
From what they say on their website, GiveWell was founded and is staffed by some former hedge fund managers. They have set themselves up as an “independent evaluator” to do “detailed analysis” of nonprofit organizations and then to recommend to donors whether or not to give to those organizations.
GiveWell says they reviewed 136 nonprofits and only 4 came highly recommended! I was absolutely amazed by the list of organizations that were “Not Recommended” for donor giving including the American Red Cross, UNICEF, Technoserve, and the Girl Scout Council of Greater New York.
It appears that the primary reason most organizations were “Not Recommended” was that they didn’t give GiveWell the right kind of information. It doesn’t surprise me that an internationally respected organization such as UNICEF might not want to spend hundreds of hours gathering large amounts of data to provide to two recent college graduates with undergraduate degrees in social studies and religion (the staff of GiveWell).
I’m just amazed at the hubris of GiveWell, an unknown, self-described nonprofit outcome evaluator, in believing that it has the credibility to issue the ratings it has and, by doing so, to tell donors not to give to these organizations. If this is the quality of analysis we can expect from the type of “intermediary” organization being promoted by the Hewlett study, then it is hopeless to expect any real conversation and sector-wide learning about what works and what doesn’t.
I was very interested to see your blog posting about GiveWell, as it has concerned me for some time to see the growing push to “evaluate and rate” nonprofit effectiveness. CharityNavigator is another example. I understand the impetus but find the application a bit like applying a for-profit business template to an undertaking that does not measure outcomes in the same quantitative way. I don’t have a problem with transparency (Guidestar), but the rating piece seems flawed. Have you seen others getting into this too?
Liz, Thanks for your comment.
These attempts to develop systems for measuring nonprofit effectiveness are a well-motivated response to the limitations of a finance-only ranking like CharityNavigator.
As the report I mentioned indicates, there is a movement to create these intermediaries. While I welcome an ongoing conversation about how effective any particular nonprofit is, I’m extremely worried about the CharityNavigator equivalents for rating societal outcomes. Who gets to be the “evaluator” and on what basis will they be making those decisions? What time frame are they using? What about the web of nonprofits that contribute to a particular outcome — how does this synergy get measured?
My challenge to these intermediaries is to start by trying to create indicators for the hardest-to-measure causes: the arts, the humanities, all prevention-related concerns, advocacy organizations, etc. Since these organizations and their supporters, as well as the best minds in academia and government, have been struggling for decades to identify the right measures, I’m absolutely dumbfounded that a foundation as respected as the William and Flora Hewlett Foundation would endorse, by dumping tons of cash on it, an organization like GiveWell, which, to me, demonstrated a superficial and arbitrary approach to evaluation in its recent rankings.