I ran a session this month with The Philanthropy Workshop, the flagship donor education programme developed by Rockefeller Philanthropy Advisors and run in the UK by the Institute of Philanthropy. It’s one of numerous efforts by advisors, private banks and universities to make donors better.
Do they work? There are two major problems with finding out.
First, the quality of a donor’s decisions is normally invisible to the donor. For instance, if they choose a programme which supports one child with the $100 they give, whereas an alternative programme could have supported 25 children for that money, the ‘missed opportunity’ felt by the other 24 children isn’t felt by the donor. [These are real numbers, from this work to increase school attendance in India.] The donor gets the same warm glow and cheery photos either way. Hence bad decisions get made all the time, and since the donor doesn’t even know they’ve made a bad decision, there’s no mechanism for them to learn and improve. This itself is a giant topic.
The other problem is defining what a good donor is. Donors vary widely in their goals (improve education, provide bereavement care, reduce deforestation, to name a few), so comparing the end results they enable would be tricky indeed.
Now, we can take a leaf from the book of J-PAL, the research institute at MIT which studies the effectiveness of various approaches to alleviating extreme poverty. J-PAL doesn’t look at the mega-questions – whether aid breeds dependency or spawns poor governance, for example, which are rather ideological. Rather, it looks at discrete, answerable questions about practical matters: how can we get these children immunised, what would get these children to come to school, what if anything should we charge for anti-malarial bednets in Kenya?
By analogy, we could define some specific characteristics of good giving, and measure whether they’re affected by donor education. One such would be making unrestricted gifts: because almost invariably, money achieves more if it is given without restrictions.
So how can we measure whether donor education increases the propensity to make unrestricted gifts? Here is one bad way: just monitor the proportion of attendees’ gifts which were unrestricted before and after the course. This is bad because it wouldn’t indicate whether any observed change was due to the course: donors might have been influenced by, say, media coverage about giving. This method would leave us with no idea why any change occurred. Another bad way is to compare the proportion of unrestricted gifts made by people who’ve done the course with the proportion made by people who haven’t. This is no good because it’s not hard to imagine that the kind of donors who elect to do a course differ in some meaningful ways from donors who don’t. That is, we’d have a case of selection bias, which again gives us no idea whether any observed change was attributable to the course.
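To see how sharply selection bias can mislead, here is a tiny simulation. Every number in it is an illustrative assumption of mine, not real data: I assume donors who are already inclined toward unrestricted giving are more likely to sign up for a course, and I give the course itself zero effect – yet the naive comparison between course-takers and non-takers still shows a large gap.

```python
import random

random.seed(0)

donors = []
for _ in range(10_000):
    # Assumed: donors already inclined toward unrestricted giving
    # are more likely to sign up for a course.
    inclination = random.random()            # 0 = never unrestricted, 1 = always
    takes_course = random.random() < inclination
    # The course has ZERO effect in this simulation:
    # the share of unrestricted gifts is just the prior inclination.
    unrestricted_share = inclination
    donors.append((takes_course, unrestricted_share))

def mean_share(took_course):
    shares = [s for t, s in donors if t == took_course]
    return sum(shares) / len(shares)

print(f"course-takers: {mean_share(True):.2f}")   # roughly 0.67
print(f"non-takers:    {mean_share(False):.2f}")  # roughly 0.33
```

The course-takers look far more ‘unrestricted’ than the non-takers even though the course did nothing at all – that entire gap is selection bias.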
A rather better way is a kind of randomised controlled trial – the rigorous ‘gold standard’ test used in drug trials. It’s not complicated. Here’s what we’d do.
First, we’d talk to all the people who apply to come on a donor education course and ask them about the proportion of their grants in the last, say, year which had been unrestricted. Second, we’d randomly select from those applicant donors the set who would do the course. Third, after the course (or maybe a year after the course had finished), we’d again survey both sets about the proportion of their grants which had been unrestricted.
Voila. We’ve got a control group (the set who didn’t do the course) so can see what changes would (probably) have occurred anyway in the group which did the course; and we’ve got rid of the selection bias problem by choosing our course group at random from people all of whom had applied to be on the course.
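The three steps above can be sketched as a simulation. Again, every number is a made-up assumption: I posit, purely for illustration, that the course truly adds 15 percentage points to a donor’s unrestricted share, and that everyone also drifts a little for unrelated reasons (media coverage, say). Because assignment is random, comparing the two groups’ changes recovers the course effect.

```python
import random

random.seed(1)

# Step 1: baseline survey of all applicants -- share of grants unrestricted.
# (Assumed baseline rates between 10% and 50%.)
applicants = {f"donor_{i}": random.uniform(0.1, 0.5) for i in range(200)}

# Step 2: randomly assign half of the applicants to the course.
names = list(applicants)
random.shuffle(names)
course_group = set(names[: len(names) // 2])
control_group = set(names) - course_group

# Step 3: follow-up survey a year later. Here we simulate the outcomes:
# everyone drifts a little for reasons unrelated to the course, and the
# course adds 0.15 -- an assumed effect size, chosen for illustration.
TRUE_COURSE_EFFECT = 0.15
follow_up = {}
for name, baseline in applicants.items():
    drift = random.gauss(0.05, 0.02)
    effect = TRUE_COURSE_EFFECT if name in course_group else 0.0
    follow_up[name] = min(1.0, baseline + drift + effect)

def mean_change(group):
    changes = [follow_up[n] - applicants[n] for n in group]
    return sum(changes) / len(changes)

# The control group tells us what would have happened anyway,
# so the difference in changes estimates the course effect.
estimate = mean_change(course_group) - mean_change(control_group)
print(f"estimated course effect: {estimate:.2f}")
```

The background drift shows up equally in both groups and cancels out, so the estimate lands close to the assumed 0.15 – which is exactly what the before-and-after method on its own could never disentangle.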
Has this ever been done? I’d be fascinated.
When we hosted our first donor education seminar, we performed a simple before-and-after survey, using clickers, to see how much knowledge our audience brought with them to the seminar and how much they learned because of the seminar. We posted our results here: http://www.impact.upenn.edu/images/uploads/CHIPSeminar2010_FinalResultsLessons.pdf.