Testing in fundraising is not for the faint-hearted
It’s not fun being a fundraiser nowadays: depressing trends like declining response rates, high acquisition costs combined with through-the-roof attrition, rock-bottom retention and charity-bashing media… pfff, mission impossible?!
Or, is there still a bright light in the fundraising sky? Sure there is, plenty!
We just have to keep improving ourselves. Watch out that you don’t get sucked into the motionless status quo. We will fight attrition, increase response and retention rates, build pure and genuine supporter relationships through honest storytelling and true engagement, and raise all the money we need to make this world a better place!
It’s probably not so simple either… and therefore, as fundraisers, we test.
We test, test, test and test again. We test the color of the envelope, we test the font, we test the stamp, the salutation, the underlining of key copy messages, the pictures, the opening copy, the closing copy, the sender, the signature, the incentive, the color of the incentive; we test everything!
Some time ago, Julie Verhaar commented on one of my blog posts about the best ingredients for a successful fundraising program:
“As you asked, one thing that might be added is, test, test and test again. Never assume you know if something is going to work, so test and make it better. This is one of the great advantages of fundraising we can test so many exciting things and get a clear result. So pick the best and roll it out!”
I don’t know why I didn’t add testing in the first place, because Julie is right. So, to make up for it, I want to share a thought with you…
What I find striking is that all of the above-mentioned tests take about 1 or 2 months before you have your results. It’s relatively easy if you know what you’re doing. But can you think of a test that takes place over a period of 1 year, or more? If set up well, it can have much more effect than the statistically significant 0.03% increase of your acquisition pack.
I’m not saying you shouldn’t do short-term testing, because it can and will improve your results, but I’m convinced long-term testing will have a much bigger impact.
Let’s face it: most short-term testing aims at donor acquisition, while most of our issues could be solved if we knew how to keep our donors. Testing in long-term donor communication therefore seems much more interesting, but it is not done on such a wide scale. Why not? Is it too difficult? Are there too many outside influences to set up a clean test? Or are we actually not looking further than our own horizon?
Think about it. What if you create two groups of newly recruited direct debit supporters. Group A gets the current supporter communication, which consists of, let’s say, 4 offline newsletters spread throughout the year and a monthly e-news update. Group B gets a totally different set of contact moments: 6 offline newsletters, 2 extra appeals for an extra donation, a weekly action e-mail in which they are asked to participate in some sort of petition or survey and they’ll receive a thank you call just before the annual December campaign. The point is: two completely different approaches.
It is indeed much more difficult to set up. Think about the statistically significant supporter numbers you need to execute such a test, the segments within that group, and the extra communication materials you have to create… But maybe that’s not even the biggest trouble. It’s the strategic choices you make beforehand. What do you want to improve, and how will testing help you?
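To make the shape of such a long-term cohort test concrete, here is a minimal sketch in Python. All names and numbers are illustrative, not taken from any real program: it randomly splits newly recruited supporters into cohorts A and B, and at year’s end compares their retention rates with a two-proportion z-test.

```python
import math
import random

def assign_cohorts(supporter_ids, seed=42):
    """Randomly split newly recruited supporters into two equal cohorts."""
    ids = list(supporter_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

def retention_z_test(retained_a, n_a, retained_b, n_b):
    """Two-proportion z-test on year-end retention rates.
    |z| > 1.96 means the difference is significant at the 95% level."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    p_pool = (retained_a + retained_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up numbers: 1,000 supporters per cohort,
# 70% retained in group A vs. 76% in group B after one year.
group_a, group_b = assign_cohorts(range(2000))
z = retention_z_test(700, 1000, 760, 1000)
```

With these made-up numbers z comes out around 3.0, comfortably above 1.96, so a 6-point retention difference on cohorts of 1,000 would be a real result rather than noise. The same mechanics scale to any pair of communication programs, as long as the split is random and measured over the full year.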
And yes, I know the common understanding among fundraisers is that future supporter behavior is predominantly determined when the supporter signs up or donates for the first time. We all know that the youngest supporter profiles are the most likely to cancel or lapse. That is probably (and hopefully) the reason why we focus so much on testing in acquisition.
But what if we can make an incredible impression in the first year? What if we can counter those attrition rates by also engaging our supporters? Take them on a journey that turns them into lifetime ambassadors! To make that happen you need to be brave enough to test your supporter communication on a larger and longer-term scale.
Again, I’m not saying short- and long-term testing are mutually exclusive! I’d do them both. And I’d still go after the same profile as your most successful supporters.
But don’t just try to recruit those supporters, also retain them.
What do you think?
14 Comments
Rene Bekkers · July 18, 2011 at 15:51
There is a wealth of scientific research available that you can use to design tests of fundraising materials. You can find an overview of the research at http://www.understandingphilanthropy.com.
reinier · July 18, 2011 at 19:16
Hi Rene, that looks very interesting! Can I persuade you to do a (couple of) blog post(s) on your work for our 101fundraising readers? I’ll send you an email with an offer you can’t refuse…
Walter van Kaam · July 18, 2011 at 15:54
Hi Reinier,
I couldn’t agree more! I know for a fact that the real ‘program changes’ are the result of long term testing, not the small short term improvements. But it takes a lot of courage indeed. Because what happens if results are not ‘clear-cut’ after a year or so?
There is a part where I don’t agree with you, although I’m pretty sure it’s not really what you meant, but just the result of how you wrote the post. I don’t believe it’s possible to separate acquisition from retention. In my opinion, retention is the direct result of an acquisition program. If an organization is really capable of engaging donors, instead of persuading them to become a donor, it’s much easier to keep them aboard. So in my humble opinion: ‘Keep focusing on testing in acquisition, but make the evaluations in the long run!’
Walter
reinier · July 18, 2011 at 19:12
I think we’re on the same page Walter :-) Thanks for your comment!
Walter van Kaam · July 19, 2011 at 01:22
You’re welcome!
Daryl Upsall · July 19, 2011 at 10:31
Hi Reinier.
I cannot agree more with your comments: testing and analysing over the long term is a lost skill and discipline in fundraising. 25 years ago we tested everything in DM, for example; 20 years ago, everything in telephone fundraising. Now younger fundraisers change something and call it a test. I ask at conferences if anyone knows what a “control group” is in fundraising testing. I get blank faces.
We have to change this mentality and include controlled testing in F2F, SMS, digital etc.
Ciao from the beach!
Daryl
Owen Watkins · July 19, 2011 at 10:59
Hi Reinier,
Echoing Daryl’s comments above, there is a huge amount of “trying” that currently goes on in fundraising and not so much testing. Squeezing percentage point gains from DM packs was (and is) a combination of art and science, yet on monthly giving programs often we do not apply the same disciplines.
I don’t know whether the greater long-term ROI associated with many monthly giving programs has made fundraisers complacent, or whether the stories told at the fireside by the old chargers are not as interesting as they used to be? Either way, it’s donor money we’re wasting if we are not disciplined with our testing, and I would struggle to justify to donors why we are not using their money as efficiently as possible.
Cheers,
Owen
PS This is the control, some other Fundraising Blog has received the test….
Sarah Clifton · July 19, 2011 at 16:02
Reinier,
Thanks for this post. I’m a big fan of “cohort” testing as well. I’m working right now on ideas to test the impact of various thank you and welcome strategies as well as the long-term impact of a donor visit on retention. I look forward to sharing the methodology and results with readers!
David Cravinho · July 19, 2011 at 16:14
Hi Reinier,
I think that your description of a possible approach to a long-term donor communication test pinpoints some of the problems involved.
In the example above, the number of variables between the communications received by the two groups would make it really complicated to get an accurate interpretation of the results. Sure, you could compare the performance of the two groups over the course of the year and relate ongoing retention to trigger points in the communication schedule. But, even so, it becomes confusing if any gains made in Group B by a positive impact of the 2 appeals were more than offset by a negative impact from the weekly action e-mails. Or if the effect of a less frequent newsletter in Group A is counteracted by the monthly e-news update.
Of course, I agree fully with your point regarding the importance of testing and its role in helping us improve the long-term value of donors. And as we move towards a situation where more and more charities will rely on regular donors for an increasing chunk of their income, this is only going to become even more vital. It’s just that the results you get can be a lot more complex than for a ‘simple’ font test on a DM mailing, which could discourage many fundraisers from embarking on a confusing journey into the unknown…
Cheers,
David
Reinier Spruit · July 19, 2011 at 17:41
@Daryl: thanks for your comment from the beach! I’ll make sure to check your next conference appearance to follow up on your intentions to spread the word on testing! ;-)
@Owen: thank you for your perspective. I agree with you that we have a responsibility towards our supporters to constantly improve our work and efficiency. I personally tend to think it’s just slow evolution that it’s not yet being done on such a wide scale. But if we all keep reminding each other about it, eventually it will happen…
@Sarah: thanks for your offer! Do I hear a new blog post coming up when the results are in?
@David: thank you for your valuable contribution. It’s indeed a tricky business, and such a test needs to be set up very carefully because of the many external influences… When embarking on this journey I would definitely consult an expert or two to be sure that the results you’re getting are understandable and useful. There is nothing more frustrating than inconclusive test results…
Denise Beecroft · July 20, 2011 at 06:54
We have a relatively small database – around 4,200. I know this is a basic 101 question, but would you say if you did a split test of that number it’s still worth it?
regards
Denise
Reinier Spruit · July 20, 2011 at 08:54
Hi Denise,
Glad you asked. The split test could be worth it, but it depends very much on other variables, like the expected response, the desired certainty and the accepted deviation…
Check out this one for more information and an online calculator:
http://www.surveysystem.com/sscalc.htm
Good luck and let me know if it works for you!
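For those who like to see the math behind such a calculator, here’s a rough sketch in Python (Cochran’s formula with a finite-population correction; the function name and defaults are my own, so treat it as a back-of-the-envelope check rather than a substitute for the calculator or proper advice):

```python
import math

def sample_size(population=None, p=0.5, z=1.96, margin=0.05):
    """Required sample size at a given confidence level (z=1.96 ~ 95%)
    and margin of error, with an optional finite-population correction."""
    n = (z ** 2) * p * (1 - p) / (margin ** 2)  # Cochran's formula
    if population is not None:
        n = n / (1 + (n - 1) / population)       # finite-population correction
    return math.ceil(n)
```

For a database of 4,200 at a 95% confidence level and a 5% margin of error, this gives a needed sample of roughly 350, so each half of a 50/50 split (2,100 supporters) is, on paper at least, big enough to measure, provided the expected response isn’t too low.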
Reinier Spruit · July 22, 2011 at 12:20
The Agitator has picked up this blog post:
“But as Roger and I have been harping upon, retention rates suck these days. So I agree with Reinier and his basic point — we need to be thinking about more radical surgery for our donor retention programs.”
Check out: http://www.theagitator.net/communications/the-long-term-test/
Sandra Lippman · August 14, 2011 at 17:53
Appreciate the focus of the test / survey / donor retention post and link to sample size calculator online.
Thank you, Reinier.