Friday, June 26, 2009

Annual Giving Ts

I had written a response to Stephanie this morning (written, as I most often do, in between other tasks on a notepad with a pencil) and then had a discussion shortly afterward about fundraising (thank you, Craig and Elizabeth) that made me rethink what I wrote, so the following is my "new" post for today.

Stephanie's point is perfect for this time of year. I am a "planner," so I hold an annual retreat with my staff in March to review the last 12 months and schedule the next fiscal year. While I do this early, and strongly believe you are better served doing the same next year, that doesn't mean you should not do it now. IMHO, planning of any sort is better than no planning at all, and the net effect will most likely be to demonstrate how much easier it would be if all of the planning were done earlier, resulting in a March meeting next spring.

In or out of the digital fundraising world, this is the single most important piece of annual giving advice that I can dispense: Plan, Plan, (test) Plan. Without a plan, you are simply going nowhere and doing it quickly. With budgets getting tighter and donors more frugal, we need to use our resources more wisely. For most of us this means testing, targeting and thinking - the three "t"s of annual giving. Data mining, data analysis and segmentation will all help to yield more for less.

Generating online engagement takes time and investment to produce results. Consider how long you have been sending mail and how many millions of phone calls your program has made, then consider what you have done online and for how long. Now look at the results from each of the three programs - is what you put in what you are getting out? Creating an online plan and integrating it with the rest of your programs also serves other components of the advancement shop - the alumni office is most likely under pressure to reduce or eliminate printed invites and integration here can play a big role in their success as well as your own.




4 Comments:

At July 2, 2009 at 4:43 PM , Blogger Unknown said...

Hi Scott,

I was reading about your recommendation of testing. I think it would be helpful to share more specific examples of testing with e-solicitations.

Here are some examples of the testing we do. In e-solicitations, we try to evaluate which message content resonates best with our different constituent types. Since we tend to send several subsequent e-mails during a campaign, each e-mail has a different message. We evaluate which message received the highest click and open rates and then think about how we can expand on the most successful messages in the future. We also test other items such as subject lines and imagery.

Can you think of a more strategic way to test the messaging of an e-mail series? Also, please share if you think there are other attributes we should be homing in on.

Thank you!

 
At July 2, 2009 at 9:01 PM , Blogger Scott VanDeusen said...

There are many variables in email fundraising that you can test: time of day, subject line, sender, direct ask, type of solicitation (text, HTML, Flash...), format (letter style with a signer through blog-style posting...), just to name a few.

There are several key components to any good test: a control, a single test variable, and holding all other aspects constant. The way that you are testing gives you some indication of what works, but it does not give you a truly valid comparison between the two messages. To do this more effectively, split your audience into two random but equal segments, each with a different value of the test variable (message, subject line, sender, etc.), send both at the same time, and keep all other variables equal. You can continue to refine this over and over, carrying forward the variant with the higher response rate as one of the two options. Once you establish the most effective variant, you can limit the test portion to 10% of the population and compare percentage response rates rather than absolute numbers, so that you don't reduce the effectiveness of 50% of the population.

In an ideal world (and this is what we should all be working toward), this test would be run on 10% of the total population several days before the solicitation is due to go out, and then the more effective variant would be applied to the other 90% of the population. Does that help?
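The split-and-rollout approach above can be sketched in a few lines of code. This is a minimal illustration, not anything from Scott's shop: the function names, the fixed random seed, and the 10% test fraction are my own assumptions for the example, and the winner is picked by comparing percentage response rates rather than absolute counts, as the comment recommends.

```python
import random

def split_test(audience, test_fraction=0.1, seed=42):
    """Randomly split an audience into two equal test groups (A and B)
    plus a rollout pool that waits for the winning variant.

    The fixed seed is just for reproducibility in this sketch.
    """
    rng = random.Random(seed)
    shuffled = audience[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    half = n_test // 2
    group_a = shuffled[:half]
    group_b = shuffled[half:half * 2]
    rollout = shuffled[half * 2:]   # the ~90% that gets the winner later
    return group_a, group_b, rollout

def pick_winner(responses_a, size_a, responses_b, size_b):
    """Compare percentage response rates, not absolute numbers."""
    rate_a = responses_a / size_a
    rate_b = responses_b / size_b
    return "A" if rate_a >= rate_b else "B"
```

With a 1,000-address list and a 10% test fraction, this yields two 50-address test groups and a 900-address rollout pool; whichever subject line (or sender, or send time) wins in the test groups goes to the remaining 900.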

As far as focus points go, look at what the first decisions are made on: sender, subject and time. I would start with those and work down toward the more complicated components.

 
