Why Do Marketers Test?

Jim Novo recently commented that few online marketers deploy testing the way it’s often done in the offline world. Jim speculates that the reasons for this include cultural issues and a lack of ideas about which meaningful tests to conduct. For me, this raised a more fundamental question:

“Why do marketers test in the first place?”

You could argue that the answer is simple: to increase the effectiveness of marketing programs. But I think there’s another side to that coin: to improve the efficiency of marketing programs.

In the “old” world of direct marketing, where direct mail costs are significant, marketers test to determine who not to mail to. But in the online world, where the incremental cost of sending out one more email is practically non-existent, suppressing marketing messages is less of an issue.

So why should online marketers bother to test? If online marketing response rates are higher than direct mail response rates, and there’s little opportunity to reduce campaign costs, then marketers have little incentive to test.

The distinction — and balance — between effectiveness and efficiency is subtle. But marketers who recognize that their testing approaches have been focused more on efficiency than on effectiveness will realize that they’re missing many opportunities to create and execute a strategic test-and-learn agenda, one that not only improves effectiveness but also drives marketing strategy.

The notion of a test-and-learn agenda isn’t new. But many database marketers’ agendas are undeveloped, underdeveloped, or misguided. Often, testing plans are focused on short-term, tactically oriented questions like whom to mail and which messages work better than others.

There’s a bigger opportunity here: to test in ways that help answer more strategic marketing questions like these (a rough sketch of one such test follows the list):

  • What number of touches is best for which customer segments?
  • How can a sequence of messages help lift response?
  • How does time between touches affect response and conversion rates?
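
To make the first of these questions concrete, here’s a minimal sketch, in Python, of how a touch-frequency test might be read: split a segment into cells that receive different numbers of touches, then check whether the difference in conversion rates is bigger than chance would allow. Every count and cell size below is invented for illustration.

```python
# A minimal sketch of reading a "number of touches" test: two hypothetical
# cells (3 touches vs. 6 touches) compared with a two-proportion z-test.
# All counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Is the gap between two conversion rates larger than chance?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return p_a, p_b, z, p_value

# Hypothetical cells of 5,000 customers each
p3, p6, z, p = two_proportion_z(conv_a=240, n_a=5000, conv_b=290, n_b=5000)
print(f"3 touches: {p3:.1%}   6 touches: {p6:.1%}   z = {z:.2f}   p = {p:.3f}")
```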

The opportunity to be more strategic with testing isn’t limited to marketing effectiveness. In many firms, it falls on the shoulders of market research to answer questions about consumer behaviors and attitudes. But far too often, market research is burdened with addressing tactical issues. Database marketers can step in here and devise tests to help understand:

  • Which customer behaviors correlate most closely with response and conversion, and help define an “engaged” customer?
  • What is the optimal spend per customer to increase customer engagement?
  • What is the impact of increased customer engagement on profit per customer?

Database marketers — or online marketers, for that matter — won’t be able to answer these questions if their testing approaches are limited to figuring out who not to touch. Or if they don’t test at all. Marketers need a new mindset about testing, one that breaks out of the confines of campaign-centric ROI test and measurement.
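
As a sketch of what breaking out of campaign-centric measurement could look like, here’s a small Python example (again, with invented numbers) that contrasts a campaign metric, response rate, with a customer metric, incremental lift against a holdout group:

```python
# A minimal sketch of reading a campaign against a holdout (control) group.
# The campaign metric looks at responders alone; the customer metric asks
# what the touch added beyond what the control group did on its own.
# All numbers are invented for illustration.

def incremental_lift(test_buyers, test_n, ctrl_buyers, ctrl_n, profit_per_buyer):
    test_rate = test_buyers / test_n     # campaign metric: response rate
    ctrl_rate = ctrl_buyers / ctrl_n     # baseline: buyers with no touch at all
    lift = test_rate - ctrl_rate         # customer metric: incremental rate
    incremental_profit = lift * test_n * profit_per_buyer
    return test_rate, ctrl_rate, lift, incremental_profit

# Hypothetical: 50,000 mailed, 5,000 held out, $40 profit per buyer
t, c, lift, profit = incremental_lift(2500, 50000, 200, 5000, profit_per_buyer=40)
print(f"test {t:.1%}   control {c:.1%}   lift {lift:+.1%}   incremental ${profit:,.0f}")
```

A 5% response rate looks healthy on its own; the holdout reveals that only a fifth of those buyers were actually incremental.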

8 thoughts on “Why Do Marketers Test?”

  1. Great thought, Ron. I second it.

    There is a lot that data can say, and finding whom to mail (and whom not to) is only a small part of that horizon. Many analyses can be done: efficiency analysis, sensitivity analysis, trend analysis, behavioral analysis, and so on. Response modeling says something, but it is completely tactical; using it for strategic moves may backfire.

    I wrote about how data analytics helps marketers and how they should look at business problems. You may want to take a look; it is available at:

    http://analyticsbhups.blogspot.com/2007/06/marketing-is-not-art.html

    — Bhupendra

  2. Ron, good examination of testing. Thanks.
    Here are a couple of additional testing questions to consider when designing a test strategy that go beyond deciding who NOT to market to:
    1. What are the best channels to use to reach an individual (customer or prospect)? How does this person prefer to hear from me and/or talk to me?
    2. Should I consider modifying my offer/message based on customer segment? Sometimes even minor creative modifications that resonate with a specific customer group increase response/sales significantly. Too often, I see one generic message pounded out to everyone.

  3. Suzanne — Thanks for the comment and the additional questions. A quick reply to each question:

    Q1: I find channel preference to be a tricky thing. My belief is that with many people, it’s a matter of convenience and timing. Example: After a week’s vacation or a few days of business travel, I’m deluged with unread email. Although I might “prefer” online communications, a marketing message is a lot less likely to be read.

    Q2: You make a great point, but the reverse is often true, as well: Marketers tweak the message for different segments, to no effect. Why? Because the differences between segments weren’t very pronounced.

    What I’m hoping is that more marketers will use testing, and ask questions like the ones we’ve raised, to develop better “theories of the customer” that drive more effective segmentation, and to start addressing the more strategic questions that too often go unanswered.

  4. Nice piece, Ron.

    Onliners often make the critical mistake of evaluating success based on campaigns rather than customers. If some of the folks doing e-mail would just use a control group once in a while, they would discover e-mail in fact does have very serious costs:

    1. The cost of discounting to e-commerce customers who would have bought anyway. This cost runs into the billions of dollars annually.

    2. The cost of creating unresponsive customers through list fatigue.

    3. The cost of undeliverability due to poor reputation management. There is a reason customers click the “This is SPAM” button in reaction to your e-mail, and if they do it on a regular basis, perhaps it is time to rethink the timing and content of the program?

    All this talk about being customer-centric, yet most marketing folks still measure their success with campaign metrics (response, sales), not customer metrics (lift, incremental profit).

    What’s a mother to do?

  5. You can’t PROVE that email creates unresponsive customers through list fatigue. How dare you mention that on this blog. 🙂 😦

    The campaign vs. customer metric issue KILLS me. There are people in a growing number of firms who are willing to spend hundreds of thousands of dollars to measure their firm’s Net Promoter Score, and then turn around and criticize marketing because the ROI on the last campaign was below par.

  6. Pingback: One (Customer) Number » Marketing Productivity Blog » Blog Archive

  7. Interesting entry, with some good points.

    I’d second everything Jim Novo commented on and add two more.

    The first is opportunity cost. Assuming you believe that it’s not sensible to blast everything to everyone all the time, there’s an opportunity cost in sending people something other than the best offer for them.

    Secondly, and more importantly, there are negative effects. Jim mentioned several of these, but there are even worse ones. Some contact actually reduces sales (provably!). Some contact actually drives customers away (provably!). See http://scientificmarketer.com/2007/02/triggered-comparison-shopping.html, http://scientificmarketer.com/2007/03/demand-suppression.html and http://stochasticsolutions.com/retention.html for examples.

    Finally, you say in response to Jim “You can’t PROVE that email creates unresponsive customers through list fatigue. How dare you mention that on this blog.” But you absolutely can: all you need is a control group.

    Keep it up!

  8. Nick — Thanks for your comments and the links. One note: one of the things that is hard to pick up when reading online is an inside joke. My comment to Jim about “how dare you mention that….” was just that. Jim and I (and Adelino, for that matter) are consistently amazed (and frustrated, perhaps) at how few marketers realize that, with some generally easy-to-construct tests, they can prove or disprove certain hypotheses about behavior and impact. Sorry for the inside comment. — Ron
