The Best Thing About Social Media Marketing ROI

Although I’m skeptical that social media is having a “revolutionary” impact on marketing, I believe that it can improve marketing effectiveness and efficiency. There’s a lot of good stuff packaged up in this thing we call social media.

But I didn’t realize what the best thing about social media was until I read a recent post in the Social Media Examiner. In an article about social media return on investment, the author wrote:

“The peculiar feature of the social media return is that you can define it to be essentially anything you want it to be!”

And there you have it. The best thing about social media marketing: We get to make it up as we go along and change the rules to whatever we want them to be!

Seriously, it’s getting a little tiresome reading these cockamamie ideas from social media experts about how to measure return on social media investments.

ROI is a metric. It’s one of an infinite number of metrics that you could dream up in order to measure what’s going on in the world of social media.

Roughly speaking, there are three types of metrics: 1) Input; 2) Output; and 3) Impact. (There are some interesting discussions about this typology as it applies to climate control and naval research, but not so much to marketing).

Input metrics capture how much of something you put into an effort. It could be things like hours per week, dollars spent per customer, raw materials used per item.

Output metrics capture what you get out of that input. Units produced per week, page hits per day, etc.

Many of the metrics that some folks want us to believe capture social media ROI — like brand awareness, brand affinity, engagement, etc. — are output metrics. In and of themselves, they have no financial return.

Impact metrics are those with financial return. They capture the amount or increase in sales per some unit of measurement, or they capture the reduction in cost of doing something per some unit of measurement.

There is an infinite number of input and output metrics that you could come up with. Not so with impact metrics.

Some of the social media gurus out there need to understand that there is a return on investment chain. You put things in, you get things out, and there is an impact — or maybe not, and possibly it takes a combination of the things that come out to achieve an impact.

The only way ROI can be measured is at the END of the chain. Most of your new metrics — engagement, likes, fans, etc. — are either input or output metrics, and do NOT (I repeat, do NOT) capture ROI in any way, shape, or form. There are a number of people in socialmediaville who disagree with me on this point. They redefine ROI, or come up with catchy alternatives like Return On Influence. They’re simply being Really Obnoxious & Ignorant.

If your social media efforts improve brand awareness, and you don’t — or can’t — track how that brand awareness translates into increased sales, you haven’t measured the ROI of your social media efforts, and you can NOT claim that your social media efforts had a positive ROI. The definition of ROI is not open to interpretation or redefinition.
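The definition reduces to simple arithmetic, which is exactly why it resists reinterpretation. A minimal sketch, with hypothetical campaign figures:

```python
def roi(financial_return: float, investment: float) -> float:
    """Return on investment: net gain divided by the cost of the investment."""
    return (financial_return - investment) / investment

# Hypothetical campaign: $10,000 invested, $14,000 in attributable sales margin.
campaign_roi = roi(financial_return=14_000, investment=10_000)
print(f"{campaign_roi:.0%}")  # 40%
```

Note that only dollars appear in the formula; awareness, likes, and fans have no place in either the numerator or the denominator.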

———-

There’s another issue lurking under the covers of the “what’s the ROI of social media” question: The fallacy of trying to measure the ROI of infrastructure.

Q: What’s the return on the servers, routers, and computers your organization uses? A: Zero. In and of themselves, they produce no ROI.

Could your company achieve an ROI on many of its initiatives if it didn’t have these servers, routers, and computers? No. As a result, we consider those things to be infrastructure. And by definition, there is ZERO return on infrastructure investment. There is only an ROI on the actions you take, and the investments you make, that utilize that infrastructure.

There’s a pretty good argument to be made that social media is infrastructure: part of a marketing, or better yet, customer relationship, infrastructure that organizations need to have.

ROI doesn’t come from having a Facebook page that’s liked by a million people. ROI comes from the sales and behavioral changes that are influenced by a Facebook page that’s liked by a million people.

In other words: It’s what you do with your Facebook page that produces an ROI. The messages and actions you take on Facebook that produce an ROI would likely produce an ROI in other channels, as well. Maybe not as high an ROI, but maybe higher. You won’t know until you test it.

This is why the whole “ROI of channels” discussion is so stupid. There are multiple factors that influence the ROI of an action. The channel in which the action is taken is just one. Attributing (or blaming) the result on the channel is simply wrong, wrong, wrong.

Bottom line: Feel free to spout off silly ideas about what social media ROI is, like Social Media Examiner does. It’s sure to get you thousands of page views on your blog, and tons of tweets. But please don’t relay those concepts to the CEO and CFO (and hopefully, CMO) of your company. You’ll sound stupid. I guarantee it.

p.s. For a really good discussion on social media ROI, see this post on the {grow} blog, and this one on The Harte of Marketing.

Customer Service Is NOT The New Marketing

One of the blurbs in a recent Adweek/IQ Daily Briefing email posed the question: Is customer service the new marketing? According to the email:

Nowadays things are changing. Customers looking for products simply type them into Google. Assuming you can get customers this way, the hard work then begins: keeping them. That’s why, according to Andy Ridlinger, customer service is now serving the role of marketing. “Companies need to start treating customer service as an investment rather than an expense. The necessary ‘white glove’ level of service required to create raving fans is more expensive in the short-term, but in the long term you not only spend less supporting current customers,” he writes. “Their free word-of-mouth marketing will help you add more customers.”

My take: This is a ridiculous, terrible, and misguided idea.

If customer service is now serving the role of marketing, then who’s planning and executing campaigns? Who’s determining the allocation of marketing spend across channels and programs? Who’s figuring out which customers the firm wants as “raving fans” in the first place?

The customer service department? The department in which it isn’t unusual to see 40-60% annual turnover among personnel? The department that is increasingly outsourced to some offshore service provider?

I need to stop, I’m laughing too hard to continue.

The notion that “customer service is the new marketing” is the epitome of kumbaya marketing. “Let’s just be really really nice to customers and they’ll tell all their friends and we’ll make money hand over fist.”

There are a host of problems with the notion of the customer service department as a profit center. First and foremost is that it can incent service reps to focus on selling rather than problem resolution. Example: How many times have you called in to your credit card provider with a question or problem, only to be barraged with balance transfer offers? Second, the people who are trained to resolve problems often lack knowledge about the products, and typically aren’t trained in selling techniques in the first place.

I don’t mean to diminish the importance of great customer service. In the scheme of things, it’s critical to retaining customers — and establishing a reputation that helps win new customers. But it isn’t the same as marketing. And it is hardly a substitute for existing marketing functions.

I expect to hear crap like “customer service is the new marketing” from vendors trying to hawk their technology, and from people who have a year or two’s worth of business experience. But for Adweek to legitimize this notion is irresponsible. And I couldn’t help but groan when I saw a Web page recently for a conference called Customer Service Is the New Marketing (I won’t honor it with a link). It pained me to see somebody I respect on the speaker slate.

This raises a deeper, more fundamental question: Why would Adweek put this in its daily briefing in the first place?

The answer, I’m afraid, has to do with a lack of understanding of, and often disagreement about, what marketing is in the first place. Unlike the accounting or manufacturing functions in an organization — which are well defined, understood, and undisputed — the marketing function is often interpreted differently by different types of execs and by different types of firms.

It’s what I’ve referred to as marketing’s civil war — the culture clash between the brand-oriented marketers and the quantitative-oriented marketers. The customer service-oriented marketers are just a new faction — thankfully unarmed, underfunded, and for the most part unable to participate in this war.


June 4 Update: I’m not going to change the original text of the post, because what’s out there is out there, but I’d like to apologize to Adweek and retract the comment about it being irresponsible for including this in its email. The original email asks “Is Customer Service The New Marketing?”, not states “Customer Service Is The New Marketing.” The rest of my thoughts on the topic remain unchanged.

Bank Satisfaction: Up Or Down?

American Banker reported on JD Power’s 2008 Retail Banking Satisfaction Study, which surveyed more than 19,000 people in January of this year. According to the article:

“Rising fees and poor complaint resolution were people’s chief gripes in a retail banking customer satisfaction survey that gave the industry poorer grades than a year ago.”

Sounds reasonable.

But what the article failed to mention was that in February, American Banker reported on the latest American Customer Satisfaction Index study which found that, in a survey of 18,000 consumers, satisfaction with banks was higher than the previous year.

So, is satisfaction with banks up or down?

My take: It’s hard to believe that consumer satisfaction with banks is up. The impact of the credit crisis, rising fees, and tough economic conditions overall has been building for a while now. The Consumer Confidence Index has declined steadily for at least a year. It’s hard to believe that any consumer-focused industry would be experiencing increasing satisfaction in this environment.

Debating if overall satisfaction is up or down, though, obscures some more important questions:

1) What’s Wachovia doing right? As banks’ index dropped 3.5%, Wachovia’s score increased by about the same percentage. In the ACSI study, banks as a group scored 78. Excluding the five largest banks — of which Wachovia is one — the score was 80. Wachovia’s performance flies in the face of other firms’ declining scores, and is in sharp contrast to the other large banks which dragged the industry down.

2) What’s up with credit unions? According to the American Banker article, the JD Power study included credit unions, which “accounted for 9 points of the drop in this year’s overall score.” That’s very counterintuitive, and seems to contradict plenty of press releases from CUs themselves touting their astronomically high member satisfaction rates.

3) Is satisfaction the right thing to measure? Trust me, the last thing I want to do is give the Net Promoter Syndrome sufferers an opening here, but we’ve got to face the facts: If two large-scale studies that purport to measure “customer satisfaction” with banks can produce directionally different results, maybe there’s something wrong with the measure that’s being used.

Bottom line: While you can’t blame the firms whose satisfaction scores increased for tooting their horns, I do hope that, behind closed doors, the banks are giving a bit more scrutiny to the JD Power and ACSI findings, and doing what any good marketing analyst would do: Trying to accurately attribute the change in results — whether negative or positive — to the factors that influenced those changes, whether they be internal effects (like improved or diminished service levels) or external factors (like economic conditions).


How Do YOU Measure Customer Lifetime Value?

In the October 2007 issue of Harvard Business Review, an article titled How Valuable Is Word Of Mouth discusses the distinction between customers’ lifetime value and referral value. The article says this about CLV:

“Estimating a CLV is relatively straightforward. The value to FirmCo of all that Mary will ever buy equals the amount that her purchases will contribute to FirmCo’s operating margin minus the costs of marketing to her.”

Oh really? And what should we do with the costs of providing service to Mary during her tenure as a customer? Ignore them? Assume they’re equal across customers?

The article goes on to say:

“No one really knows how much Mary will buy from FirmCo in the future, but we can make an estimate by analyzing her past purchases over some period of time…then projecting that pattern forward using sophisticated statistical models.”

This is a fairly common practice in many firms. This approach ignores two important factors, however:

1) Life stage events. In research I did that looked at the impact of moving on consumers’ purchase habits, I found — not surprisingly — that consumers who move make a lot of purchases in and around the time of their move. But, more interestingly, many consumers change their ongoing purchase habits — sometimes spending more, sometimes spending less — in many product/service categories, depending on the reasons for the move. Most CLV calculations, even those employing “sophisticated” models, miss these events.

2) Moments of truth. A term coined by McKinsey, these customer interactions leave an indelible mark on the relationship. If positive, they can amplify the relationship — if negative, they could kill it altogether. I’m not aware of any firm that explicitly incorporates the possibility or likelihood of these “relationship disrupters” in their CLV calculations.

The HBR article makes a case for why — and how — to incorporate referral value into a CLV calculation. But marketers should also: 1) incorporate service costs; 2) account for life stage events; and 3) model for relationship disrupters.
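To make the first of those concrete, here is a minimal sketch of a discounted CLV calculation that subtracts service costs alongside marketing costs. Mary’s figures and the discount rate are hypothetical:

```python
def clv(margin_per_year: float, marketing_cost: float, service_cost: float,
        years: int, discount_rate: float) -> float:
    """Discounted customer lifetime value (per-customer, per-year inputs).

    Unlike the HBR definition, this subtracts the cost of *servicing* the
    customer, not just the cost of marketing to her.
    """
    net = margin_per_year - marketing_cost - service_cost
    return sum(net / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical "Mary": $300/yr contribution margin, $40/yr marketing cost,
# $60/yr service cost, a 5-year horizon, and a 10% discount rate.
with_service = clv(300, 40, 60, years=5, discount_rate=0.10)
without_service = clv(300, 40, 0, years=5, discount_rate=0.10)
print(round(with_service, 2), round(without_service, 2))  # 758.16 985.6
```

Ignoring service costs overstates Mary’s value by about 30% in this example. And a life-stage event or a botched moment of truth would change `margin_per_year` mid-stream, which is exactly what a flat projection misses.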


Behaviors Versus Intentions

A colleague forwarded me two emails he recently received — one from Marriott, the other from Capital One. The common thread: Both firms asked (and incented) him to refer his friends and family to those firms.

Seems common and simple enough, no? Yet, when you think about it, both firms are bucking the trend.

While there are (seemingly) throngs of Net Promoter Syndrome sufferers out there spending buckets of money to collect, analyze, and disseminate data about their customers’ intentions, along come two smart marketers trying to influence (and presumably measure) actual customer behavior.

What makes this approach so superior to the NPS methodology? It:

  1. Is simple (“How many referred us?”) — not a 0-to-10 scale
  2. Can be measured in real-time — or, at least, more often than surveys
  3. Can impact all customers — not just a sample
  4. Drives (and measures) behavior — not intentions

    and the absolute best reason…

  5. Directly impacts the bottom line — not indirectly, through correlation

Measuring NPS is a huge waste of money. Why ask customers about their likelihood to refer, when you could be asking them to refer?

And I’d like to thank my colleague, who knew I’d find this blogworthy. Unless, of course, he was hoping that I’d apply for another credit card.


Customer Engagement Is Measurable

In a recent post, Avinash claims that engagement is not a metric, and writes:

“Engagement is not a metric that anyone understands and even when used it rarely drives the action/improvement on the website….It is nearly impossible to define engagement in a standard way that can be applied across the board.”

My take: I think Avinash is taking an uncharacteristically narrow view of the term engagement, and the ability to measure it.

The biggest issue with the way the term engagement is used in the marketing community is its narrow connection to websites and the online channel. When marketers think of “customer engagement”, they should be thinking about how engaged the customer is with the company, product, or brand. The level of involvement with the website — or with a particular ad (online or offline) — is just one dimension of a customer’s engagement.

Customer engagement encompasses a number of dimensions:

  1. Product involvement. A customer who doesn’t care about the product is likely to be less committed or emotionally attached to the firm providing it.
  2. Frequency of purchase. A customer who purchases more frequently may be more engaged than other customers.
  3. Frequency of service interactions. Branding experts like to say that repeated, positive interactions lead to brand affinity. And they’re right to a certain extent, but….
  4. Types of interactions. …not all types of interactions are created equally. Checking account balances is a very different type of interaction than a request to help choose between product or service options.
  5. Online behavior. Time spent on a site might be very important. But, like types of interactions, not all web pages are created equally.
  6. Referral behavior/intention. Customers who are likely to refer a firm to friends/family might be more engaged — a customer who actually does refer the firm, even more so.
  7. Velocity. The rate of change in the indicators listed above may be a signal of engagement.

Avinash is on the right track, however, when he says that it is nearly impossible to define engagement in a standard way. I would suggest, though, that a standard definition is feasible — but that measuring it in a universally standard way is what’s impossible.

And that’s good.

Who said we need a standard way of measuring engagement? This insistence on a standard definition of, and approach to, measurement is silly. You don’t hear anyone getting all worked up about the fact that market share can be calculated any number of ways, and that the denominator in that metric is hardly consistent or easily measured.

Measuring engagement needs to be done in the context of a firm’s strategy and its own theory of the customer — that is, what behaviors the firm believes constitute an engaged customer.

Measured correctly, engagement meets one of Avinash’s golden rules — to be instantly useful. Using market research data, I measured customers’ engagement with their banks using the attributes described above.

I then segmented the respondents into four categories, based on their level of engagement, and the breadth of their relationship with their banks (based on the number of products owned). The result: A metric that is immediately useful in helping marketers address some strategic questions about their marketing and customer strategy.

[Figure: consumers segmented by engagement level and breadth of banking relationship]
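A sketch of how such a scoring and segmentation might be wired up follows; the weights, cutoffs, and segment labels are illustrative only, not the ones used in the research above:

```python
def engagement_score(purchases_per_year: int, service_interactions: int,
                     referred_someone: bool, minutes_on_site: int) -> int:
    """Combine a few of the engagement dimensions into one score (hypothetical weights)."""
    score = min(purchases_per_year, 12)        # frequency of purchase, capped
    score += min(service_interactions, 6)      # frequency of service interactions
    score += 5 if referred_someone else 0      # referral behavior
    score += min(minutes_on_site // 10, 5)     # online behavior
    return score

def segment(score: int, products_owned: int,
            score_cutoff: int = 10, breadth_cutoff: int = 3) -> str:
    """Cross engagement level with relationship breadth to get four segments."""
    engaged = score >= score_cutoff
    broad = products_owned >= breadth_cutoff
    if engaged and broad:
        return "engaged advocates"
    if engaged:
        return "engaged but narrow"
    if broad:
        return "broad but disengaged"
    return "at risk"

print(segment(engagement_score(8, 4, True, 60), products_owned=4))
# engaged advocates
```

The point is not these particular weights; it’s that any firm can operationalize its own theory of the customer this way.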

Marketers need to stop getting their knickers in a knot trying to boil engagement down to a single metric that relates to a web site or the online channel. It’s a descriptor of a customer’s attitudes, not a channel’s performance.

A metric, when used appropriately, can help execs make decisions and manage. But considering the way engagement is being defined and measured today, it’s no wonder Avinash has come to the conclusions that he has.


A Dinosaur Banking Metric

On bankstocks.com, two First Manhattan Consulting Group execs wrote:

One metric stands out as being highly correlated to growth in a bank’s shareholder value: Same-store deposit growth. Banks that consistently generate strong same-store deposit growth in their mature branches tend to generate strong growth in other relevant measures, as well.

And on gonzobanker.com, Terence Roche wrote that, according to this year’s upcoming Cornerstone Report, “the 64 accounts opened at branches were offset by the closing of 55. That’s a net growth in deposit accounts of nine per branch per month.”

My take: For many banks, same-store deposit growth is a metric of the past, and should be put out to pasture.

Mr. Roche appropriately points to factors that may be artificially depressing net accounts opened per branch, like consolidation of accounts, assignment of new account numbers, and what he calls the “privilege pay” and “CD promo” factors.

A good list, but I think he missed the most important factor: the Internet.

According to NetBanker, the 23 large FIs that it tracks have been averaging about 475,000 deposit accounts (checking, saving, high-yield) opened online per month for the last four months.

While the sizes of those banks vary, on average we’re talking about one-quarter of a million accounts opened online per bank per year. If you’re a PNC, with about 1,000 branches, that’s roughly 250 additional accounts per branch per year. And that, I bet, would have a significant impact on same-store deposit growth and Cornerstone’s report findings.
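The arithmetic behind that estimate, using the figures cited above:

```python
# Figures as cited: 475,000 online deposit accounts/month across 23 tracked FIs.
total_online_accounts_per_month = 475_000
fis = 23
per_bank_per_year = total_online_accounts_per_month / fis * 12
print(round(per_bank_per_year))            # 247826 -- about a quarter million

branches = 1_000                           # a PNC-sized branch network
per_branch_per_year = per_bank_per_year / branches
print(round(per_branch_per_year))          # 248 accounts per branch per year

# Versus Cornerstone's net 9 new accounts per branch per month opened in-branch:
print(round(per_branch_per_year / 12, 1))  # 20.7 per branch per month
```

That’s more than double the in-branch net growth figure, none of it visible to a same-store metric.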

Oh sure, there may be a few banks that attribute online applications back to a branch. But even for those that do, the metric is distorted by branches credited with results they might not have contributed to.

And conversely, it’s quite feasible that good in-branch service strengthened the relationship with customers who opened additional accounts online — which weren’t attributed back to those branches.

The bottom line: Same-store deposit growth is simply not a relevant metric anymore. The extent to which assumptions would have to be made to improve the accuracy of the metric isn’t worth the cost and effort.

I don’t doubt FMCG’s analysis that the metric is highly correlated with shareholder return and other ROI-oriented metrics. But — for the gazillionth time — correlation does not equal causation.

For all the talk in the industry about being customer-centric, here’s a great place to start: Stop measuring same-store deposit growth.


Why Do Marketers Test?

Jim Novo recently commented that few online marketers deploy testing the way it’s often done in the offline world. Jim speculates that the reasons for this include cultural issues and a lack of ideas about what meaningful tests to conduct. For me, this raised a more fundamental question:

Why do marketers test in the first place?

You could argue that the answer to that is simple — to increase the effectiveness of marketing programs. But I think there’s another side of the coin: To improve the efficiency of marketing programs.

In the “old” world of direct marketing, where direct mail costs are significant, marketers test to determine who not to mail to. But in the online world, where the incremental cost of sending out one more email is practically non-existent, suppressing marketing messages is less of an issue.

So why should online marketers bother to test? If online marketing response rates are higher than direct mail response rates, and there’s little opportunity to reduce campaign costs, then marketers will have little incentive to test.

The distinction — and balance — between effectiveness and efficiency is subtle. But marketers who recognize that their testing approaches have been more focused on efficiency than effectiveness will realize that they’re missing many opportunities to create and execute a strategic test-and-learn agenda that not only improves effectiveness but drives marketing strategy.

The notion of a test-and-learn agenda isn’t new. But many database marketers’ agendas are undeveloped, underdeveloped, or misguided. Often, testing plans are focused on short-term and tactically-oriented questions like who should be mailed to, and which messages work better than others.

There’s a bigger opportunity here, specifically, to test to help answer more strategic marketing questions like:

  • What number of touches is best for which customer segments?
  • How can a sequence of messages help lift response?
  • How does time between touches affect response and conversion rates?
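Whatever the question, reading test cells correctly still matters. A minimal sketch of comparing two hypothetical cells (say, a 3-touch versus a 5-touch sequence) with a two-proportion z-test:

```python
from math import sqrt

def cells_differ(conv_a: int, n_a: int, conv_b: int, n_b: int,
                 z_crit: float = 1.96):
    """Two-proportion z-test: do two test cells convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return abs(z) > z_crit, round(z, 2)

# Hypothetical cells: the 3-touch sequence converts 200 of 10,000 customers,
# the 5-touch sequence converts 260 of 10,000.
print(cells_differ(200, 10_000, 260, 10_000))  # (True, 2.83)
```

A 0.6-point lift sounds trivial, but at these cell sizes it’s a real difference, which is exactly the kind of answer a strategic testing agenda should deliver.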

The opportunity to be more strategic with testing isn’t limited to marketing effectiveness. In many firms, it falls on the shoulders of market research to answer questions about consumer behaviors and attitudes. But far too often, market research is burdened with addressing tactical issues. Database marketers can step in and help here, and devise tests to help understand:

  • Which customer behaviors are most closely correlated with response and conversion and help define an “engaged” customer?
  • What is the optimal spend per customer to increase customer engagement?
  • What is the impact of increased customer engagement on profit per customer?

Database marketers — or online marketers, for that matter — won’t be able to answer these questions if their testing approaches are limited to figuring out who not to touch. Or if they don’t test at all, for that matter. Marketing needs a new mindset about testing, one that breaks out of the confines of campaign-centric ROI testing and measurement.


Announcing A New NPS Metric

Sufferers from Net Promoter Syndrome share a common symptom with those afflicted with another marketing malady called Simplificosis, the tendency to believe that just because a marketing or management metric is simpler to understand or measure than other metrics, then it must be better.

I understand that companies often make things more difficult than they need to be and that simplification has its benefits.

Example: Applying for a mortgage. The complex way: Filling out a 15 page form and waiting three weeks for an answer. The simple way: Answering three questions (what’s your name, what’s the address of the home you’re looking to buy, and how much money do you make) and getting your answer immediately.

But sometimes, simplification isn’t an improvement.

Example: Getting directions from Boston to New York. The complex way: Take the Mass Pike to the Rte 84 exit, merge onto Rte 15 south towards I-91, go east on …. and so on. The simple way: Go southwest. Correct, simpler, but not exactly an improvement.

This is the trap that Net Promoter Syndrome sufferers fall into. Paul Marsden, writing on his Viral Culture blog, wrote “the simplicity of the model…has made research intelligible at the board level.”

It’s intelligible, however, not because it’s right, but because Reichheld knows how to communicate with senior management. I hate to say it, but many market researchers don’t. Citing margins of error, R-squared scores, etc. doesn’t resonate with a lot of senior execs.

But the reality of the Net Promoter Score is that it’s really not that simple. The common practice is to count customers as promoters only if they give a 9 or 10 on the 0-to-10 scale. But in comparing NPS between time periods, it’s quite possible for the net score to increase while a significant number of customers shift from 7s and 8s to 1s and 2s. Not so simple, after all.
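One hypothetical shift in the distribution makes the point:

```python
def nps(promoters: int, passives: int, detractors: int) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    total = promoters + passives + detractors
    return round(100 * (promoters - detractors) / total)

# Period 1: 50 promoters, 40 passives, 10 detractors (per 100 customers).
before = nps(50, 40, 10)   # 40
# Period 2: 10 passives slid to 1s and 2s, while 30 moved up to promoters.
after = nps(80, 0, 20)     # 60 -- the score rose even as detractors doubled
print(before, after)
```

A headline score that climbs while the detractor base doubles is not a simple story, however simple the metric looks in the boardroom.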

Bottom line: Simpler doesn’t mean better, nor does it make it “more right” than complex.

But hey, if it’s simpler you want, it’s simpler I’ll give you. Here’s a new metric for you. And since it might be too complex for some people to remember a new acronym, my new metric will keep the NPS moniker. Announcing the new NPS:

Net Purchaser Score — The net difference between the number of people who bought your product and the number of people who returned it.

I haven’t done the “research” yet, but I just know that my NPS will correlate with revenue and profitability growth. And the beauty of my metric is that it:

  1. Measures behavior (not intention)
  2. Encompasses all customers (not just a sample)
  3. Directly impacts the bottom line (not indirectly)
  4. Can be measured in real-time (or at least more often than surveys)
  5. Is simple!

As soon as I finish the “research”, I’ll publish the book (and I’ll even send you a signed copy if you leave a comment here). And I’ll expect all the Net Promoter promoters to drop their support of their metric, since something better — and simpler — will have come along.


p.s. Check out Adelino’s take on this

The ROI On Brand Versus The Value Of Brand

I had an email exchange recently with a friend who raised some excellent questions and views on the topic of branding that I thought I’d share here (with permission, but anonymously):

Our clients often ask how to measure the ROI of branding efforts. Maybe the question needs to be redefined. Instead of “what is the ROI of brand,” maybe it should be “what is the overall value of the brand?” Instead of asking, “How does brand create revenue?” the question might be, “How does brand contribute to my balance sheet?”

We intuitively know there’s value in branding, but the bean counters want some dollar-for-dollar ratio on their investment — and I don’t think it works like that.

Two firms of the same size could both spend $500,000 on branding but have radically different results. It hinges a lot on how they deliver the brand, the messages in their materials, the media they choose, the markets they serve, etc. But mostly, I’d say it hinges on how well they execute and deliver — especially in first-hand interactions.

Many firms have the misconception that when they complete a branding project that the work is done. “Whew, glad that’s done. Now let’s watch the money roll in.”

We show them a picture of a newborn baby and explain that it’s only the beginning. We tell them it’s like they have an infant that they will have to nurture with time, energy and money if they want to see it grow up to be a mature, responsible brand that makes positive contributions.”

My take: My friend is on the right track by reframing the brand ROI question.

A brand is a lot like the servers in your data center. They’re infrastructure — they’re something you build applications upon. In and of themselves, they produce no return on their investment — you have to do something with them to generate a return.

It’s the same with brand. Brand can create awareness, expectations, and even intention…but it doesn’t close the sale. Something (or things) else does that — or, at the least, contributes to that. Which means you cannot calculate the ROI of brand. With carefully designed and executed tests, perhaps you could measure the contribution to sales that branding investments make, but few (if any) firms seem willing to take that route.

Over the past 20 years, CIOs have gotten a lot smarter about how to craft and justify their IT infrastructure investments. It took a lot of work on the part of the more successful CIOs to demonstrate how IT infrastructure enables and supports current and future business capabilities.

CMOs need to take a similar approach, and treat their investments in brand as infrastructure, and demonstrate how those investments enable the sales and marketing capabilities their firms develop. The free ride (i.e., spuriously linking brand investments to changes in sales) isn’t going to last forever.


For some great insights into the topic of brand and branding, see Jim Novo’s Marketing Productivity blog.