Does your organization use a marketing mix model or some other sophisticated analytical tool to assess the overall return on your marketing investment?
If not, you may want to learn a bit more about them. They are wonderful tools for understanding which components of your tactical marketing plan have been driving sales, and which less so. The benefit is an ability to tune or optimize your tactical mix to achieve better results for every dollar spent. In fact, a recent study by the Marketing Leadership Council found that these analytical methods can often help identify 5% to 25% of total marketing spending as unproductive, and suggest ways to reallocate it to achieve much greater impact in the marketplace.
But if you're already using mix models, be careful. Based on our observations in working with several dozen Global 1000 companies, 8 out of 10 marketers using mix models today are using them in dangerously incorrect ways.
Not that the math is wrong. And most of the time the analysis itself is sound. No, the problem lies much more in the conclusions they are drawing from the model outputs.
Case in point... One multi-billion dollar global marketer whose brand we all know and love has a fairly sophisticated mix model that helps them understand exactly how much "incremental" revenue is being produced each year by their marketing and advertising tactics, and thereby calculates an "ROI" on the marketing spend.
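For readers unfamiliar with the mechanics, the ROI arithmetic such a model feeds into can be sketched in a few lines. This is a minimal, hypothetical illustration; the figures and the contribution-margin assumption are invented, not drawn from the company in question.

```python
# Minimal sketch of the ROI arithmetic downstream of a mix model.
# All figures and the margin assumption are hypothetical, for illustration only.

def marketing_roi(incremental_revenue, contribution_margin, marketing_spend):
    """Return ROI as incremental profit generated per dollar of spend."""
    incremental_profit = incremental_revenue * contribution_margin
    return (incremental_profit - marketing_spend) / marketing_spend

# Example: the model attributes $50M of revenue to marketing,
# at a 40% contribution margin, against $12M of spend.
roi = marketing_roi(50_000_000, 0.40, 12_000_000)
print(f"ROI: {roi:.2f}")  # each dollar of spend returned $0.67 of incremental profit
```

Note that every input here rests on an assumption: the "incremental" revenue comes from the model, and the margin comes from Finance. That is precisely why buy-in on those assumptions matters.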
So, what, you're probably asking, is wrong with that?
Simple. There's no consensus around the company outside the marketing department that the calculations are valid. Consequently, there's no agreement that the ad spend is truly driving "incremental" results at all, or that the ROI calculations are valid. Finance wasn't consulted when the model was constructed, nor was Sales. In their eyes, the model was conceived as a means to "justify" the marketing budget. It is, to hear Finance tell it, riddled with "ridiculous assumptions" about how variables relate to one another, and it completely overlooks critical factors such as channel contracts and long-term pricing agreements.
In short, the model, despite its technical sophistication, has little to no organizational credibility. It does nothing to build alignment on resource effectiveness or settle the endless debating that undermines productivity and forces the expenditure of disproportionate amounts of energy fighting internal battles instead of marketplace battles.
But the real crime here is that, given the chance to participate openly in making the initial assumptions required to build the model, representatives from Finance, Sales, or Operations might well have agreed that those very assumptions were reasonable. Yet because they were excluded from the exercise, they were positioned right from the start to view the results with cynicism instead of healthy skepticism.
The moral of the story is that analytics, as powerful as they are, rarely are built on perfectly objective and complete data. Assumptions are critical for bridging gaps in knowledge and certainty. And wherever you have assumptions, you put credibility center stage. Building and maintaining that credibility is an organizational challenge, not a mathematical one. The two have to work hand-in-hand, or the analytics do nothing to help the company consolidate knowledge and move forward. The debate just rages on in a different direction.
What's your experience been with analytics and organizational credibility? How do you ensure that your models will be both technically sound and embraced by key decision influencers outside of marketing?