Marketing and Research Consulting for a Brave New World

Marketers are getting a hodgepodge of signals from various partners about what works, and these signals use different models and testing approaches. Some methods operate in the aggregate (MMM), some at a fuzzy cohort level (walled gardens), and some at an ID level (MTA). Some signals come from behavior and some from surveys. Some providers, like Google, even let you choose your own attribution model: last click, first click, algorithmic, or data-driven.

This messiness sounds like this…

  • “When we added up the individual pieces, we got 160% of what we knew the overall campaign effectiveness to be!”
    • CMO, large financial institution
  • “80% of marketers report that they receive conflicting results from different models”
    • MMA survey of marketers
  • “We got a surprising result from MTA on the value of search relative to our A/B tests. No one believed the findings”
    • Head of analytics, large marketer
  • (And a call for more measures) “We need a research system that will quell the panic when we don’t see a pop in sales in the first two weeks of a campaign.”
    • Marketing VP, Telecom

What is needed is a new kind of approach that is built for integration, not another model that estimates parameters from digital exhaust or spreadsheet data.

What will an integration model require?

  1. The marketer must take control.  When effectiveness readings come from diverse sources, the buck stops with you.
  2. You must estimate the lift of the campaign in total.  This is the boundary condition that keeps the estimates of individual ad tactics from over-counting effectiveness.
  3. Measures need to be harmonized. Whether you are getting a read on effectiveness across a broad range of tactics from one provider (e.g. DISQO with its fully permissioned platform that can see inside of Facebook, Amazon, etc.) or using modeled integration of diverse signals, this is a must to achieve a cohesive view of what works.
  4. Differentiate how you interpret A/B tests that are “per protocol” vs. “intent to treat”. Intent to treat more properly accounts for the spending weight of a treatment, while per protocol reads as if you achieved 100% reach with that treatment.  However, per protocol can generate a better “apples to apples” comparison across tactics, one that is not biased in favor of the tactic that happened to get more media weight (assuming the control cell is properly matched).
  5. A functional form bounded between 0 and 1. If you are estimating a percent conversion, ordinary linear regression doesn’t work: it can predict conversion rates below 0% or above 100%. A properly bounded functional form (such as the logistic) helps with the problem of individual estimates adding up to more than the campaign lift, and treating the dependent variable as a percentage offers significant mathematical advantages.
  6. (Optional but recommended) Use Bayesian and “Nate Silver” methods for smoothing estimates. If you get estimates of a tactic’s effectiveness from different sources, usually one is somewhat more trusted than the other.  Weight these estimates the way Nate Silver differentially weights the results of different polls. Use hierarchical Bayesian shrinkage methods when you have a strong accumulation of prior evidence that current results might conflict with.  Baseball Moneyballers use Bayesian methods to forecast end-of-year results rather than extrapolating from April performance.
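To make point 2 concrete, here is a minimal sketch of using the total campaign lift as a boundary condition: per-tactic lift estimates that over-count (like the “160%” anecdote above) are proportionally rescaled so they sum to the independently measured total. All numbers and tactic names here are illustrative assumptions, not real results.

```python
# Hypothetical sketch: constrain per-tactic lift estimates to a known
# total campaign lift. Inputs are illustrative, not real measurements.

def constrain_to_total(tactic_lifts, total_lift):
    """Proportionally rescale per-tactic lifts so they sum to total_lift."""
    raw_sum = sum(tactic_lifts.values())
    return {t: lift * total_lift / raw_sum for t, lift in tactic_lifts.items()}

# Individual signals over-count: these sum to 8.0 points of lift...
raw = {"search": 3.0, "social": 2.5, "display": 1.5, "ctv": 1.0}
# ...but the holdout-based total campaign lift was only 5.0 points.
constrained = constrain_to_total(raw, total_lift=5.0)
```

Proportional rescaling is the simplest possible reconciliation; a real integration model would impose the constraint inside the estimation rather than patching results after the fact.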
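Point 4 can be illustrated with simple arithmetic. Under the simplifying assumptions that unexposed people are unaffected and that exposure is independent of conversion propensity, an intent-to-treat lift is roughly the per-protocol lift diluted by reach; the numbers below are hypothetical.

```python
# Hypothetical sketch: relating per-protocol and intent-to-treat reads.
# Assumes unexposed people are unaffected and exposure is as-if random.

def itt_lift(per_protocol_lift, reach):
    """ITT lift dilutes the per-protocol effect by the share actually reached."""
    return per_protocol_lift * reach

# A tactic lifts conversion 10 points among the exposed (per protocol)
# but only reached 40% of the target group, so the ITT read is ~4 points.
diluted = itt_lift(10.0, 0.40)
```

This is why a tactic with heavy media weight can look stronger on an ITT basis even when its per-protocol effect per exposed person is the same as a lighter tactic’s.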
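A minimal sketch of point 5, assuming a logistic form with additive tactic effects on the log-odds scale (the baseline and coefficients below are illustrative, not fitted values): the predicted conversion rate stays between 0 and 1 no matter how many tactic effects stack up, which a linear form cannot guarantee.

```python
import math

# Hypothetical sketch: a logistic (0-1 bounded) response form for percent
# conversion. Baseline and effect sizes are illustrative assumptions.

def logistic(x):
    """Map any real-valued score to a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def conversion_rate(baseline_logit, tactic_effects):
    """Conversion probability with additive effects on the log-odds scale."""
    return logistic(baseline_logit + sum(tactic_effects))

base = -3.0                 # roughly a 4.7% baseline conversion rate
effects = [0.4, 0.3, 0.2]   # per-tactic lifts in log-odds units
p = conversion_rate(base, effects)
assert 0.0 < p < 1.0        # bounded regardless of how many effects stack
```

Because effects are additive on the log-odds scale, each extra tactic adds a diminishing number of percentage points, which is part of what keeps individual contributions from summing past the campaign total.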
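Point 6 might look like the following sketch: an inverse-variance weighted pool of two reads on the same tactic (the tighter, more trusted source gets more weight, in the spirit of poll weighting), plus a normal-normal shrinkage step that pulls an outlying current result toward a prior built from accumulated past evidence. All estimates and standard errors are made up for illustration.

```python
# Hypothetical sketch: precision-weighted pooling and Bayesian shrinkage.
# All estimates and standard errors below are illustrative assumptions.

def pool(estimates):
    """Inverse-variance weighted average of (estimate, std_error) pairs."""
    weights = [1.0 / se**2 for _, se in estimates]
    total = sum(weights)
    return sum(w * est for w, (est, _) in zip(weights, estimates)) / total

def shrink(estimate, se, prior_mean, prior_se):
    """Normal-normal shrinkage: pull a noisy estimate toward a strong prior."""
    w_data = 1.0 / se**2
    w_prior = 1.0 / prior_se**2
    return (w_data * estimate + w_prior * prior_mean) / (w_data + w_prior)

# An A/B test (trusted, tight SE) vs. an MTA read (looser SE) on one tactic:
pooled = pool([(4.0, 0.5), (7.0, 2.0)])
# An April-style outlier pulled toward the historical prior:
stabilized = shrink(estimate=9.0, se=3.0, prior_mean=4.0, prior_se=1.0)
```

Note how the pooled estimate lands much closer to the tighter A/B read, and the shrunken estimate lands much closer to the prior than to the noisy current result.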

Marketers will continue to receive disparate signals like:

  • MMM informing the big buckets such as linear TV, promotion and pricing, digital
  • MTA estimating effectiveness for specific display and video tactics by desktop vs. mobile, across open web programmatic, some social, and CTV (and maybe linear TV if you use smart TV data)
  • Separate A/B tests telling us about Facebook, YouTube, Amazon, and other platforms
  • Click-through reports telling us about search and display ads driving traffic to brand.com (but visits to our site account for only a portion of sales)
  • Survey based measures of brand lift

I think signals will only proliferate as media and AdTech companies are increasingly pushed towards proving that advertising with them delivers results. If I’m right, the path forward will require more inclusive, fully permissioned user level data like DISQO offers and a new kind of model built for integration.

(Please contact me at joel@rubinsonpartners.com if you are interested in this challenge or want to talk about what an integration model might look like.)
