Marketing and Research Consulting for a Brave New World

Yes, insights are powerful but there is a downside. Focusing on “insights” as the end game might be a barrier to what we must do…embrace big data.

So if “insights” aren’t the end game, then what is? In the age of data-driven marketing, we need to find the prediction question in every study and address it.

When you are in the prediction business, you are trying to predict unknown values of importance to the enterprise. It might be a prediction about the future share of a brand, identifying from their cookies which users are most likely to be in play so you can deliver advertising selectively to the right user, at the right time, on the right screen, or modeling the most relevant content to serve up a personalized experience. Or it might be predicting most accurately who will control the Senate, as Nate Silver just did using his data-science-based approaches.

To be good at prediction, you will need to integrate as many data sources as possible to determine empirically which ones demonstrate predictive value. That is why prediction questions encourage big data approaches. No NIH (not-invented-here bias), no statistical snootiness about whether or not the data came from a random sample (as if that really exists anymore…). A focus on prediction leads you to integrate data from different sources and score the usefulness of information based on its incremental prediction value. Data science is an equal opportunity employer. If the data make sense to use AND they have predictive value, they’re hired!
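To make “incremental prediction value” concrete, here is a minimal sketch (my own illustration with made-up data, not any vendor’s tool): fit a baseline model, add the candidate data source, and score the lift in holdout R². A source earns its place only if the lift is real.

```python
import numpy as np

def holdout_r2(X, y, split):
    """Fit OLS on the first `split` rows, then score R^2 on the held-out rest."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X1[:split], y[:split], rcond=None)
    resid = y[split:] - X1[split:] @ beta
    return 1 - np.sum(resid**2) / np.sum((y[split:] - y[split:].mean())**2)

def incremental_value(X_base, x_new, y, split):
    """Lift in holdout R^2 from adding one candidate data source to a baseline."""
    base = holdout_r2(X_base, y, split)
    full = holdout_r2(np.column_stack([X_base, x_new]), y, split)
    return full - base

# Made-up data: a survey measure plus a candidate clickstream signal.
rng = np.random.default_rng(0)
survey = rng.normal(size=200)
clicks = rng.normal(size=200)
sales = 0.6 * survey + 0.5 * clicks + rng.normal(scale=0.5, size=200)

lift = incremental_value(survey.reshape(-1, 1), clicks, sales, split=100)
print(f"incremental holdout R^2 from the clickstream source: {lift:.3f}")
```

Because the lift is scored out of sample, a data source that merely adds noise would show a lift near zero and get turned away at the door.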

The prediction business is critical for marketers because it drives up marketing ROI in a repeatable way.  Consider the world of programmatic digital advertising.  For every million page view requests, algorithms are PREDICTING which one thousand should be targeted with your ad because they are most likely to respond.  Such targeting can be based on models that use surveys, clickstream patterns, social media profiles, demos, time of day, weather, etc. and has been proven to drive up marketing ROI.
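Mechanically, that targeting step is just scoring every request and taking the top slice. A toy sketch (scaled down to 100,000 requests, with random numbers standing in for the output of a real response model):

```python
import heapq
import random

random.seed(1)

# Stand-in for model output: each page-view request arrives with a
# predicted response probability from some upstream scoring model.
requests = [(random.random(), req_id) for req_id in range(100_000)]

# Target only the top 0.1% of requests -- the ones most likely to respond.
targeted = heapq.nlargest(100, requests)

print(f"targeting {len(targeted)} of {len(requests)} requests; "
      f"lowest targeted score: {targeted[-1][0]:.4f}")
```

In production the scores would come from models trained on the surveys, clickstream patterns, and other signals mentioned above, but the economics are the same: spend only where the predicted response justifies it.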

A great example of moving to prediction-based thinking comes from Nate Silver, creator of the FiveThirtyEight blog and author of the book “The Signal and the Noise.” He is widely acknowledged to be the most accurate source of election predictions, and he nailed it again last night.

Before Nate, political polling was centered on the single proprietary study. Each pollster ran their own poll, trumpeting its superiority and implying who would win the election as if no other pollsters or predictive factors existed. Nate takes an unprejudiced view: ALL polls have value and need to be weighted together, but the weights are not equal…they depend on prior track record, sample size, “house effects” leaning towards one party vs. the other, etc. Also, he doesn’t only use polls. He finds that other factors add predictive value, such as fundraising, candidate ideology vs. voter views, economic indices, job approval ratings, etc. In other words, each poll, for all its sampling purity, is INADEQUATE on its own at maximizing prediction accuracy.

However, insights ARE important to framing his model. He would not use a data stream that made no sense, regardless of statistical correlation, like which league won the World Series this year. What he does is essentially apply big data principles. He has Moneyballed political polling and is paid millions because he is the most accurate political forecaster on the planet. I think marketing research practice needs to follow Nate’s footprints in the snow and go beyond the survey.
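In code, the core move — weight each poll by its credentials and strip out its house lean before averaging — is simple. The numbers below are hypothetical and the weighting scheme is my own toy version; the real FiveThirtyEight model is far richer:

```python
# Each poll: (reported_share, sample_size, track_record, house_effect).
# house_effect > 0 means the pollster historically leans toward this candidate.
polls = [
    (0.52, 800, 0.9, +0.01),
    (0.49, 1200, 0.7, -0.02),
    (0.505, 600, 0.8, 0.00),
]

def blended_estimate(polls):
    num = den = 0.0
    for share, n, track_record, house in polls:
        adjusted = share - house            # remove the pollster's house lean
        weight = track_record * n ** 0.5    # reward accuracy history and sample size
        num += weight * adjusted
        den += weight
    return num / den

print(f"blended vote-share estimate: {blended_estimate(polls):.3f}")
```

Note how the second poll, which reads lowest on its face, ends up agreeing with the others once its two-point house lean is removed — exactly the kind of signal a single proprietary poll can never surface.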

In marketing research, we can find the prediction question by thinking about the future, about how one user differs from another in how they would respond to a marketing stimulus, or about the sales response to a marketing activity.

For example, when we make a trial forecast from a concept test for a health-oriented new product, we do so without reference to prior studies. Purchase intent results are just accepted without adjustment or enhancement. Is there really no Bayesian prior that we can extract from the hundreds of other concepts that were tested on similar health claims? Also, we drop out at launch. Using prediction approaches, we could provide guidance to algorithmic media approaches to predict and target the likely users. Couldn’t we harness other predictive factors, like a user’s frequent-shopper data patterns, or the possibility that those visiting retailer websites are more likely to try new things?
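To make that Bayesian prior concrete, here is one minimal way to do it (a sketch with made-up numbers, not a production forecasting model): encode the history of similar concept tests as a Beta prior, then shrink the new concept’s top-box purchase-intent score toward it.

```python
# Suppose past health-claim concepts averaged 30% top-box purchase intent.
# Encode that history as a Beta prior worth ~50 "prior respondents".
prior_mean, prior_n = 0.30, 50
alpha0 = prior_mean * prior_n          # prior "successes" (top-box answers)
beta0 = (1 - prior_mean) * prior_n     # prior "failures"

# New concept test: 150 respondents, 60 of them top-box.
successes, n = 60, 150

# Beta-Binomial posterior mean: a weighted blend of prior and observed rate.
posterior_mean = (alpha0 + successes) / (prior_n + n)
print(f"raw score: {successes / n:.2f}, shrunk estimate: {posterior_mean:.2f}")
```

The unusually strong 40% raw score gets pulled part of the way back toward the 30% historical norm, with the amount of shrinkage governed by how much weight we give the prior relative to the new sample.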

Another example is brand tracking.  Stop focusing on the report card and start thinking about brand health…predicting the FUTURE trajectory. To do this, certainly we need to include digital and social signals about the health and positioning of the brand. (Note: I am currently working with a leading supplier and have begun bringing this out to the marketplace.)

To become like Nate Silver, the Moneyballers, and the data scientists, Marketing Insight teams need to challenge themselves to find the prediction question in every study and commit to bringing together the data streams or conducting the experiments that are needed for prediction and then marketing action.


Comments

9 Responses to “Move beyond the insight to find the prediction question”

  1. So Nate has popularized meta-analysis. Academics have long used this method to better understand human behaviour. When 50 different scientists are working on similar projects, it makes lots of sense. In the money making world that is marketing research, meta-analysis means that many people have to run similar research studies on the same brands/categories/issues and share the results with each other.

    First of all, how many clients have that kind of money to run similar tests with a variety of researchers? And second of all, who has the time to focus on just one single research project for months or years at a time?

    I completely see the benefits of meta-analysis. It has huge value as a method for separating truth from error. I don’t trust anything more than a well done meta-analysis. But in our world, it’s just not going to happen. Speed and money are much louder.

  2. A great read, Joel. You do talk about these things as if no one in marketing research is combining survey and behavioral data – that isn’t the case. Check out this 3-minute video that summarizes how we combine behavioral data and survey data. http://www.marketingevolution.com/power

    Granted, we are probably the exception rather than the rule, but predictive analysis is happening with advertising today.

    What I found interesting about Nate’s work is the examination of the uncertainty in forecasts. His team openly admits that the predictions are NOT perfect. They are very good. It is better for our industry to understand the nature of imperfection in forecasting. Take a business that is influenced by rain. Our analytic models depend on the accuracy of the weather forecast. Therefore, we think about the risk/return ratio of taking action on something like the 10-day weather forecast, knowing that it will be wrong in a certain percentage of cases. How much better off is a marketer who uses the prediction (knowing it isn’t perfect, but better than backward-looking annual averages)?

    Once we transcend backward-looking analysis in favor of predictive views, the focus shifts to the risk/return of good, but not perfect, forecasts.

    I think you are creating an unrealistic expectation by calling Nate’s work perfect when his own website has a great discussion of the percentage of time they expect to be wrong. It is better for us to teach marketers how to be predictive, with an understanding of uncertainty. I think you are the guy to help teach the industry this more nuanced way of living in a forward-looking world — and I look forward to reading your future posts. I can predict that 99% will be awesome!

  3. Joel Rubinson

    Great comments from industry leaders! Annie, I’m a big believer in meta-analysis. The first thing I did at the ARF was a meta-analysis of TV ad effectiveness across 388 cases, and the results were quite surprising. The preso on Slideshare is also my second-highest-viewed piece of content ever. Nate does more than meta-analysis, but you are right to draw the connection.

  4. Joel Rubinson

    part two…Rex, your video is brilliant. Your offering, and what I build for my clients, is light-years ahead of the mainstream. We need to get the broad majority of marketers up to this high-water mark.

  5. Dr. Brian Monger

    Of course insights are not the end game. It is what you successfully do with that data/information.

  6. Joel, great post. I feel like someone finally gets it. I’ve been working on this problem for several years and found that there isn’t a client without strong priors. Too many people want to discount judgment because it’s not from a third party, not quantified, etcetera. What Nate and the Bayesian methods bring us is a way to think about integrating data and judgment. If it’s just data then, yes, it’s meta-analysis. This is where decision making comes into play. People are making decisions, not data. Too many people want to check the box on data analysis and then overlay their judgment. That’s confirmation bias at work. Bayesian methods help overcome this. Data and judgment are combined at the time of the analysis, not sequentially. Better decisions come from Bayesian decision analysis. Bravo on a great post!

    • Totally agree. I think of it as updating priors. In fact, I have devised approaches for clients who have 1-1 marketing capabilities where each response (or lack thereof) represents a tweak to priors about their likelihood of response to a marketing communication of a certain type.
