Achieving Consistency in Research Results

As the ORQC (the ARF's Online Research Quality Council) continues its journey toward a version 1.0 solution by September, I must reiterate that the number one issue we face in marketing research projects is data consistency.

Consistency is of critical importance in marketing research projects because we are often comparing and trending results, and there is no behavioral truth for most of what we measure. Primarily, we are measuring attitudes: the percentage who care about the environment, awareness of brand X, purchase intent toward new product idea Y, and so on. For measures like these, consistency is everything.

Yet we know from the ARF's million-dollar research-on-research study that there are ways project results can become horribly inconsistent with prior waves of results and with norms. At least three other major R&D projects have reached the same conclusion.

The ARF Foundations of Quality study fielded the exact same questionnaire across 17 different panel companies in the US, including all of the leading panel providers, and we found differences across panels that could not be explained by sampling variation. More sobering still: while we found variables that correlated with purchase intent answers, no data weighting scheme made the differences go away. We saw NO EVIDENCE that you can make results equivalent by weighting on panel longevity or demographics, or by choosing only panels that do NOT use cash incentives. These variables matter, but differences across panels still exist.
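To make the weighting point concrete, here is a minimal post-stratification sketch in Python. Everything in it is hypothetical for illustration (the column names, the age-mix targets, and the simulated "recruitment effect" are invented, not figures from the ARF study). The weights force each panel's age mix to match the same target population, yet the gap driven by an unobserved recruitment difference survives in the weighted estimates:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def simulate_panel(n, young_share, intent_shift):
    """Simulate one panel: the age mix differs across panels, and an
    unobserved recruitment effect (intent_shift) moves purchase intent."""
    age = rng.choice(["18-34", "35+"], size=n, p=[young_share, 1 - young_share])
    base = np.where(age == "18-34", 0.40, 0.25)      # age effect on intent
    intent = rng.random(n) < (base + intent_shift)   # unobserved panel effect
    return pd.DataFrame({"age_group": age, "top2_intent": intent.astype(int)})

panel_a = simulate_panel(5000, young_share=0.6, intent_shift=0.00)
panel_b = simulate_panel(5000, young_share=0.3, intent_shift=0.08)

target = {"18-34": 0.45, "35+": 0.55}  # hypothetical population age mix

def poststratify(df, target):
    """Cell weight = target share / observed share for each age cell."""
    observed = df["age_group"].value_counts(normalize=True)
    return df["age_group"].map(lambda g: target[g] / observed[g])

for name, df in [("A", panel_a), ("B", panel_b)]:
    w = poststratify(df, target)
    raw = df["top2_intent"].mean()
    weighted = np.average(df["top2_intent"], weights=w)
    print(f"panel {name}: raw={raw:.3f} weighted={weighted:.3f}")

# Weighting equalizes the age mix across panels, but the gap driven by
# the unobserved recruitment effect survives in the weighted estimates.
```

The same logic extends to any weighting variable, longevity included: weighting can only correct for differences on variables you can observe, which is exactly why no scheme we tested made the panel differences disappear.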

Furthermore, as panels are merged together (there were recently some big mergers) or as a panel changes its recruitment patterns, the "same" panel is not necessarily the same over time from a data-consistency point of view.

The ARF findings are supported by the work of others. Ron Gailey, while he was at WaMu, found huge differences across panels, but traced them back to panelist longevity. Steve Gittelman of MKTG, Inc. has found big differences across more than 100 panels, internationally as well as in the US. And in 1994, when I was at the NPD Group, I found big differences in reporting as a function of longevity.

What makes the ARF findings more challenging is that no way to post-stratify these differences away was discovered. The good news is that we are planning to tackle this head-on in our version 1.0 program for managing data quality, which is targeted for late September. Working with ARF staff, our sub-committees of industry leaders are crafting recommendations that will include templates, training materials, and clear definitions, so that buyers and sellers can work together to stabilize sample sourcing, standardize reports and monitors, and create a checklist of procedures that panel companies can examine and hold steady to maximize data consistency.


Comments

2 Responses to “Achieving Consistency in Research Results”

  1. i find the result about weighting to be so interesting. so many people assume weighting solves everything, but there are so many variables that simply can’t be accounted for that weighting can never hope to overcome them. i hope people take this finding to heart. the final answer seems to be ‘know thy panel.’

  2. Bob Harlow

    We cannot hope to attain consistency (or what we psychometricians call “reliability”) unless and until we know that (1) respondents are recruited in a similar way across panels, giving us a stable sample composition, and (2) respondents interpret the questions in a similar way. This second condition requires, of course, that respondents give survey questions their full attention, which makes online research quality extremely difficult to attain. In most cases, respondents have no reason to carefully read or pay attention to questions in online surveys, unlike phone surveys, where an actual person can hear one’s responses and the respondent (potentially) feels some pressure to be attentive and engaged. Longer online surveys, of course, exacerbate the issue.

    Unless and until we tackle the issue of respondent attentiveness, consistency will remain elusive. There is no reliable statistical “fix”, because you are just playing with random noise, not responses with any underlying meaning. Weighting is one red herring among many. We need a new model of respondent engagement.
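To see why there is no statistical fix for inattention, here is a minimal test-retest sketch in Python illustrating the commenter's point (all parameters are hypothetical, for illustration only). Attentive respondents answer from a stable underlying attitude in both waves; inattentive respondents answer at random in both waves, and their noise drags down the wave-to-wave correlation that psychometricians use as a reliability estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n, attentive_share = 2000, 0.7          # hypothetical sample and attention rate

true_attitude = rng.normal(0, 1, n)     # stable underlying attitude per respondent
attentive = rng.random(n) < attentive_share

def respond(true_attitude, attentive):
    """Attentive: true attitude plus small noise; inattentive: pure noise."""
    noise = rng.normal(0, 0.3, len(true_attitude))
    random_answers = rng.normal(0, 1, len(true_attitude))
    return np.where(attentive, true_attitude + noise, random_answers)

wave1 = respond(true_attitude, attentive)   # same question, two waves
wave2 = respond(true_attitude, attentive)

r_all = np.corrcoef(wave1, wave2)[0, 1]
r_attentive = np.corrcoef(wave1[attentive], wave2[attentive])[0, 1]
print(f"test-retest r, full sample:    {r_all:.2f}")
print(f"test-retest r, attentive only: {r_attentive:.2f}")

# The inattentive respondents' answers are uncorrelated across waves,
# so the full-sample reliability falls well below the attentive-only figure.
```

No reweighting of the full sample can raise that correlation, because the random answers contain no signal to recover; the only levers are keeping respondents engaged or identifying and removing the inattentive.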