As the ORQC continues its journey towards a version 1.0 solution by September, I must reiterate that the number one issue we face in marketing research projects is data consistency.
Consistency is critically important in marketing research projects because we are often comparing and trending results. There is no behavioral truth for most of what we measure. Primarily, in marketing research projects, we are measuring attitudes: the percentage who care about the environment, awareness of brand X, purchase intent for new product idea Y, and so on. Consistency is critical.
Yet we know from the ARF's million-dollar research-on-research study that there are ways project results can become horribly inconsistent with prior waves of results and with norms. Furthermore, at least three other major R&D projects have reached the same conclusion.
The ARF Foundations of Quality study fielded the exact same questionnaire across 17 different panel companies in the US, including all of the leading panel providers, and we found differences across panels that could not be explained by statistical variation. More sobering: while we found variables that correlated with purchase intent answers, no data weighting scheme made the differences go away. We saw NO EVIDENCE that you can make results equivalent by weighting on panel tenure, weighting on demographics, or choosing only panels that do not use cash incentives. These variables matter, but differences across panels still exist.
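For readers less familiar with the weighting approaches the study tested, post-stratification can be sketched in a few lines. This is a hypothetical single-variable illustration (the cell names and target shares are invented, and this is not the ARF's actual methodology); the study's finding was that even after this kind of reweighting, cross-panel differences remained.

```python
from collections import Counter

def post_stratify(sample, targets):
    """Return one weight per respondent so the weighted distribution of the
    stratification variable matches the target proportions."""
    counts = Counter(sample)
    n = len(sample)
    # Weight for each cell = target share / observed share in the sample.
    cell_weight = {cell: targets[cell] / (counts[cell] / n) for cell in counts}
    return [cell_weight[cell] for cell in sample]

# Hypothetical panel sample: tenure cells observed 60/40, target 50/50.
sample = ["new"] * 6 + ["veteran"] * 4
weights = post_stratify(sample, {"new": 0.5, "veteran": 0.5})

# The weighted share of "new" respondents now matches the 50% target.
weighted_new = sum(w for s, w in zip(sample, weights) if s == "new") / sum(weights)
```

Weighting like this can force one panel's demographic (or tenure) mix to match another's; the ARF result is that the attitudinal answers still differ afterward.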
Furthermore, as panels merge (there have recently been some big mergers) or change their recruitment patterns, the same panel is not necessarily the same, from a data consistency point of view, over time.
The ARF findings are supported by the work of others. Ron Gailey, while he was at WaMu, found huge differences across panels, which he traced back to longevity. Steve Gittelman of MKTG, Inc. has found big differences across more than 100 panels internationally, as well as in the US. I found big differences in reporting in 1994 as a function of longevity when I was at the NPD Group.
The difference is that the ARF findings are more challenging: we did not discover a way to post-stratify the data that makes these differences go away. The good news is that we plan to tackle this head-on in our version 1.0 program for managing data quality, targeted for late September. Working with ARF staff, our working sub-committees of industry leaders are crafting recommendations that will include templates, training materials, and clear definitions, so that buyers and sellers can work together to stabilize sample sourcing, establish reports and monitors, and create a checklist of procedures that panel companies can closely examine and stabilize to maximize data consistency.