Marketing and Research Consulting for a Brave New World

A funny thing happened on the way to the eulogy; the patient got better!

Kim Dedeker, who once publicly warned that our industry “will be on life support by 2012,” said last Wednesday at ESOMAR in Chicago, “Now we can ask, ‘how high is up?’”

What changed?  The industry stepped up to the plate and showed the fruits of effective, forceful leadership.  I presented the ARF’s progress: industry leaders have created a Quality Enhancement Process (QeP) that is modeled on the collaboration and transparency of “Category Management” and rooted in fact-based insights from our Foundations of Quality (FoQ) research.  Eight leading buyers (Bayer, Capital One, Coke, General Mills, General Motors, Kraft, Microsoft, and Unilever) are beginning their pilot tests of QeP, some of which will test how the solutions can be used in combination with one another.  For example, one pilot test plans to integrate QeP, MKTG, Inc.’s “grand mean” approach, and TruSample 3.0.  Updates on ISO and ESOMAR efforts were also presented as part of the same panel.

About 100 people attended the ARF Online Research Quality Council (ORQC) meeting in Chicago the next day, either in person or virtually.  We gave a progress report on setting up the pilot tests and a more detailed look at the templates.  QeP is starting to show up in RFPs; while that is premature until the pilot testing is completed (January 2010), it is a marker of the traction QeP is getting.  Steve Coffey, NPD’s Chief Research Officer and co-chair of the ORQC, presented a strawman for a potential new working-committee structure that would better serve the next stage of the industry’s progression:

  • QeP program committee: synthesize learnings from the pilot testing and continue to evolve the QeP templates.
  • Research-on-Research committee: continue to mine the enormous amount of information inside the FoQ data and issue updated analyses.
  • Methodology advisory board: ensure objective, inclusive scientific thinking and orchestrate a dialogue.
    • Additionally, the Research-on-Research committee and the methodology advisory board will work together to frame another round of research-on-research around the remaining critical knowledge gaps.

To address the question of whether there is a science to online panels, the latest ARF ORQC meeting also included a panel of four leading research scientists, moderated by Bob Lederer.  The panelists were:

  • George Terhanian – President of Global Solutions, Harris Interactive, Inc.
  • Charles DiSogra – Chief Statistician, Knowledge Networks
  • Steve Gittelman – President, MKTG, Inc.
  • Doug Rivers – CEO, YouGov America

Some key points:

  • Scientists and buyers unanimously agreed that, with proper procedures, quota samples from double opt-in online research panels can produce reliable and consistent data (consistency being MKTG, Inc.’s main focus), which makes them a valid choice for tracking research and concept testing.  (Defining such “proper procedures” is what QeP is all about.)
    • This is important because there is no other mode that offers the scalability the industry requires.
    • One leading buyer said that debating the legitimacy of online research and quota samples was “revolting” and that we need to move on.  He pointed out that quota sampling has a long history of being fit for purpose in marketing research in the US and in many other parts of the world.
  • There was less agreement on accuracy.  DiSogra claimed that online access panels that aren’t randomly recruited exhibit unknown biases, so while they can be fine for tracking research, their accuracy cannot be trusted.  Doug Rivers (an architect of the 2004 Stanford study) and George Terhanian each showed evidence from the 2004 Stanford research that RDD and online access panels differed in accuracy by only a few percentage points.  Doug also pointed out that mean squared error is a function of sample variance as well as bias, so in practice, given the extremely high relative cost per interview of RDD, properly executed access-panel research might produce less total error because larger sample sizes become affordable (a rough numeric sketch follows this list).
    • I think one of George Terhanian’s slides summed this up nicely:
      • A key to making population-wide inferences through survey research lies in reducing or eliminating any bias between the sample, no matter how it is generated, and the target population.
      • For companies that depend on online panels built by means other than probability sampling, this can be hard work.
      • For companies that depend on probability sampling, this can be hard work as well, as the achieved sample may bear only a faint resemblance to the target population due to coverage error, nonresponse, and the various other factors that have contributed to the decline of telephone and F2F research and the ascent of online research.
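
To make Doug’s mean-squared-error point concrete, here is a minimal sketch in Python.  Every number in it (the cost ratio, the sample sizes, and the two-point bias) is a hypothetical assumption for illustration only, not a figure from the Stanford study or the FoQ research; the point is simply that MSE = bias² + sampling variance, so a larger, affordable panel sample with a modest bias can end up with less total error than a smaller, unbiased RDD sample.

    # A hypothetical illustration of the mean-squared-error tradeoff.
    # All numbers below (sample sizes, cost ratio, the 2-point bias) are
    # illustrative assumptions, not results from any study cited above.
    import math

    def mse_of_proportion(p_true, n, bias):
        """MSE of an estimated proportion: bias^2 + sampling variance p(1-p)/n."""
        variance = p_true * (1 - p_true) / n
        return bias ** 2 + variance

    P_TRUE = 0.50  # hypothetical population proportion being estimated

    # Assume the same field budget; if RDD costs roughly 10x more per interview,
    # it buys about a tenth as many completes as the online access panel.
    rdd_mse   = mse_of_proportion(P_TRUE, n=400,  bias=0.00)  # assume no selection bias
    panel_mse = mse_of_proportion(P_TRUE, n=4000, bias=0.02)  # assume a 2-point bias

    print("RDD   root-MSE: %.3f" % math.sqrt(rdd_mse))    # ~0.025, i.e. 2.5 points
    print("Panel root-MSE: %.3f" % math.sqrt(panel_mse))  # ~0.022, i.e. 2.2 points

Flip the assumed panel bias to four points and the comparison reverses, which is exactly why the size of that bias, and the procedures that control it, is the crux of the accuracy debate.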

The ARF continues to be a place where all viewpoints and learning can converge.  Regarding Terhanian’s second point about this being hard work … hopefully the QeP can do a lot of the heavy lifting, and we should all know for sure by January 2010.
