Our shared future regarding online data quality

On Tuesday, the ARF Online Research Quality Council presented detailed findings from an unprecedented US R&D project regarding online data quality, called “Foundations of Quality” (FoQ). Beyond representing about $1MM in research, the project was remarkable in that 17 leading online panel companies cooperated with each other and with large buyers of marketing research in a collaborative and transparent way. We studied 700,000 panelists and over 100,000 completed survey responses from 17 panels in a rigorously designed research project.

Competitors and trading partners were drawn together at the ARF by a sense of urgency to reestablish the trustworthiness of online research, and by our shared future.

Let me discuss findings against four of the big issues that were on the table.

Some high-profile examples have been reported of study results not replicating. Why is that?

FoQ data prove that, for each of the 17 panels, results replicate within panel (within the limits of sampling variation) but do not necessarily replicate across panels.  This means that buyers will need to be cautious about switching suppliers when data comparability to other study results is a main consideration.  Furthermore, because suppliers often draw on sample sources beyond their own panels, buyers must engage in conversations with suppliers about any change in sample source.
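
To make “within the limits of sampling variation” concrete, here is a rough sketch, with made-up counts, of the kind of check involved: comparing top-box purchase-intent proportions from two independent samples with a two-proportion z-test. The numbers and the function are illustrative only, not the FoQ methodology.

    # Illustrative check: do two samples differ beyond sampling variation?
    # Compares top-box purchase-intent proportions with a two-proportion z-test.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(success_a, n_a, success_b, n_b):
        """Return (z, two-sided p-value) for the difference of two proportions."""
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)          # pooled proportion
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # pooled standard error
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Made-up counts: two waves drawn from the same panel
    z, p = two_proportion_z(success_a=240, n_a=1000, success_b=252, n_b=1000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 here, i.e. within sampling variation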

We found two reasons worth noting for the lack of comparability, though neither is powerful enough to fully explain the differences across panels:

  • Panelist longevity: newer recruits are more likely to give positive purchase intent towards concepts.  The longevity profile differed dramatically across panels.
  • Reward motivation: those motivated by monetary rewards to take a survey appear to be more positive toward concepts, and some panels stress such rewards while others do not.

Some feared that there is a small group of “professional respondents” on everyone’s panel, doing it for the money and gaming the system rather than providing thoughtful answers.

FoQ proves that this is not true. Overlap across panels is smaller than many thought: over 80% of e-mail addresses appear on only one panel, and the collective pool of unique e-mails in the US is estimated at something over 5.5 million. For historical comparison, this is probably 2-4 times greater than the pool of mail panelists that NFO, HTI, and Market Facts collectively had in the 80s-90s. Also, most people are motivated to join online research panels by a desire to share their opinions rather than by the incentives. Perhaps the most telltale finding is that those who take MORE surveys per month (up to 10) provide MORE thoughtful answers (i.e., they are less likely to straightline their answers or fail trap questions). In other words, if anything, being “professional” is a GOOD thing as it relates to respondent engagement with the activity of survey-taking.
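
For the curious, here is a rough sketch, with made-up panel lists, of how an overlap figure like “over 80% of e-mail addresses appear on only one panel” can be computed from hashed e-mail lists so that raw addresses never need to be shared. This is an illustration, not the FoQ matching procedure.

    # Illustrative sketch: estimate cross-panel overlap from hashed e-mail lists.
    import hashlib
    from collections import Counter

    def hash_email(email: str) -> str:
        """Hash a normalized e-mail so panels can compare lists without sharing raw addresses."""
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    # Made-up member lists from three panels (in practice, millions of rows per panel)
    panels = {
        "panel_a": ["ann@example.com", "bob@example.com", "cat@example.com"],
        "panel_b": ["bob@example.com", "dee@example.com"],
        "panel_c": ["eve@example.com", "ann@example.com", "fay@example.com"],
    }

    # Count how many panels each hashed address appears on
    membership = Counter()
    for members in panels.values():
        for hashed in set(map(hash_email, members)):  # de-dupe within a panel first
            membership[hashed] += 1

    single_panel = sum(1 for count in membership.values() if count == 1)
    print(f"unique addresses: {len(membership)}")
    print(f"on only one panel: {single_panel / len(membership):.0%}")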

In fact, as it turns out, the biggest cause of not providing thoughtful answers is long surveys!
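
For reference, here is a rough sketch of the two engagement flags mentioned above, straightlining and trap-question failure, run against made-up response data. The thresholds and wording are illustrative assumptions, not FoQ definitions.

    # Illustrative engagement flags: straightlining on a grid and failing a trap question.

    def is_straightliner(grid_answers, min_items=5):
        """Flag a respondent who gave the identical answer to every item in a long grid."""
        return len(grid_answers) >= min_items and len(set(grid_answers)) == 1

    def failed_trap(answer, required="strongly agree"):
        """Trap question: the respondent was instructed to pick a specific answer."""
        return answer.strip().lower() != required

    # Made-up respondents: a 7-item agreement grid plus one trap question
    respondents = [
        {"id": "r1", "grid": [5, 5, 5, 5, 5, 5, 5], "trap": "somewhat agree"},
        {"id": "r2", "grid": [4, 2, 5, 3, 4, 1, 2], "trap": "Strongly agree"},
    ]

    for r in respondents:
        flags = []
        if is_straightliner(r["grid"]):
            flags.append("straightline")
        if failed_trap(r["trap"]):
            flags.append("trap fail")
        print(r["id"], flags or ["ok"])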

Are people taking the same survey more than once?

On studies where qualification criteria are not very restrictive, “duplication” is mostly a single-digit issue (it depends on the panel or pair of panels), but when the qualification criteria are more specialized, duplication can be more prevalent and damaging. The industry must adopt practices that will address this.
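
As an illustration of what study-level practices might look like, here is a rough sketch, with made-up records, that flags duplicate completes by a hashed respondent identifier and reports a duplication rate. Real de-duplication typically also relies on techniques such as digital fingerprinting; nothing here is the FoQ or any vendor’s actual procedure.

    # Illustrative sketch: flag duplicate completes within a single study.
    from collections import defaultdict

    completes = [  # made-up completes pulled from two sample sources
        {"resp_id": "h1", "source": "panel_a"},
        {"resp_id": "h2", "source": "panel_a"},
        {"resp_id": "h1", "source": "panel_b"},  # same person arriving via a second source
        {"resp_id": "h3", "source": "panel_b"},
    ]

    seen = defaultdict(int)
    kept, flagged = [], []
    for record in completes:
        seen[record["resp_id"]] += 1
        (kept if seen[record["resp_id"]] == 1 else flagged).append(record)

    print(f"kept {len(kept)} completes, flagged {len(flagged)} duplicates")
    print(f"duplication rate: {len(flagged) / len(completes):.0%}")  # 25% in this toy example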

Are things going to get better in the near future?

Yes! We all feel the sense of urgency, so the ARF and the industry leaders who have contributed their time and expertise will continue the work; we are committing to a 90-day plan. With the cooperation of other industry associations, we will provide a program with recommendations regarding metrics and business practices, templates, definitions, and training that will enable buyers and sellers to work together to bring the industry to a better place in terms of data quality, comparability, and the trustworthiness of online research results.


Comments

10 Responses to “Our shared future regarding online data quality”

  1. Great news! I’ve been trying to convince people for a long time that 1) long surveys lead to poor quality data and 2) heavy responders are not the issue. Glad to see some validation from a trusted source! (I can just hear all the dittos from other researchers!)

  2. Joel Rubinson

    Would love to hear from as many as possible. Also, I want to tell you how amazing Efrain has been to work with!

  3. I just returned from the MRA conference in Chicago, and there’s definitely a groundswell of opinion recognising that it’s often our own practices that lead to bad survey data: long surveys, poor design, grids, disqualifications, etc., rather than a relatively small group of gamers in the system. I think this research really helps in evidencing this.

    The hard part will be the researcher / client education – there’s always another panel out there willing to deploy a bad survey, so it’s hard for panel companies to turn away business. Having more research like this helps, as it can be used to persuade clients to follow best practice – it’s in their interests to get good data.

  4. Great post. A long time has passed since Kish, Deming, Cochran, etc. provided classic texts on random sampling theory. It seems that great progress has been made on internal reliability within panels, and that work is ongoing (possibly methods like pattern recognition can be used for tests of validity). Great job.

    On the topic of internal v. external validity, you mention that “FoQ data prove that, for each of the 17 panels, results replicate within panel (within the limits of sampling variation) but do not necessarily replicate across panels.” I am wondering, from a marketer’s point of view, do you think it would be a good idea to use many panels and look for a “convergence” in the results; more like combined panel results becoming a latent class given a combination of all the various practices used (incentives, representation, churn, etc.)? Kind of makes me think of prediction markets in some sense… and the “wisdom of the crowds”.

  5. Joel Rubinson

    I think there are three broad strategies that are each worth considering. One is to use a single panel source, include longevity in the sample pull and post-stratification, have that documented, and insist that all studies use that panel and those weighting variables. The second strategy is to use a broad blend of panels and insist that the mixture is identical from study to study. Again, sample selection and weighting variables must be fixed. The third strategy is TBD, as we are continuing to analyze the data to find factors that might make some panels more similar to each other based on knowing certain characteristics relating to aspects of panel mgmt.
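
    For illustration only, here is a rough sketch, with made-up numbers, of the kind of cell-based post-stratification weighting the first strategy implies, using panelist longevity as the weighting variable. The target shares are assumptions, not FoQ figures.

        # Illustrative cell-based post-stratification weights, with panel
        # longevity (tenure) as the weighting variable.
        from collections import Counter

        # Made-up completes, each tagged with a tenure cell
        sample = ["new", "new", "new", "tenured", "tenured", "new", "tenured", "new"]

        # Assumed target distribution (e.g., the panel's full longevity profile)
        target = {"new": 0.40, "tenured": 0.60}

        counts = Counter(sample)
        n = len(sample)
        weights = {cell: target[cell] / (counts[cell] / n) for cell in target}

        for cell, weight in weights.items():
            print(f"{cell}: sample share {counts[cell] / n:.0%}, weight {weight:.2f}")
        # After weighting, the tenure mix matches the target, so concept scores
        # are more comparable from study to study.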

  6. […] 17Jun09 If you’ve ever had to address a client’s concerns about the quality of online panels, then stop reading this post and click over to the Advertising Research Foundation and Joel Rubinson’s blog. […]