The million-dollar ARF Foundations of Quality study, conducted in October–November 2008, involved over 100,000 interviews across 17 online research panel providers, 1,000 RDD telephone interviews, and 1,500 interviews conducted via mail panels.
As the ARF continues to release results, we want to share more insight on that important question here.
As the two tables below show, there is no clear pattern of RDD producing more accurate answers than the average result from internet panel research across a series of benchmark questions and demographics. RDD's biggest failures are in respondent age and cell phone usage, probably because a growing share of people in the US (currently about 20%) have only mobile phones and no landline.
**Foundations of Quality (Oct–Nov '08)**

| Variable | US Census Targets | Online (17 panels) | Mail | Phone |
|---|---|---|---|---|
| HS or less | 47% | 24% | 27% | 29% |
| 4-year College + | 26% | 33% | 34% | 36% |
| $50K–$99K | 33% | 33% | 32% | 24% |
| Prefer not to answer | | 5% | 12% | 14% |
| Prefer not to answer | | 1% | 2% | 5% |
**Foundations of Quality (Oct–Nov '08), "Best Practice" Weighting**

| Variable | Available Benchmark | Online (17 panels) | Mail | Phone |
|---|---|---|---|---|
| Own Cell Phone | 79% | 89% | 88% | 79% |
| % Calls Received on Cell Phone | 41% | 39% | 39% | 26% |
While a research supplier can weight data (a practice also called "post-stratification"), the demographic imbalances from RDD are problematic: the more skewed the sample, the more extreme the weights, and the lower the "RIM efficiency." This is an important statistic because it tells you the effective sample size. For example, an ending sample of 1,000 with a RIM efficiency of 40% has only the variability of a sample of 400.
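To make that arithmetic concrete, here is a minimal sketch of the effective-sample-size calculation using Kish's approximation (effective n = (sum of weights)² / (sum of squared weights)). The weights below are hypothetical, invented to illustrate a skewed sample; they are not taken from the ARF study.

```python
def weighting_efficiency(weights):
    """Kish's approximation of weighting efficiency:
    effective sample size divided by actual sample size, where
    effective_n = (sum of weights)^2 / (sum of squared weights)."""
    total = sum(weights)
    total_sq = sum(w * w for w in weights)
    effective_n = total * total / total_sq
    return effective_n / len(weights)

# Hypothetical skewed sample of 1,000: a scarce demographic group
# (100 respondents) must be weighted up heavily, everyone else down.
weights = [0.6] * 900 + [4.6] * 100  # mean weight = 1.0

eff = weighting_efficiency(weights)
print(f"RIM efficiency:        {eff:.0%}")         # ~41%, near the 40% example above
print(f"Effective sample size: {eff * 1000:.0f}")  # ~410 of 1,000 interviews
```

In practice the weights themselves would come from a raking (RIM) procedure against census targets; the efficiency statistic is computed the same way regardless of how the weights were derived.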
Clearly, all modes have their challenges and require their own custom-tailored best practices to produce trustworthy information. Even in the world of political polling, RDD interviewing can produce very different results and might not be as accurate as well-designed online research. Professor Costas Panagopoulos of Fordham's Political Science Department analyzed the accuracy of 23 polls on the 2008 presidential election. While nearly all of them used RDD telephone methods, the one poll I recognize as an online poll finished third in accuracy, ahead of some 20 RDD-based polls. Moreover, 17 of the 23 polls OVERESTIMATED Obama's margin of victory, suggesting a systematic bias in RDD polling. Interestingly, some of the best-known polls (e.g., from the original national networks and Gallup) finished at the bottom of the list.
Recently, a 2004 study comparing online to RDD research, managed out of Stanford University, resurfaced along with some re-analysis of the five-year-old dataset. Given the rapid rise of cell-phone-only individuals since then, and the study's apparent failure to employ what would today be considered best practice in pre-stratification of online samples, it is of limited relevance today. (See pollster.com.)
Our evidence from Foundations of Quality is that online research can produce comparable, consistent, and accurate data if proper practices are implemented. If they are not (for example, if we do not control for sample source), a project can yield data that are not useful. On Sept 29th, the ARF will share a process we have co-created with industry leadership that will allow buyer and seller to work collaboratively to produce online research that can be used with confidence to inform marketing decision making. This meeting is open to all, including the press.