How do online and RDD phone research compare? Latest findings…

The million dollar ARF Foundations of Quality study conducted in Oct/Nov 2008 involved over 100,000 interviews across 17 different online research panel providers, 1,000 RDD telephone interviews and 1,500 interviews conducted via mail panels.

As the ARF continues to release results, we want to share more insight here on that important question: how online and RDD phone research compare.

As can be seen in the two tables below, there is no clear pattern of RDD providing more accurate answers than the average result from internet panel research on a series of benchmarking questions and demographics. The biggest failures of RDD are in respondent age and cell-phone usage, probably because an increasing percentage of people in the US (currently about 20%) have only mobile phones and no landline.

Foundations of Quality (Oct–Nov ’08)
Raw (Un-projected) Results

Variable                             US Census Targets   Online (17 panels)   Mail    Phone
Gender
     Male                                   49%                  48%           44%     36%
     Female                                 51%                  52%           56%     64%
Education
     HS or less                             47%                  24%           27%     29%
     Some College                           27%                  43%           35%     35%
     4-year College or more                 26%                  33%           34%     36%
     No response                             –                    –             4%      –
Income
     Under $50K                             43%                  50%           35%     40%
     $50K – $99K                            33%                  33%           32%     24%
     $100K or more                          24%                  12%           18%     14%
     Prefer not to answer                    –                    5%           12%     14%
     No response                             –                    –             3%      8%
Race
     White                                  84%                  87%           81%     85%
     Non-White                              16%                  12%           17%     10%
     Prefer not to answer                    –                    1%            2%      5%
Marital Status (no Census target)
     Married                                 –                   57%           70%     58%
     Not Married                             –                   43%           28%     41%
     No response                             –                    –             2%      1%
Age (mean)                                  45                  45.7          46.6    54.1
     18–29                                  22%                  18%           12%      9%
     30–39                                  18%                  20%           20%     11%
     40–49                                  20%                  20%           20%     21%
     50–64                                  24%                  26%           26%     32%
     65+                                    16%                  16%           22%     27%

(A dash indicates the value was not asked or not reported for that column.)
Foundations of Quality (Oct–Nov ’08)
“Best Practice” Weighting

Variable                          Benchmark Source   Available Benchmark   Online (17 panels)   Mail   Phone
Own Home                          Census/AFF                 68%                  68%            76%    79%
Ever Smoked                       CDC/NHIS                   41%                  51%            43%    46%
Now Smoke                         CDC/NHIS                   21%                  26%            21%    21%
Residential Phone                 CDC/NHIS                   80%                  84%            84%   100%
Own Cell Phone                    CDC/NHIS                   79%                  89%            88%    79%
% Calls Received on Cell Phone    CDC/NHIS                   41%                  39%            39%    26%
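
For readers who want to play with the comparison the second table invites, here is a minimal sketch (my own illustration, not part of the ARF analysis) that summarizes each mode's distance from the available benchmarks as a mean absolute deviation in percentage points. The numbers are transcribed from the table above; the metric is deliberately crude and ignores, for instance, that the phone sample's 100% on residential phone ownership is true by definition.

```python
# Crude accuracy summary for the "Best Practice" weighting table above:
# mean absolute deviation (in percentage points) from the available benchmarks.
# Values are transcribed from the table; this is illustrative only.

benchmarks = {
    "Own Home": 68, "Ever Smoked": 41, "Now Smoke": 21,
    "Residential Phone": 80, "Own Cell Phone": 79, "% Calls on Cell Phone": 41,
}

estimates = {
    "Online (17 panels)": {"Own Home": 68, "Ever Smoked": 51, "Now Smoke": 26,
                           "Residential Phone": 84, "Own Cell Phone": 89,
                           "% Calls on Cell Phone": 39},
    "Mail":               {"Own Home": 76, "Ever Smoked": 43, "Now Smoke": 21,
                           "Residential Phone": 84, "Own Cell Phone": 88,
                           "% Calls on Cell Phone": 39},
    "Phone":              {"Own Home": 79, "Ever Smoked": 46, "Now Smoke": 21,
                           "Residential Phone": 100, "Own Cell Phone": 79,
                           "% Calls on Cell Phone": 26},
}

for mode, values in estimates.items():
    mad = sum(abs(values[k] - benchmarks[k]) for k in benchmarks) / len(benchmarks)
    print(f"{mode}: mean absolute deviation = {mad:.1f} points")
```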

While a research supplier can weight data (also called “post-stratification”), the demographic imbalances from RDD are problematic. The more skewed the sample, the more extreme the weights and the lower the “RIM efficiency.” This statistic matters because it tells you the effective sample size: an ending sample of 1,000 with a RIM efficiency of 40%, for example, has the variability of a sample of 400.
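
To make that arithmetic concrete, here is a minimal sketch of how weighting efficiency is commonly computed (Kish's approximation) and how it translates into effective sample size. The weights below are simulated purely for illustration; they are not from the Foundations of Quality data.

```python
import numpy as np

def weighting_efficiency(weights):
    """Kish's approximation: (sum of weights)^2 / (n * sum of squared weights)."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (len(w) * (w ** 2).sum())

# Simulated post-stratification weights for 1,000 respondents; the lognormal
# draw makes them deliberately uneven, mimicking a skewed sample.
rng = np.random.default_rng(0)
weights = rng.lognormal(mean=0.0, sigma=1.0, size=1_000)

eff = weighting_efficiency(weights)
print(f"Weighting (RIM-style) efficiency: {eff:.0%}")
print(f"Effective sample size: {eff * len(weights):.0f} of {len(weights)}")
# With weights this uneven the efficiency lands well below 100%, which is the
# "1,000 respondents behave like 400" situation described above.
```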

Clearly all modes have their challenges and require their own custom-tailored best practices to produce trustworthy information. Even in the world of political polling, RDD interviewing methods can produce very different results and might not be as accurate as well-designed online research. Professor Costas Panagopoulos from Fordham’s Political Science Department analyzed the accuracy of 23 polls of the 2008 presidential election. While nearly all of those polls used RDD telephone methods, the one poll I recognize as an online poll actually finished third in accuracy (ahead of 20 or so RDD-based polls). Also, 17 of the 23 polls OVERESTIMATED Obama’s margin of victory, indicating a clear bias from RDD polling. Interestingly, some of the best-known polls (e.g., from the original national networks and Gallup) finished at the bottom of the list.

Recently, a 2004 study managed out of Stanford University that also compared online to RDD research has resurfaced, with some re-analysis of the five-year-old dataset. Given the rapid rise of cell-phone-only individuals since then and the apparent failure to employ what would today be considered best practice for pre-stratification of online samples, that study is of limited relevance now. (See pollster.com.)

Our evidence from Foundations of Quality is that online research can produce comparable, consistent, and accurate data if proper practices are implemented. If they are not (for example, if sample source is not controlled), a project can yield data that are not useful. On Sept 29th, the ARF will share a process we have co-created with industry leadership that will allow buyers and sellers to work collaboratively to produce online research that can be used with confidence to inform marketing decision making. This meeting is open to all, including the press.


Comments

10 Responses to “How do online and RDD phone research compare? Latest findings…”

  1. Hi Joel:

    Thanks for posting – we are asked about these things all the time.

    Professor Costas Panagopoulos might be interested to see the results of our projection of the popular vote total in the 2008 election using our Online Promoter Score tool. We successfully predicted the outcome:

    http://tinyurl.com/6q7zoz

    And were very close to the popular vote total.

    Thanks for sharing – TO’B

  2. Nice article, Joel. Have you seen any comparisons between RDD, Internet panels, and CGM analysis? That would be interesting.

  3. […] of their Foundations of Quality initiative with the research community. The latest comes from Joel Rubinson’s blog, and compares results from online, RDD and mail surveys to national […]

  4. Joel, several questions:

    1. Can you provide the standard deviation for each variable for the 17 opt-in online surveys? Along the same lines, it would be useful to see the range of estimates for each marginal across the 17 surveys. (Would be interesting to see comparable variability across a similar number of RDD surveys, but understood that’s beyond the scope of the ARF project.)

    2. Was the RDD landline-only? This is implied in your commentary. I would not consider that a best practice for general population studies anymore because of non-coverage of cell-only and cell-mainly households.

    3. What else can you disclose about the RDD survey — who conducted it and in particular the within-household selection method? A skew toward females in the unweighted data is not surprising but in my experience the magnitude reported here is a bit extreme.

    3a. When ARF has completed the analysis will you be disclosing all the underlying information about the study? Full methodology with complete questionnaires (including intros), panel recruitment procedures, RDD callback protocols, response rates, etc etc etc?

    Thanks in advance.

    Mike

    • Thanks for your thoughtful comment. I can answer a few of your questions off the top of my head. A cell-phone-only sub-sample was NOT part of the project, although this was heavily discussed. In marketing research, it is still often the case that people are called on landlines. Our data suggest that the added cost of cell-phone-only sub-samples may be a needed expenditure for complete accuracy, but that might or might not fix the problem; we just don’t know yet. Secondly, I can tell you that we conducted two waves of research with each panel using identical protocols, and within-panel variance between the waves was virtually nil, as would be expected given the huge sample sizes. We DID find variance across panels, which is a focus of the quality enhancement process the ARF will introduce on Sept 29th. These findings have already been made public and I have blogged them. I’ll look into some of the other questions as I get a chance. Regards, Joel

  5. Very interesting, Joel. I’m curious, though: since only a small subset of the online population actually opts in to online panels (not to mention the number of people who belong to numerous panels), how accurately can you compare 1,000 RDD interviews that lacked an appropriate calling strategy to include cell-phone-only respondents, who now represent about 20% of the households in the United States, with an enormous oversample of a much smaller population?

    It would be like surveying everyone who lived in Rhode Island and comparing it to 100 surveys from Texas – not exactly apples-to-apples.

    Also, I didn’t see any mention of what kind of birthday-rule or other randomized respondent selection tool within the households was used, which of course would normally counter the huge discrepancy of age and gender that appeared to exist in the phone sample that was collected (although the lack of younger age ranges could partly be attributed to the lack of a cell-phone sample, as a much greater percentage of 18-39 year olds are cell-phone only). Was one used at all?

    However, more than any of this, I am extremely concerned that a non-profit industry organization is taking on the role of researcher, seemingly endorsing one method of data collection over another as more valid. What I find even more disconcerting is that you appear to be claiming validity for a mode that, while useful, is heavily criticized by the rest of the research industry for claiming to be something it isn’t. It’s taken a while for even the people who sell the online panels to finally admit more of what the panels should and shouldn’t be used for, although many of the buyers of research have already figured this out, simply by being burned numerous times (fool me once, shame on you; fool me twice . . .).

    Please understand that we as a company offer online and telephone interviewing to our clients, and believe each has its place; it’s up to the researchers and the data collectors to help educate the rest of the community as to what works best for the specific research question at hand. So when trying to help educate, as well as battle in a price war that simply can’t be won, I personally find it very upsetting to have the ARF endorse a study with such skewed sample sizes across methodologies and claim it can report anything that even remotely reflects a “finding.”

    • By the number of misconceptions you have about this research and the ARF, I’m guessing you have not been involved in this until now.

      So you know, we are not a supplier; we are an industry association that collaboratively managed a research project with 20 or so organizations involved in co-managing it. We do not “favor” one method over the other. We factually inform the industry and let the chips fall where they may; we are on the side of truth. The analysis was conducted by an independent analyst. RDD is fine and appropriate in certain cases. Our project shows, BTW, that somewhere beyond 5.5 million people are actively engaged in online research panels, with less than 20% belonging to multiple panels, so we are happy to dispel the mythology that it is a small number of people who take many surveys. On the other hand, it is questionable how many are reachable by phone: a 1999 JMR paper estimated that 3-4% of people account for 50% of phone surveys.

  6. I have to agree with much if not all of Lance’s response. Though I’ve moved away from the data collection side of our business, I’m very in tune with that area and the challenges many methodologies face. I’ve been a proponent of online research for years, but not because I thought it was superior to telephone. I believe they both have a place and a valid use, and deserve consideration based on the needs of the project.

    To Lance’s comment, my biggest concern is when a non-profit industry organization takes a stance supporting one form of interviewing over another. For a group supported by advertisers and other members, I’m not sure it’s the right direction. I supported the quality initiatives, which are designed to improve and protect the online research business by setting standards everyone should be following. I can’t, however, support policing or providing information supporting one direction over another. Keep in mind, there are a ton of “Research on Research” studies that have shown very different results than yours, and some that have shown similar ones.

    I guess I have to say I’m disappointed to see that ARF is dabbling in this type of research as a means of promotion. I wouldn’t be surprised to see some advertising support impacted. No matter what type of research a firm does, I’d expect they might have concern over how the organizations they support might be impacting their business (positive or negative, it should cause concern).

  7. dave jenkins

    Joel, would like to touch base… no longer at NPD.

    Dave

    cell – 847-903-5744

  8. Great study, thank you for the information! It is always good to have this type of demographic comparison. One thing I would caution is that not all differences are demographic.

    Over the past 5 years I have run projects that collected more than 5 million customer satisfaction surveys using web and IVR (phone) tools. Even when we controlled for differences in demographic characteristics between the two populations, we always found large and significant differences in the attitudinal questions.

    For example, the percent of people who were very satisfied could be 5 to 15 points higher for respondents who used IVR compared to web. This finding persisted across industries and time. There is just something different in the experience of taking a survey online vs. with an IVR system. With attitudinal data it is hard to say which is more valid; both seemed to be equally reliable.
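
To make concrete what “controlling for differences in demographic characteristics” usually involves in a comparison like this, here is a minimal, hypothetical sketch of direct standardization: re-weighting each mode’s cell-level results to a common demographic distribution before comparing top-box satisfaction. All numbers are made up for illustration and are not the commenter’s data.

```python
# Hypothetical % "very satisfied" by age group and survey mode.
very_satisfied = {
    "web": {"18-34": 40, "35-54": 48, "55+": 55},
    "ivr": {"18-34": 52, "35-54": 58, "55+": 63},
}

# A single standard age distribution applied to both modes, so any remaining
# gap cannot be explained by age mix.
standard_dist = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

for mode, cells in very_satisfied.items():
    standardized = sum(cells[group] * standard_dist[group] for group in standard_dist)
    print(f"{mode}: age-standardized % very satisfied = {standardized:.1f}")

# If the IVR figure still exceeds the web figure after standardization, that
# residual gap is the kind of mode effect the comment describes.
```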