Survey Research Today and Tomorrow

The U.S. Census Bureau conducts many surveys that involve probability samples of households and businesses, using frames such as the Master Address File and the Business Register. The theoretical and practical bases for this approach to survey data collection were largely pioneered at the Census Bureau, thanks in large part to the work of Morris Hansen and his colleagues. With the difficulty of recruiting respondents and the rapidly rising costs of data collection, this approach now faces increasing challenges: lower response rates threaten to bias our estimates, and rising costs threaten our ability to conduct surveys at all.

In the broader community of survey and public opinion research, these same challenges are even more deeply felt. While the response rates to federal surveys are declining, they are plummeting in nongovernmental surveys, which have been conducted mainly by telephone for some decades. It is not unusual now to see response rates for nongovernmental random digit dial (RDD) telephone surveys in the single digits. The rapid growth of wireless telephone service has added markedly to the cost of telephone studies. It is now harder to identify numbers that connect to households and notably more complicated and expensive to complete interviews over cell phones.

Such factors have led to significant growth of online surveys in the commercial and academic research sectors. Using monetary or other incentives, organizations recruit people to fill out questionnaires online. Individuals may commit to complete a single questionnaire, or they may join a panel of respondents who agree to complete surveys periodically over a few months or years. Surveys completed in this way are inexpensive and fast. However, for many online surveys, respondents are not sampled using a probability approach but instead volunteer to participate. This raises two questions: Are these volunteers representative of the larger population in the way that people sampled for probability surveys are? If not, can the data they provide be adjusted to create unbiased estimates for the population?
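One common form such adjustment can take is post-stratification weighting: respondents in each demographic cell are weighted so that the sample's cell shares match known population shares. The sketch below is purely illustrative and is not any agency's production method; the age groups, population shares, and responses are invented for demonstration.

```python
from collections import Counter

# Assumed population shares for an age-group variable (invented for illustration)
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical volunteer respondents: (age group, yes/no response coded 1/0).
# Younger volunteers are deliberately overrepresented here.
sample = [
    ("18-34", 1), ("18-34", 1), ("18-34", 0), ("18-34", 1), ("18-34", 1),
    ("35-54", 0), ("35-54", 1), ("35-54", 0),
    ("55+", 0), ("55+", 0),
]

n = len(sample)
counts = Counter(group for group, _ in sample)
sample_share = {g: c / n for g, c in counts.items()}

# Post-stratification weight per cell: population share / sample share
weight = {g: population_share[g] / sample_share[g] for g in counts}

# Unweighted vs. weighted estimates of the response proportion
unweighted = sum(r for _, r in sample) / n
weighted = (sum(weight[g] * r for g, r in sample)
            / sum(weight[g] for g, _ in sample))

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
# With this invented sample, down-weighting the overrepresented younger
# respondents pulls the estimate from 0.500 down to about 0.357.
```

Weighting like this removes volunteer bias only to the extent that, within each cell, volunteers resemble non-volunteers on the outcome of interest; the papers discussed below examine when that assumption is plausible.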

A session at the American Association for Public Opinion Research conference in May will feature a discussion of two papers that explore this controversy. The papers will be published later this year in a special issue of Public Opinion Quarterly. One paper, “Apples to Oranges or Gala versus Golden Delicious? Comparing Data Quality of Non-Probability Internet Samples to Low Response Rate Probability Samples” by David Dutwin and Trent Buskirk, compares population estimates from probability-based telephone surveys with low response rates to nonprobability internet surveys. The other paper, “Theory and Practice in Non-Probability Surveys: Parallels Between Causal Inference and Survey Inference” by Andrew Mercer, Frauke Kreuter, Scott Keeter and Elizabeth Stuart, discusses conditions under which nonprobability surveys can be expected to provide estimates free of “volunteer bias.” Each paper will receive critiques from the audience and many of the comments will be published in the journal alongside the papers. This session follows a longstanding AAPOR tradition to publicize and discuss urgent issues in the field of public opinion and survey research.

Page Last Revised - October 8, 2021
