The purpose of this study is to determine how accurately interviewers ask questions and how well respondents answer them. The results will identify problematic question wording and guide future interviewer training.
The operation being tested is the Nonresponse Followup interview, which occurs when a census form is not obtained from a household during the decennial census. The 2010 Census Nonresponse Followup was interviewer-administered, asked for the same information as the mailout/mailback census form, and was conducted using pencil and paper, with each interview lasting approximately ten minutes.
Behavior coding is used to test the interviewer and respondent interaction during the Nonresponse Followup interview. As a method, behavior coding systematically describes interviewer–respondent interactions by applying a uniform set of codes to the behaviors that occur during each exchange. There are codes for the ideal question-and-response situation, in which the question is read as worded and the response fits easily into the response categories. Other codes capture interactions that are less than ideal. Such deviations may indicate potentially problematic questions and reduced data quality.
The primary research question for this study is: How well do survey questions perform in interviews? We examine this question using data from 204 audio-taped Nonresponse Followup interviews. We acknowledge that audio tapes omit the non-verbal communication that occurs in face-to-face interviews and that this is a convenience sample rather than a statistically random one. Six interviewers fluent in both English and Spanish were trained in behavior coding. Each coded approximately 40 interviews; for each question, both the first interaction between respondent and interviewer and the final outcome were coded. Additionally, all coders coded the same seven cases (five in English, two in Spanish) to test reliability, that is, when presented with the same interview, how often do the behavior coders independently apply the same codes? Using Fleiss' kappa statistic, we find moderate agreement among behavior coders, with the exception of the coding of Spanish respondents, which is lower. Previous studies have likewise found that coding is less reliable for Spanish-language versions of surveys.
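The reliability measure described above can be sketched in code. The function below is an illustrative implementation of Fleiss' kappa, not the study's actual analysis code, and the data layout (a table of per-utterance category counts) is an assumption about how coded interactions might be tabulated.

```python
# Illustrative sketch: Fleiss' kappa for agreement among multiple coders.
# ratings[i][j] = number of coders who assigned behavior code j to
# coded exchange i. Every row must sum to the same number of coders.

def fleiss_kappa(ratings):
    """Compute Fleiss' kappa for a table of category counts."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    total = n_subjects * n_raters

    # p_j: proportion of all assignments falling in category j
    p = [sum(row[j] for row in ratings) / total
         for j in range(n_categories)]

    # P_i: extent of agreement among coders on subject i
    P = [(sum(c * c for c in row) - n_raters)
         / (n_raters * (n_raters - 1))
         for row in ratings]

    P_bar = sum(P) / n_subjects      # mean observed agreement
    P_e = sum(pj * pj for pj in p)   # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

Under the commonly used Landis and Koch benchmarks, kappa values between 0.41 and 0.60 are read as "moderate" agreement, matching the characterization of the English-language coding above.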