Measuring Quality in a Census, Part 3

An earlier post, “Quality in a Census Part 2,” noted that we have two basic tools to evaluate a census – process-oriented indicators and comparisons to other methods of estimating the population size.

This is a post about some of the process-oriented indicators. I’ll talk about what operations and features of the census might be relevant to answering “how good is it?”

A recent initiative of the American Association for Public Opinion Research urges survey organizations to become more transparent about their process indicators; we at the Census Bureau support such transparency as a way to permit more open evaluation of methods.

As we finish the nonresponse followup stage, we're starting to get some indicators of how things have gone thus far.

All the indicators are preliminary at this writing and will change somewhat as our final operations are completed. But here’s how things are looking right now:

1. We used a short form only for all of the approximately 135 million housing units. We finished the mailout/mailback phase with a 72 percent participation rate, versus 69 percent in 2000 (the combined short-form and long-form rate that year). The 2000 short-form rate by itself was the same as this year's rate, the 72 percent figure I've cited in earlier posts.

2. For about 13 million units in areas with 20 percent or more Spanish speakers, we sent out a bilingual form; our preliminary analysis suggests that it increased the participation rate in those areas by about 2 percentage points over the English-only form.

3. For about 40 million units, disproportionately in hard-to-enumerate areas, we sent out a replacement form a couple of weeks after the first mailed form. It succeeded in increasing the participation rate in these areas, with the result that there is less variation in participation rates in 2010 than in 2000.

4. We used new questions to identify households with dynamic membership and then recontacted them (about 7.5 million in total) to make sure we didn't miscount them (in 2000 we checked large households in this manner, about 2.5 million). We don't yet know how many problems were resolved by this effort.

5. We updated the address list multiple times using different sources. As a result, we had fewer "deadwood" listings (we deleted 4 million during our visits vs. 6 million in 2000). We also added fewer cases to the list during our field work. (This last point is a more ambiguous result, which could have arisen either from a better address list or from less diligent field work.)

6. We designed a more efficient assignment process for the nonresponse followup stage, so miles driven per interview are lower than in 2000; we are under budget on that operation.

7. Despite this, reaching the nonresponse followup cases and getting their cooperation was harder this time; after failing in six tries to contact and interview a unit, we had to get counts of residents from informed neighbors and building managers more frequently (currently about 5 percentage points more such reporting in 2010 than in 2000).

8. The percentage of occupied units that yielded counts of persons, one way or another, may be very slightly lower this year (about 98.0% in 2010 vs. 99.5% in 2000). We think both this finding and item 7 above mirror the lower participation rates in surveys more broadly.

9. We implemented a reinterview process whereby a portion of essentially every enumerator's work was redone and checked against the original results. (In 2000, only about 75% of the enumerators' work was subject to reinterview.)

10. We found a smaller proportion of enumerators failing to meet our quality standards than we did in 2000. (This, too, has multiple interpretations; we used much more consistent, computer-assisted rules for determining violations than were used in 2000.)

11. We found many more vacant units when we went out for nonresponse followup than in 2000 (about 14.3 million vs. 9.9 million); that makes sense, given the widely publicized foreclosure rates. However, we need to know the April 1 residency status of units that are now vacant, so they pose challenges in our nonresponse followup.

12. Finally, in my professional experience with large data collection activities, problems during the data collection phase lead to missed deadlines and budget overruns. For this census, every operation since the fall of 2009 has been on schedule, and cumulatively we're significantly under budget.

As these indicators come in, some look better than the 2000 experience and some do not, as you can see above.

We'll gradually refine these results as our final quality assurance operations (the Vacant/Delete check and Field Verification) take place. I'll report the results when we have them, especially those that show any changes from our initial insights.

Please submit any questions pertaining to this post to ask.census.gov.
