Methodology

Introduction

The U.S. Census Bureau conducted the Management and Organizational Practices Survey-Hospitals (MOPS-HP) to provide subnational estimates on the use of structured management practices in hospitals. The MOPS-HP surveyed approximately 3,200 hospital establishments classified as General Medical and Surgical Hospitals, according to the North American Industry Classification System (NAICS). The MOPS-HP is a supplement to the 2019 Service Annual Survey (SAS). For information on the SAS, see <https://www.census.gov/programs-surveys/sas.html>.

The SAS and MOPS-HP, while connected in terms of coverage, have different goals. SAS is a company-based survey which selected its sample in 2016 and aims to produce national revenue and expense estimates for service industries across 12 NAICS sectors. In contrast, the MOPS-HP is an establishment-based survey with the goal of producing subnational estimates of management practice indices for NAICS 6221 specifically. The MOPS-HP was partially funded and developed jointly with the Harvard Business School. 

Survey Design

Target population: The target population for the MOPS-HP consists of all U.S. establishments with paid employees, classified in NAICS 6221: General Medical and Surgical Hospitals. The target population for the 2019 SAS consists of single-unit and multi-unit firms with paid employees. An establishment is a single physical location where business transactions take place and for which payroll and employment records are kept. Groups of one or more establishments under common ownership or control are firms. A single-unit firm owns or operates only one establishment. A multi-unit firm owns or operates two or more establishments.

Nonemployer firms are out of scope for both the SAS and the MOPS-HP.

Sampling frame: To create the sampling frame, we extract the records for all establishments located in the United States and classified in NAICS 6221 from the Census Bureau’s Business Register, as updated to August 2020. The Business Register is a multi-relational database that contains a record for each known establishment that is located in the United States or one of its territories and has paid employees.

For each of the extracted establishments, we extract revenue, payroll, employment, name and address information, as well as primary identifiers and other classification and identification information. Additional information is also extracted from data collected as part of the 2017 Economic Census.

Extracted establishments are also matched to the firms in the 2019 SAS, to extract additional survey information used in sampling for the MOPS-HP.

Sampling unit: The sampling units for the MOPS-HP are establishments.

Sample design: The MOPS-HP sample consists of all establishments affiliated with any company/firm included in the 2019 SAS sample and classified in NAICS 6221. Conversely, any establishment in NAICS 6221 belonging to a company that was not in the 2019 SAS sample was not eligible for selection into the MOPS-HP.

The MOPS-HP sample design uses a stratified, one-stage design. To meet requirements for producing subnational estimates, primary strata are defined by Federal income tax filing classification, ownership status (i.e., private, public, or government), and Census region. Establishments with all or part of their income exempt from Federal income tax under provisions of section 501 of the Internal Revenue Code are classified as tax-exempt; establishments indicating no such exemption are classified as taxable. All government-operated hospitals are classified as tax-exempt. Sub-stratum bounds are determined based on revenue-based measures of size.

The establishments on the sampling frame are assigned to sub-strata based on their 2018 revenue-based measure of size. All establishments associated with companies in the 2019 SAS are then selected into the sample. The inclusion rate (probability of selection) for selected establishments within each sub-stratum is calculated as the ratio of the number of selected establishments to the number of all in-scope establishments. A MOPS-HP weight for each selected establishment is calculated as the inverse of its inclusion rate.
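As an illustration, the inclusion rate and weight for a single sub-stratum can be sketched as follows (a minimal sketch with hypothetical counts, not actual survey figures):

```python
# Inclusion rate and MOPS-HP weight for one sub-stratum.
# The counts below are hypothetical.

def inclusion_rate_and_weight(n_selected, n_in_scope):
    """Return the inclusion rate and the corresponding MOPS-HP weight."""
    rate = n_selected / n_in_scope   # selected / all in-scope establishments
    weight = 1.0 / rate              # weight is the inverse of the rate
    return rate, weight

rate, weight = inclusion_rate_and_weight(n_selected=40, n_in_scope=100)
# rate = 0.4, so each selected establishment represents 2.5 in-scope establishments
```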

Frequency of sample redesign: The MOPS-HP is a one-time survey; therefore, no sample redesign will occur.

Sample maintenance: Since the MOPS-HP is a one-time survey, no sample maintenance will be performed. 

Data Collection

Data items requested and reference period covered: The MOPS-HP questionnaire comprises 39 questions. The form was directed to the Chief Nursing Officer (CNO) of each establishment and requested information regarding tenure, organizational characteristics, management practices, management training, management of team interactions, staffing and allocation of human resources, standard clinical protocols, and documentation of patients’ medical records. For most questions, respondents were asked to report their response for 2020, as well as a response based on recall for 2019.

The survey questionnaire can be found at <www.census.gov/programs-surveys/mops-hp/technical-documentation/questionnaires.html> along with the corresponding instructions and letters.

Key data items: To be considered a respondent in the MOPS-HP, an establishment had to respond to questions 8, 14, 16, 17, 19, 20, and 21 of the questionnaire.

Type of request: The MOPS-HP is a mandatory survey.

Frequency and mode of contact: In an effort to promote electronic reporting, establishments were instructed to only provide data electronically. An initial letter was sent explaining the necessity and use of the data and provided instructions for online reporting and access to the survey worksheet.

A due date reminder letter was mailed approximately two weeks before the survey was due, and a due date reminder email was sent approximately one week before the survey was due. Mail, email, and telephone calls were also utilized to follow up with establishments that failed to respond.

Compilation of Data

Editing: Reported data were not changed by edits. However, questions that were skipped because of skip patterns are assigned a value of zero when computing the establishment’s management score.

For questions where more than one response was selected, the most structured management practice reported is assigned as the response for computing the establishment’s management score. These edits are only used for calculating management scores but are not used in tabulating the response distribution for each question.

For questions where respondents were asked to ‘select all that apply’ from a list of staff positions, responses are allocated to three broader categories of workers: Senior Managers, Middle Managers, and Nonmanagers. This allocation is used in producing both management scores and response distributions.

Nonresponse: Nonresponse is defined as the inability to obtain requested data from an eligible survey unit. Two types of nonresponse are often distinguished. Unit nonresponse is the inability to obtain any of the substantive measurements about a unit. In most cases of unit nonresponse, the Census Bureau was unable to obtain any information from the survey unit after several attempts to elicit a response. Item nonresponse occurs when a response to a particular question is either missing or unusable.

Nonresponse adjustment and imputation: To account for unit nonresponse, an adjustment factor is applied to the initial sample weight of each MOPS-HP respondent. To compute the values of the unit nonresponse adjustment factors, each sampled establishment is grouped by ownership status, tax status, and Census region into an adjustment cell. For a given adjustment cell, the unit nonresponse adjustment factor is computed as the ratio of:

· the sum of the sample weights of the sampled establishments belonging to the adjustment cell, to

· the sum of the sample weights of the sampled establishments that satisfy the response criteria within the adjustment cell.

The resulting factor is then used, in estimation, to adjust the sampling weight for each respondent in the given adjustment cell to create a nonresponse-adjusted weight.
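The two-bullet ratio above can be sketched in code; the weights and response flags below are hypothetical:

```python
# Unit nonresponse adjustment factor for one adjustment cell:
# (total sample weight in the cell) / (sample weight of respondents in the cell).

def nonresponse_adjustment_factor(sample_weights, responded):
    """sample_weights: weights of all sampled establishments in the cell;
    responded: parallel flags marking which units met the response criteria."""
    total = sum(sample_weights)
    responding = sum(w for w, ok in zip(sample_weights, responded) if ok)
    return total / responding

weights = [2.5, 2.5, 2.5, 2.5]         # four sampled establishments
responded = [True, True, True, False]  # one unit nonrespondent
factor = nonresponse_adjustment_factor(weights, responded)
# factor = 10.0 / 7.5, so each respondent's nonresponse-adjusted weight
# becomes 2.5 * factor
```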

For item nonresponse, the only imputation performed occurs for items where the skip pattern of the form precludes the respondent from answering certain questions. For these skipped items, the value imputed is the least structured practice. For example, if an establishment reported in question 35 that standardized clinical protocols were not usually modified or updated (generating a skip to question 37), then the imputed response for question 36 would be zero; that is, the establishment did not have a time period for which it typically modified or updated its standardized clinical protocols.

Estimation: Six management index scores are constructed for official tabulation:

1. the Hospital Management Index (HMI) (questions 4-8, 10, 14-21, 23-37),

2. the Clinical Management Index (questions 4-8, 10, 14-21),

3. the Team Interactions Index (questions 23-28),

4. the Staffing Index (questions 29-32),

5. the Protocols Index (questions 33-37),

6. and the Manufacturing-Comparable Index (questions 4-21).

Each of the six management index scores is estimated at each tabulation level using multiple steps.

First, for a given index, each response choice for each question used in the construction of the index is assigned a monotonic value ranging from 0 to 1, where 1 is the most structured management practice and 0 is the least structured management practice.

Then, an index score is created for each responding establishment in the survey. This score is determined by calculating the average of the non-missing monotonic values assigned to each of the establishment’s responses for the related questions. For questions where more than one choice can be selected, the monotonic value of the most structured practice selected is used in calculating the management score. For questions that were skipped due to skip patterns, the skipped questions are assigned numerical values of 0. For example, if a respondent selects “No hospital-wide patient care goals” in Question 8 and skips to Question 11, Questions 9 and 10 are scored as zero and included when calculating the average.

Next, for each responding establishment, a weighted score is computed for each index by multiplying each index score by the establishment’s final weight. An establishment’s final weight is determined by multiplying its nonresponse adjustment factor by its sample weight to create a final nonresponse-adjusted weight to be used in tabulation.

Finally, management index scores for each index are then computed at each tabulation level by calculating the average of the weighted scores for each responding establishment classified in the given tabulation level.
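The scoring and weighting steps above can be sketched for a toy pair of respondents. The monotonic values and final weights are hypothetical, and the tabulation-level "average of the weighted scores" is interpreted here as a weighted mean (sum of weighted scores divided by the sum of final weights), which is an assumption about the exact formula:

```python
# Toy index-score tabulation. Values and weights are hypothetical.

def establishment_score(monotonic_values):
    """Average of the monotonic values (0-1) for the index's questions;
    skipped questions are assumed to already be coded as 0."""
    return sum(monotonic_values) / len(monotonic_values)

respondents = [
    {"values": [1.0, 0.5, 0.0], "final_weight": 3.0},
    {"values": [0.5, 0.5, 1.0], "final_weight": 2.0},
]

weighted_scores = [establishment_score(r["values"]) * r["final_weight"]
                   for r in respondents]
total_weight = sum(r["final_weight"] for r in respondents)

# Weighted-mean reading of "average of the weighted scores" (assumption):
index_score = sum(weighted_scores) / total_weight
```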

In addition to the six management indices, a response distribution is calculated for each response choice of the questions within the HMI. Each share-of-responses-by-question estimate is tabulated, at the survey level, as the percentage of the weighted number of establishments that selected a given response choice out of the total weighted number of establishments that answered the question.

Sampling Error: The sampling error of an estimate based on a sample survey is the difference between the estimate and the result that would be obtained from a complete census conducted under the same survey conditions. This error occurs because characteristics differ among sampling units in the population and only a subset of the population is measured in a sample survey. The particular sample used in this survey is one of a large number of samples of the same size that could have been selected using the same sample design. Because each unit in the sampling frame had a known probability of being selected into the sample, it was possible to estimate the sampling variability of the survey estimates.

Common measures of the variability among these estimates are the sampling variance, the standard error, and the coefficient of variation (CV), which is also referred to as the relative standard error (RSE). The sampling variance is defined as the squared difference, averaged over all possible samples of the same size and design, between the estimator and its average value. The standard error is the square root of the sampling variance. The CV expresses the standard error as a percentage of the estimate to which it refers. For example, an estimate of 200 units that has an estimated standard error of 10 units has an estimated CV of 5 percent. The sampling variance, standard error, and CV of an estimate can be estimated from the selected sample because the sample was selected using probability sampling. Note that measures of sampling variability, such as the standard error and CV, are estimated from the sample and are also subject to sampling variability. It is also important to note that the standard error and CV only measure sampling variability. They do not measure any systematic biases in the estimates.
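The CV arithmetic from the worked example above, restated in code:

```python
import math

# Worked example from the text: estimate 200, standard error 10 -> CV 5 percent.
estimate = 200.0
sampling_variance = 100.0                 # squared standard error
std_error = math.sqrt(sampling_variance)  # 10.0
cv = 100.0 * std_error / estimate         # 5.0 percent (relative standard error)
```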

The Census Bureau recommends that individuals using these estimates incorporate sampling error information into their analyses, as this could affect the conclusions drawn from the estimates.

The variance estimates for both the management index scores and the share of response distributions are calculated using a stratified jackknife procedure. Standard errors are published for the MOPS-HP; however, coefficients of variation are not published due to the nature of the published estimates.
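A generic delete-one stratified jackknife can be sketched as below. This follows the textbook JKn formula, not necessarily the Census Bureau's production implementation; the estimator and any data passed in are hypothetical:

```python
# Delete-one stratified (JKn) jackknife variance sketch.

def stratified_jackknife_variance(strata, estimator):
    """strata: list of strata, each a list of (value, weight) pairs.
    estimator: maps a list of (value, weight) pairs to a point estimate."""
    full_estimate = estimator([unit for stratum in strata for unit in stratum])
    variance = 0.0
    for h, stratum in enumerate(strata):
        n_h = len(stratum)
        for j in range(n_h):
            replicate = []
            for g, other in enumerate(strata):
                for i, (value, weight) in enumerate(other):
                    if g == h and i == j:
                        continue  # drop unit j of stratum h
                    if g == h:
                        # reweight the rest of stratum h by n_h / (n_h - 1)
                        replicate.append((value, weight * n_h / (n_h - 1)))
                    else:
                        replicate.append((value, weight))
            variance += (n_h - 1) / n_h * (estimator(replicate) - full_estimate) ** 2
    return variance

def weighted_mean(units):
    """Weighted mean, standing in for an index-score estimator."""
    return sum(v * w for v, w in units) / sum(w for _, w in units)
```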

Confidence Interval: The sample estimate and an estimate of its standard error allow us to construct interval estimates with prescribed confidence that the interval includes the average result of all possible samples with the same size and design. To illustrate, if all possible samples were surveyed under essentially the same conditions, and an estimate and its standard error were calculated from each sample, then approximately 90 percent of the intervals from 1.645 standard errors below the estimate to 1.645 standard errors above the estimate would include the average estimate derived from all possible samples.

Thus, for a particular sample, one can say with specified confidence that the average of all possible samples is included in the constructed interval. For example, suppose that an estimated structured management score is 0.500 in 2020 and that the standard error of this estimate is 0.005. We can then be 90 percent confident that the average estimate from all possible samples of establishments on the frame in 2020 is a management score between 0.492 and 0.508 (0.500 plus or minus 0.008). This is called a 90-percent confidence interval. The true population average of structured management scores during 2020 may or may not be contained in any one of these computed intervals; but for a particular sample, one can say with 90 percent confidence that the average estimate from all possible samples is included in the constructed interval.
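The interval arithmetic in the example above works out as follows:

```python
# 90-percent confidence interval from the example: estimate 0.500,
# standard error 0.005, z = 1.645 for 90 percent confidence.
estimate, std_error, z = 0.500, 0.005, 1.645
half_width = z * std_error  # 0.008225, about 0.008
lower, upper = estimate - half_width, estimate + half_width
# interval is roughly (0.492, 0.508) after rounding
```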

It is important to note that the standard error only measures sampling error. It does not measure any systematic nonsampling error in the estimates.

Nonsampling Error: Nonsampling error encompasses all factors other than sampling error that contribute to the total error associated with an estimate. This error may also be present in censuses and other nonsurvey programs. Nonsampling error arises from many sources: inability to obtain information on all units in the sample; response errors; differences in the interpretation of the questions; mismatches between sampling units and reporting units, requested data and data available or accessible in respondents’ records, or with regard to reference periods; mistakes in coding or keying the data obtained; and other errors of collection, response, coverage, and processing.

Although no direct measurement of nonsampling error was obtained, precautionary steps were taken in all phases of the collection, processing, and tabulation of the data in an effort to minimize its influence. Precise estimation of the magnitude of nonsampling errors would require special experiments or access to independent data and, consequently, the magnitudes are often unavailable.

The Census Bureau recommends that individuals using these estimates factor in this information when assessing their analyses of these data, as nonsampling error could affect the conclusions drawn from the estimates.

Unit nonresponse describes the inability to obtain any of the substantive measurements about a sampled unit. The unit response rate (URR) is defined as the percentage of reporting units in the statistical period, based on unweighted counts, that were eligible for data collection or of unknown eligibility and that responded to the survey. The MOPS-HP had a URR of 54.9% for 2019 and 67.1% for 2020. The URR was calculated by taking the number of respondents (R) and dividing by the number of establishments eligible for data collection (E) plus the number of establishments for which eligibility could not be determined (U); that is, URR = [R/(E+U)]*100.
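The URR formula can be sketched directly; the counts below are hypothetical, not the actual MOPS-HP tallies:

```python
# URR = [R / (E + U)] * 100, with hypothetical counts.

def unit_response_rate(r, e, u):
    """Unweighted unit response rate, in percent:
    r = respondents, e = eligible units, u = units of unknown eligibility."""
    return r / (e + u) * 100.0

urr = unit_response_rate(r=550, e=990, u=10)
# 550 / (990 + 10) * 100 = 55 percent
```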

Disclosure avoidance: Disclosure is the release of data that reveals information or permits deduction of information about a particular survey unit through the release of either tables or microdata. Disclosure avoidance is the process used to protect each survey unit’s identity and data from disclosure. Using disclosure avoidance procedures, the Census Bureau modifies or removes the characteristics that put information at risk of disclosure. Although it may appear that a table shows information about a specific survey unit, the Census Bureau has taken steps to disguise or suppress a unit’s data that may be “at risk” of disclosure while making sure the results are still useful.

The MOPS-HP uses cell suppression for disclosure avoidance.

Cell suppression is a disclosure avoidance technique that protects the confidentiality of individual survey units by withholding cell values from release and replacing the cell value with a symbol, usually a “D”. If the suppressed cell value were known, it would allow one to estimate an individual survey unit’s response too closely.

The cells that must be protected are called primary suppressions.

To make sure the cell values of the primary suppressions cannot be closely estimated by using other published cell values, additional cells may also be suppressed. These additional suppressed cells are called complementary suppressions.

The process of suppression does not usually change the higher-level totals. Values for cells that are not suppressed remain unchanged. Before the Census Bureau releases data, computer programs and analysts ensure primary and complementary suppressions have been correctly applied.

The Census Bureau has reviewed this data product to ensure appropriate access, use, and disclosure avoidance protection of the confidential source data (Project No. P-7502871, Disclosure Review Board (DRB) approval number:  CBDRB-FY23-0311).

For more information on disclosure avoidance practices, see FCSM Statistical Policy Working Paper 22 at <www.hhs.gov/sites/default/files/spwp22.pdf>.

Page Last Revised - June 12, 2023