The Research and Development Survey (RANDS) during COVID-19
The National Center for Health Statistics’ (NCHS) Research and Development Survey (RANDS) is a series of commercial panel surveys collected for methodological research purposes. In response to the COVID-19 pandemic, NCHS expanded the use of RANDS to rapidly monitor aspects of the public health emergency. The RANDS during COVID-19 survey was designed to include COVID-19-related health outcome and cognitive probe questions. Rounds 1 and 2 were fielded June 9–July 6, 2020 and August 3–20, 2020 using the AmeriSpeak Panel. Existing and new approaches were used to: 1) evaluate question interpretation and performance to improve future COVID-19 data collections and 2) produce a set of experimental estimates for public release using weights calibrated to NCHS’ National Health Interview Survey (NHIS) to adjust for potential bias in the panel. Through the expansion of the RANDS platform and ongoing methodological research, NCHS reported timely information about COVID-19 in the United States and demonstrated the use of recruited panels for reporting national health statistics. This report describes the use of RANDS for reporting on the pandemic and the associated methodological survey design decisions, including the adaptation of question evaluation approaches and the calibration of panel weights.
The National Center for Health Statistics (NCHS) at the Centers for Disease Control and Prevention is the principal federal health statistics agency in the United States and guides actions and policies through the dissemination of health statistics (https://www.cdc.gov/nchs). In addition to the data divisions, which collect and disseminate data through a variety of sources, including interview surveys, NCHS houses the Division of Research and Methodology (DRM). DRM has a diverse methodological research program that supports NCHS’ surveys through questionnaire testing and design, statistical research and survey design, and secure data access. In 2015, in response to the growing interest in alternative survey modes and the maturation of relatively inexpensive, probability-sampled commercial survey panels, DRM established the Research and Development Survey (RANDS) program (https://www.cdc.gov/nchs/rands). RANDS is an ongoing series of surveys using commercially available survey panels, typically referred to as “web panels,” designed to serve as a methodological test bed for DRM.
Although RANDS was designed for methodological research and used solely for this purpose for the first rounds of RANDS (Rounds 1, 2, 3, and 4), the coronavirus disease 2019 (COVID-19) pandemic disrupted data collection across NCHS in early 2020. In order to collect COVID-19-related health information that would be useful to the public and to policymakers, as well as provide information about question performance for future COVID-19-related questionnaire development, NCHS adapted the RANDS program to produce selected national and subgroup estimates for public release. This special series of surveys was termed “RANDS during COVID-19” to distinguish it from prior rounds of RANDS and was first fielded through a pair of dual-mode (web and phone) surveys administered by NORC at the University of Chicago (https://www.norc.org) in the summer of 2020. This paper focuses on the use of RANDS during COVID-19 for reporting on the COVID-19 pandemic, although RANDS during COVID-19 is only one of several ways that NCHS reported on the pandemic. Additional approaches and statistics can be found on the NCHS COVID-19 data webpage (https://www.cdc.gov/nchs/covid19/index.htm).
Prior to the COVID-19 pandemic, RANDS was used for several survey methodology research efforts, including measurement error and estimation research. Survey measurement error describes the difference between what a questionnaire item is designed to measure and what it actually measures. Question evaluation is the process of identifying the sources of this error. Traditionally, NCHS has used in-person cognitive interviewing as its primary evaluation method; however, beginning with the second round of RANDS, the Center began exploring how it could complement its cognitive interviewing work with survey-based methods, such as web probing and split-questionnaire experiments wherein different versions of an item are administered to random sub-samples. Embedded probing (commonly referred to as “web probing”) is a relatively new question evaluation method that incorporates probing questions, which collect information about respondents’ thought processes, into a survey questionnaire [2, 3].
RANDS has also been used for estimation research to evaluate methods for producing estimates of national health statistics from panel surveys. DRM has used RANDS to study how panel data can be used to supplement information collected from traditional interview surveys to increase the timeliness of data collection and expand the scope of information collected. Investigations have included the comparison of health outcome estimates from RANDS and NCHS’ household surveys and the exploration of calibration approaches, including propensity score methods, to adjust RANDS panel weights relative to specified reference datasets [5, 6].
In light of the known limitations of web surveys, including smaller sample sizes, coverage bias, and higher non-response bias, several methodological decisions were made to adapt RANDS to report national health estimates related to the COVID-19 pandemic. Two adaptations are discussed in this paper: the use of alternative question evaluation methods and the calibration of the RANDS panel weights. Although RANDS during COVID-19 was developed as a rapid response to the pandemic, the alternative question evaluation methods allowed DRM to quickly develop new questions about the COVID-19 pandemic for the questionnaire while still performing measurement error evaluations to improve question development. For the release of public estimates, the calibration of the RANDS during COVID-19 weights adjusted for potential coverage and non-response bias. This paper describes the RANDS during COVID-19 program and provides additional details about the methodological decisions made to help ensure reliable estimates.
Two rounds of RANDS during COVID-19 were completed in 2020, including Round 1 which was conducted from June 9 to July 6, and Round 2 which was conducted from August 3 to August 20. Data were collected by NORC using the AmeriSpeak Panel, a recruited probability-sampled panel. Responses from the AmeriSpeak Panel for selected outcomes were reported in an online public release (https://www.cdc.gov/nchs/covid19/rands.htm) and are used in this report. Each round was conducted via web and telephone interviews, with most respondents responding via web administration (94.0% in Round 1, 92.9% in Round 2). The number of sampled panelists, respondents by web and telephone modes, and response rates by round are reported in Table 1. The cumulative response rates in Rounds 1 and 2 were 23.0% and 20.3%, respectively. The cumulative response rates account for all response stages including recruitment into the AmeriSpeak Panel and into RANDS, as well as the RANDS during COVID-19 survey completion rate. The completion rates for the two rounds were 78.5% and 69.1%, respectively.
| | Round 1 | Round 2 |
| --- | --- | --- |
| Web | 6,390 (94.0%) | 5,559 (92.9%) |
| Telephone | 410 (6.0%) | 422 (7.1%) |
| Completion rate | 78.5% | 69.1% |
| Cumulative response rate | 23.0% | 20.3% |
RANDS Research and Development Survey. The completion rate is the percent of sample members who completed the survey interview. The cumulative response rate accounts for all response stages including the panel recruitment rate, panel retention rate, and survey completion rate. It is weighted to account for the sample design and differential inclusion probabilities of sample members.
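Since the cumulative response rate is defined above as the product of all response stages, it can be sketched as a simple multiplication. The recruitment and retention rates below are hypothetical placeholders, not the actual AmeriSpeak figures; only the Round 1 completion rate (78.5%) is taken from the text.

```python
# Sketch: cumulative response rate as the product of per-stage rates
# (panel recruitment x panel retention x survey completion), per the
# table note's definition. Recruitment and retention values here are
# hypothetical illustrations, not actual AmeriSpeak rates.
def cumulative_response_rate(recruitment_rate, retention_rate, completion_rate):
    """Multiply the stage-level rates to obtain the cumulative rate."""
    return recruitment_rate * retention_rate * completion_rate

# Round 1's reported completion rate (0.785) combined with illustrative
# recruitment (0.34) and retention (0.86) rates lands near the reported
# cumulative rate of 23.0%.
rate = cumulative_response_rate(0.34, 0.86, 0.785)
print(f"{rate:.1%}")  # 23.0%
```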
For both rounds of RANDS during COVID-19, NORC provided panel weights which account for the sample design, including the sampling probability from the AmeriSpeak Panel and the probability of selection into RANDS, with adjustments for non-response. The final panel weights were raked (an iterative method of adjusting sample weights in order to reflect external population totals) to US population counts by age, sex, Census division, race and Hispanic origin, education, housing tenure, and telephone service and trimmed for extreme weights.
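Raking, as described above, iteratively rescales unit weights until weighted totals match external population margins on each adjustment variable in turn. A minimal sketch, using NumPy and two illustrative variables (the actual NORC adjustment used age, sex, Census division, race and Hispanic origin, education, housing tenure, and telephone service, with trimming):

```python
import numpy as np

# Minimal raking (iterative proportional fitting) sketch: for each
# raking variable in turn, scale every unit's weight by the ratio of
# the target margin to the current weighted margin of its category.
def rake(weights, factors, margins, iterations=50):
    """factors: integer category codes, one array per raking variable;
    margins: target weighted totals aligned with those categories."""
    w = weights.astype(float).copy()
    for _ in range(iterations):
        for f, target in zip(factors, margins):
            current = np.bincount(f, weights=w, minlength=len(target))
            w *= (target / current)[f]  # per-unit adjustment ratio
    return w

# Illustrative data: 1,000 units, 3 age groups, 2 sex categories.
rng = np.random.default_rng(0)
age = rng.integers(0, 3, size=1000)
sex = rng.integers(0, 2, size=1000)
w = rake(np.ones(1000), [age, sex],
         [np.array([300.0, 400.0, 300.0]), np.array([480.0, 520.0])])
# After raking, the weighted age and sex margins match the targets.
```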
The questionnaires included questions related to sociodemographic features, chronic health conditions, mental health, employment and benefits, health insurance, access to care and usual place of care, COVID-19-related health questions, preventive COVID-19 health behaviors, and disruption and access to non-COVID health care. For the question evaluation methods, questions on various topics including telemedicine use and COVID-19 testing were included on the questionnaire. To evaluate the impact of the pandemic on specific health outcomes, three main topics were selected for public release: loss of work due to illness with COVID-19, telemedicine access and use, and reduced access to care. These topics were selected because they were not measured by existing surveys and would provide timely estimates of rapidly changing health outcomes. The RANDS during COVID-19 Round 1 and Round 2 experimental estimates were publicly released online on August 5, 2020 and October 9, 2020, respectively. The outcomes and subgroup variables reported in the public release are described in Table 2. These estimates were termed “experimental estimates” for the public release because RANDS is based on a commercial panel, which has a different error profile than those of NCHS’ traditional household surveys, including a smaller sample size and a lower response rate. Related methodological research to facilitate the transition of RANDS for use as a public data system, including the approaches described in this paper, is ongoing. NCHS’ data presentation standards for proportions are applied to the results reported in this paper.
| Outcome | Question wording | Response options |
| --- | --- | --- |
| Loss of work due to illness | Were you unable to work because you or a family member was sick with the Coronavirus? | Yes; No |
| Provider offers telemedicine | In the last two months, has this provider offered you an appointment with a doctor, nurse, or other health professional by video or by phone? | Yes; No; Don’t know |
| Scheduled one or more telemedicine appointments | In the last two months, have you had an appointment with a doctor, nurse, or other health professional by video or by phone? | Yes; No; Don’t know |
| Provider offered telemedicine prior to pandemic | Did this provider offer you an appointment with a doctor, nurse, or other health professional by video or by phone before the Coronavirus pandemic? | Yes; No; Don’t know |
| Reduced access to care for any reason | In the last two months, were you unable to get any of the following types of care for any reason? (Urgent Care for an Accident or Illness; A Surgical Procedure; Diagnostic or Medical Screening Test; Treatment for Ongoing Condition; A Regular Check-up; Prescription Drugs or Medications; Dental Care; Vision Care; Hearing Care) | Yes; No |
| Reduced access to care due to the pandemic | For the following, were you unable to get this because of the Coronavirus pandemic? (Urgent Care for an Accident or Illness; A Surgical Procedure; Diagnostic or Medical Screening Test; Treatment for Ongoing Condition; A Regular Check-up; Prescription Drugs or Medications; Dental Care; Vision Care; Hearing Care) | Yes, because of the pandemic; No, not because of the pandemic |
| Subgroup variable | Categories |
| --- | --- |
| Age group | 18–44 years, 45–64 years, 65 years and over |
| Race/Hispanic origin | White non-Hispanic, Black non-Hispanic, Other non-Hispanic, Hispanic |
| Education | High school graduate or less, Some college, Bachelor’s degree or above |
| Chronic conditions | One or more chronic conditions, Diagnosed diabetes, Diagnosed hypertension, Diagnosed current asthma |
RANDS Research and Development Survey. Urbanization is assigned by zip code, where metropolitan includes metropolitan and micropolitan areas and non-metropolitan includes all other designations. One or more chronic conditions is defined as a diagnosis of one or more of the following: hypertension, also called high blood pressure; high cholesterol; coronary heart disease; current asthma; chronic obstructive pulmonary disease (COPD), emphysema, or chronic bronchitis; cancer or a malignancy of any kind; or diabetes excluding pre-diabetes and borderline diabetes.
2.1 Alternative question evaluation methods
Typically, NCHS evaluates new questions using in-person, iterative cognitive interviews, a qualitative method that examines how potential survey respondents understand and process survey questions. While NCHS was eventually able to implement virtual cognitive interviewing in 2020, this was not possible before RANDS during COVID-19 went into the field. Therefore, web probes were integrated into the RANDS during COVID-19 questionnaire to collect information on question performance.
Embedded probes were used in the RANDS during COVID-19 questionnaires to specifically examine the interpretation of novel questions designed to measure telemedicine access and COVID-19 testing prevalence. Because NCHS had not conducted qualitative evaluations of these questions before RANDS was administered, open-ended probes were used in the first round to collect information about their interpretations. The resulting narratives were analyzed by DRM researchers using the constant comparative method, and the findings could then be used to design future closed-ended probes. Furthermore, for the telemedicine access question, DRM researchers systematically coded each response, and the codes were appended to the RANDS during COVID-19 Round 1 datafile for statistical analysis. The prevalence of codes across the sample and by subgroups of interest was evaluated and examined for differences. These groups of interest were chosen a priori based on previous findings regarding salient groups’ question response processes [10, 11], and included education, race and Hispanic origin, age, and sex. Chi-squared tests were conducted using the survey package in R, and statistically significant associations were identified using a significance level of 0.05. These analyses are presented using unweighted data, as the underlying data are derived from researcher codes rather than from the respondents themselves, and inferences are not intended to be representative of the population.
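The subgroup association tests above can be illustrated with a design-naive sketch. The paper's analysis used the design-adjusted Rao-Scott test in R's survey package; here a plain Pearson chi-squared test on a contingency table of education level by probe interpretation (with hypothetical counts, not the RANDS coded data) shows the structure of the test.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are education levels, columns
# are counts of respondents coded as using an out-of-scope vs. an
# in-scope interpretation of the probed question.
table = [
    [135, 1007],   # high school graduate or less
    [230,  980],   # some college
    [513, 2182],   # bachelor's degree or above
]

# Pearson chi-squared test of independence (design-naive; the actual
# analysis used a design-adjusted Rao-Scott correction).
stat, p, dof, expected = chi2_contingency(table)
print(dof)  # 2: a 3x2 table has (3-1)*(2-1) degrees of freedom
```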
Since RANDS has limitations compared to traditional probability-sampled surveys, an additional weighting step was performed to calibrate the NORC-provided panel weights to the National Health Interview Survey (NHIS). The NHIS has been conducted by NCHS since 1957 and collects information on a range of health topics, primarily using personal household interviews (https://www.cdc.gov/nchs/nhis/). The NHIS is a traditional household interview survey with high-quality data collection procedures and is often used as a gold standard for health estimates in the US. At the time of the release of experimental estimates from RANDS during COVID-19 Rounds 1 and 2, the 2018 NHIS was the most recent round of data publicly available.
Calibration utilizes information from the reference survey, in this case the NHIS, to correct for some of the potential bias in the panel relative to the reference survey. Weights from the NHIS Sample Adult component, which reports responses from one randomly selected adult per family, were used. Calibration variables were selected among covariates available in both RANDS and the NHIS. Testing on prior rounds of RANDS was used to select calibration variables, including variables associated with participation in RANDS and COVID-19-related outcomes. Selected variables included age, sex, race and Hispanic origin, education, income, Census region, marital status, diagnosed high cholesterol, diagnosed asthma (ever been told you had asthma), diagnosed hypertension, and diagnosed diabetes (ever been told you had diabetes, not including prediabetes or borderline diabetes). Calibration of the RANDS weights was performed using raking, which was implemented using the %RAKING macro in SAS. The final RANDS calibrated weights were normalized to sum to the RANDS sample size for the respective round.
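The final normalization step mentioned above is a simple rescaling: each calibrated weight is multiplied by the ratio of the sample size to the current weight total. A minimal sketch with illustrative weights:

```python
import numpy as np

# Normalize weights so they sum to the sample size, as done for the
# final RANDS calibrated weights. The weight values are illustrative.
def normalize_to_sample_size(weights):
    w = np.asarray(weights, dtype=float)
    return w * (len(w) / w.sum())

w = normalize_to_sample_size([0.4, 1.7, 2.3, 0.9, 1.2])
# w.sum() now equals the sample size (5), while the relative
# differences between the weights are preserved.
```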
| Variable | Category | Eligible sample size | Percent using out-of-scope interpretation | Standard error | Test statistic (χ²) | Degrees of freedom | p-value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age group | 18–34 years | 1,184 | 11.99 | 0.83 | 29.15 | 3 | <0.001 |
| | 65 years and over | 1,739 | 18.98 | 0.90 | | | |
| Race/Hispanic origin | White non-Hispanic | 4,160 | 18.61 | 0.65 | 32.47 | 1 | <0.001 |
| Education | High school graduate or less | 1,142 | 11.82 | 1.01 | 30.14 | 2 | <0.001 |
| | Bachelor’s degree or above | 2,695 | 19.04 | 0.75 | | | |
RANDS Research and Development Survey. Probe question wording was “How do you know whether your provider offers telemedicine or not?” NOTES: Results are unweighted and not representative of the United States population. Total number of eligible respondents in RANDS during COVID-19, Round 1 was 11,355. Respondents eligible for the probe include those who indicated that they had one or more usual places of care. “Other” race includes Black non-Hispanic, Other non-Hispanic, and Hispanic. Percents and standard errors are presented for two race and Hispanic origin categories in this table (White non-Hispanic and Other), although race and Hispanic origin were collected using four categories (see Table 2). Statistical tests were conducted using the design-adjusted Rao-Scott test via R’s survey package and the first-order correction. SOURCE: National Center for Health Statistics, RANDS during COVID-19 Round 1, 2020.
Comparisons between the NORC-provided panel weights and the NHIS-calibrated weights are described in this report, although differences were not evaluated using statistical testing since the two sets of weights are correlated. Comments about the change in an estimate are based on observed differences, and on statistical testing between the RANDS during COVID-19 variable estimates using the NORC-provided panel weights and the 2018 NHIS, which was used to specify the variable distributions for raking. While the NORC-provided weights were adjusted for non-response and coverage biases, the raking step to calibrate RANDS to the NHIS considers additional population features, including the prevalence of selected chronic conditions. This adjustment allows the RANDS data to be representative of demographic and health features in the adult US population, beyond the demographic features included in the adjustment of the NORC-provided weights.
| Variable | Category | Eligible sample size | Percent using out-of-scope interpretation | Standard error | Test statistic (χ²) | Degrees of freedom | p-value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Age group | 18–34 years | 285 | 2.11 | 0.98 | 1.99 | 3 | 0.59 |
| | 65 years and over | 313 | 1.60 | 0.69 | | | |
| Race/Hispanic origin | White non-Hispanic | 789 | 0.76 | 0.35 | 6.26 | 1 | 0.03 |
| Education | High school graduate or less | 205 | 2.44 | 1.10 | 3.15 | 2 | 0.21 |
| | Bachelor’s degree or above | 592 | 0.84 | 0.30 | | | |
RANDS Research and Development Survey. Probe question wording was “What kind of Coronavirus test did you receive?” NOTES: Results are unweighted and not representative of the United States population. Total number of eligible respondents in RANDS during COVID-19, Round 2 was 2,378. Respondents eligible for the probe include those who were in the closed-ended experimental condition and indicated that they had received a test for COVID-19. “Other” race includes Black non-Hispanic, Other non-Hispanic, and Hispanic. Percents and standard errors are presented for two race and Hispanic origin categories in this table (White non-Hispanic and Other), although race and Hispanic origin were collected using four categories (see Table 2). Statistical tests were conducted using the design-adjusted Rao-Scott test via R’s survey package and the first-order correction. SOURCE: National Center for Health Statistics, RANDS during COVID-19 Round 2, 2020.
3.1 Question evaluation results
The mixed-method analysis of the RANDS during COVID-19 question evaluation data showed that question interpretations differed across salient population subgroups. For instance, qualitative analysis of an open-ended web probe following the telemedicine access question (see Table 2) revealed that some respondents interpreted the question as intended (asking whether or not their health provider had telemedicine capabilities). However, others interpreted it out of scope, believing that it asked whether they had used telemedicine. As shown in Table 3, quantitative analysis of the coded text showed that the percent using an out-of-scope interpretation among those having a usual place of care differed significantly by certain respondent characteristics (education, age, race/Hispanic origin) but not by sex.
In another example, analysis of open text in a probe following a question asking about COVID-19 testing (“Have you ever been tested for Coronavirus or COVID-19?”) in the first round of RANDS during COVID-19 revealed that some respondents were interpreting the question as asking about procedures such as temperature checks and self-reported health screeners, and were thus providing potentially false-positive responses. A closed-ended probe was administered following this same question in the second round of data collection to determine the extent of this problem. This probe showed that just over 1% of the respondents who had been tested for COVID-19 (proportion = 0.014, standard error = 0.004) indicated that they answered the question using one of these out-of-scope interpretations. In this case, respondents with lower levels of education used these out-of-scope interpretations more than those with college degrees, but this difference was not statistically significant (Table 4). However, similar to the interpretation of telemedicine access above, the percent using an out-of-scope interpretation differed across respondent racial and ethnic groups.
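For intuition about the size of the quoted standard error, a simple-random-sampling sketch uses the familiar formula se = sqrt(p(1 − p)/n). The published figure is design-adjusted, and the eligible n used below (800) is a hypothetical round number, not the actual probe denominator; the point is only that a proportion near 0.014 with an n of this order yields a standard error near 0.004.

```python
import math

# SRS standard error of a proportion: sqrt(p * (1 - p) / n).
# p is the reported proportion; n = 800 is a hypothetical eligible
# sample size chosen for illustration only.
p, n = 0.014, 800
se = math.sqrt(p * (1 - p) / n)
print(round(se, 4))  # about 0.004, the order of the published SE
```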
3.2 Effects of calibrated weights
The weighted distributions of age group, sex, race and Hispanic origin, and Census region remained fairly similar when using the NORC-provided panel weights or the NHIS-calibrated weights as these variables were used to adjust the AmeriSpeak Panel weights and the 2018 NHIS weights (Table 5). Although NORC raked the AmeriSpeak Panel weights on education, the 2018 NHIS did not include education as an adjustment variable and so there were differences in the weighted distributions. The NHIS-calibrated weights suggested a population with lower education (less than a Bachelor’s degree) than the population distribution using the NORC-provided weights.
R1 = Round 1 (n = 6,800); R2 = Round 2 (n = 5,981).

| Variable | Category | Sample size (R1) | Unweighted % (R1) | NORC-provided weights % (R1) | NHIS-calibrated weights % (R1) | Sample size (R2) | Unweighted % (R2) | NORC-provided weights % (R2) | NHIS-calibrated weights % (R2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Age group | 18–34 years | 1,470 | 21.62 | 30.24 | 29.64 | 1,208 | 20.20 | 30.22 | 29.64 |
| | 65 years and over | 1,806 | 26.56 | 21.49 | 20.61 | 1,682 | 28.12 | 21.49 | 20.61 |
| Education | High school graduate or less | 1,296 | 19.06 | 38.01 | 35.95 | 1,104 | 18.46 | 38.01 | 35.95 |
| | Bachelor’s degree or above | 2,947 | 43.34 | 34.26 | 33.50 | 2,648 | 44.27 | 34.26 | 33.50 |
| Marital status | Living with partner | 503 | 7.40 | 8.56 | 7.57 | 426 | 7.12 | 8.85 | 7.57 |
NORC NORC at the University of Chicago. NHIS National Health Interview Survey. NHIS-calibrated weights were calibrated to the 2018 National Health Interview Survey. RANDS Research and Development Survey. Diagnosed asthma includes those who responded they had ever been told they had asthma. Diagnosed diabetes excludes pre-diabetes and borderline diabetes.
In addition, the calibration step to calibrate the RANDS during COVID-19 weights to the NHIS adjusted for income, marital status, and four chronic conditions. As a result, the NHIS-calibrated weights reflected a population with higher income (e.g., an increased percentage of adults with household income greater than $100,000), a higher percentage of adults who were married, widowed, or separated, and a lower percentage of adults who were divorced, never married, or living with a partner compared to the population represented by the NORC-provided panel weights. One new component of the weighting adjustment for RANDS during COVID-19 was the calibration to health conditions, which had not been previously incorporated. Estimates of diagnosed high cholesterol, asthma, and hypertension among US adults from RANDS during COVID-19 using the NHIS-calibrated weights were lower than estimates using the NORC-provided weights. However, the 2018 NHIS-calibrated estimate for ever diagnosed diabetes, which excludes gestational diabetes but includes diabetes and borderline diabetes, was higher than the weighted estimate of diagnosed diabetes using the NORC-provided weights. All patterns described among the calibrated weights were consistent across Rounds 1 and 2 of RANDS during COVID-19. Overall, the NORC-provided weights and the final NHIS-calibrated weights were similar. The coefficients of variation for the NORC-provided and NHIS-calibrated weights were 0.98 and 0.99, respectively, in Round 1, and 1.01 and 1.03, respectively, in Round 2. However, the weighting step to calibrate the RANDS during COVID-19 NORC-provided weights to the NHIS additionally adjusted for income, marital status, and health conditions, which controlled for these population totals in the evaluation of the selected COVID-19 outcomes.
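The coefficient of variation quoted for the two weight sets is the standard deviation of the weights divided by their mean, a common summary of weight dispersion. A minimal sketch with illustrative weight values:

```python
import numpy as np

# Coefficient of variation of a weight vector: std / mean. Larger
# values indicate more variable weights (and typically more variance
# inflation in the resulting estimates). Weights are illustrative.
def weight_cv(weights):
    w = np.asarray(weights, dtype=float)
    return w.std() / w.mean()

cv = weight_cv([0.3, 0.8, 1.0, 1.2, 3.1])
# A perfectly uniform weight vector would give cv = 0.
```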
RANDS during COVID-19 was developed in rapid response to the need for timely reporting of COVID-19-related health measures. NCHS’ household surveys were impacted by the COVID-19 pandemic: the NHIS transitioned to a phone-only administration in early 2020 and later contacted households by phone first, followed by in-person interviews, while the National Health and Nutrition Examination Survey (NHANES) had to cease operations entirely. NCHS was nonetheless able to continue collecting health information through the RANDS program when in-person, interviewer-administered questionnaires became impossible to implement. This situation highlighted the flexibility of RANDS and its potential to be used as a novel survey approach for reporting national health estimates in a timely manner, particularly as compared to the typical turn-around time of a federal household survey.
However, while NCHS was able to quickly adapt RANDS to produce health estimates during the COVID-19 pandemic, these surveys differ from other NCHS survey systems in several important ways. First and foremost, RANDS is conducted using recruited, commercial survey panels. Commercial survey panels are groups of potential survey respondents maintained by private companies. Panelists are brought into these panels either passively (opt-in or non-probability panels) or actively based on statistical sampling methodology (recruited panels). In the latter case, companies use a random-digit-dial or address-based sampling frame, allowing them to assign probabilities of selection to each panelist. However, compared to traditional NCHS household surveys, even recruited panels generally produce lower-quality data due to smaller sample sizes, non-response bias, and coverage bias, as the sampling frame may not cover the entire population. Because panels must be continually maintained and refreshed, recruited panels tend to be limited in size. For instance, the NORC AmeriSpeak Panel has approximately 49,000 panel members (https://amerispeak.norc.org/Documents/Research/AmeriSpeak%20Technical%20Overview%202019%2002%2018.pdf), and NORC limits the size and frequency of individual surveys so that panelists are not overtaxed. While the large sample sizes of NCHS’ traditional surveys allow for detailed subgroup analysis, subgroup analyses from panel surveys are less reliable with smaller sample sizes. In addition, commercial panel surveys may suffer from coverage bias as the sampling frame may not be representative of the population or may exclude certain populations, such as persons without internet access.
Some panels attempt to mitigate this potential bias by providing non-internet panelists with an internet-connected device, while other companies (including NORC) use telephone interviews to reach this population, although there is still a risk of coverage bias due to a lack of telephone access.
In order to use RANDS during COVID-19 for estimation while accounting for these limitations, NCHS considered methodological design decisions prior to releasing these experimental estimates, including concurrent evaluation of the questionnaire and the formulation of a set of calibrated weights.
Given the limitations the pandemic placed on traditional question evaluation methods like cognitive interviewing and the need to administer RANDS during COVID-19 quickly, NCHS relied on web probing to evaluate new survey items. Using a combination of open-ended and close-ended probes, a series of mixed method analyses were used to uncover the response processes underlying the experimental estimates and how interpretations differed across subgroups of interest.
Calibrated weights were formulated by raking the NORC-provided RANDS during COVID-19 weights to the 2018 NHIS Sample Adult weights on selected demographic and health variables. This calibration step adjusted the RANDS during COVID-19 data prior to estimation so that it represented the same population as the NHIS, accounting for the differences observed in health outcomes between the two surveys.
This paper lays the framework for the RANDS during COVID-19 series, including specific results from rounds 1 and 2 which were conducted in the summer of 2020. A third round of RANDS during COVID-19 is planned for summer 2021 and will not only provide additional measurements on the published indicators of interest but will also be an opportunity for NCHS to continue evaluating pandemic-related survey items and the best approaches to calibrate commercial survey panel data. This expansion of the RANDS platform demonstrates its capability as a new data source to complement and extend current data collections in the federal statistical system.
Scanlon P. The effects of embedding closed-ended cognitive probes in a web survey on survey response. Field Methods. (2019) ; 31: (4): 328–343. doi: 10.1177/1525822X19871546.
Behr D, Meitinger K, Braun M, Kaczmirek L. Web probing – implementing probing techniques from cognitive interviewing in web surveys with the goal to assess the validity of survey questions. Mannheim, GESIS – Leibniz-Institute for the Social Sciences (GESIS – Survey Guidelines). (2017) . doi: 10.15465/gesis-sg_en_023.
Scanlon P. Using Targeted Embedded Probes to Quantify Cognitive Interviewing Findings. In: Beatty P, Collins D, Kaye L, Padilla JL, Willis G, Wilmot A, eds. Advances in Questionnaire Design, Development, Evaluation and Testing. (2020) . doi: 10.1002/9781119263685.ch17.
He Y, Cai B, Shin H-C, Beresovsky V, Parsons V, Irimata K, et al. The National Center for Health Statistics’ 2015 and 2016 Research and Development Surveys. National Center for Health Statistics. Vital Health Stat. (2020) ; 1: (64). https://www.cdc.gov/nchs/data/series/sr_01/sr01-64-508.pdf.
Parker J, Miller K, He Y, Scanlon P, Cai B, Shin H-C, Parsons V, Irimata K. Overview and initial results of the National Center for Health Statistics’ Research and Development Survey. Statistical Journal of the IAOS. (2020) ; 36: : 1199–1211.
Irimata KE, He Y, Cai B, Shin H-C, Parsons VL, Parker JD. Comparison of Quarterly and Yearly Calibration Data for Propensity Score Adjusted Web Survey Estimates. Survey Methods: Insights from the Field, Special issue “Advancements in Online and Mobile Survey Methods.” (2020) . https://surveyinsights.org/?p=13426.
Parker JD, Talih M, Malec DJ, et al. National Center for Health Statistics Data Presentation Standards for Proportions. National Center for Health Statistics. Vital Health Stat. (2017) ; 2: (175).
Miller K, Willson S, Chepp V, Padilla J-L, (eds). Cognitive Interviewing Methodology. Hoboken, NJ: Wiley and Sons; (2014) . doi: 10.1002/9781118838860.
Glaser B, Strauss A. The Discovery of Grounded Theory: Strategies for Qualitative Research. New Brunswick, NJ: AldineTransaction; (1967) .
Miller K. Conducting cognitive interviews to understand question-response limitations among poorer and less educated respondents. American Journal of Health Behavior. (2003) ; 27: (S3): 264–272. doi: 10.5993/AJHB.27.1.s3.10.
Miller K, Willis G. Cognitive Models of Answering Processes. The SAGE Handbook of Survey Methodology. Thousand Oaks, CA: Sage; (2016) . doi: 10.4135/9781473957893.n15.
Lumley T. Analysis of complex survey samples. Journal of Statistical Software. (2004) ; 9: (8): 1–19. doi: 10.18637/jss.v009.i08.
Izrael D, Hoaglin DC, Battaglia MP. A SAS Macro for Balancing a Weighted Sample. SUGI 25 Statistics and Data Analysis. (2000). https://support.sas.com/resources/papers/proceedings/proceedings/sugi25/25/st/25p258.pdf.