survey error
Recently Published Documents

Total documents: 121 (five years: 30)
H-index: 13 (five years: 2)

2021 · pp. 0739456X2110432
Author(s): Meadhbh Maguire

This article is concerned with two aspects of how planning practitioners use survey-derived data: how planners integrate the limitations of survey questionnaires into practice, and the prevalence of such data within planning. Using a web survey (n = 201) and interviews (n = 18) of Canadian municipal planners, I find that survey data are heavily relied on, but many planners do not seem to be aware of cognitive biases when designing surveys, and those who are aware have little knowledge of how to mitigate them. To develop planners’ understanding of these biases and improve the survey data they collect, quantitative methods courses within planning curricula could expand beyond statistical analysis to incorporate survey design and the “total survey error approach” of survey methodology.


2021
Author(s): Sabrina Jasmin Mayer, Laura Scholaske

Surveys of hard-to-survey target groups are prone to errors and biases. In this paper, we use the Total Survey Error (TSE) framework and a study of unaccompanied refugee minors (URM) in Germany to discuss how a mixed-methods, quantitative-dominant research design can address the challenges of quantitative-only surveys of such groups. We show that unit nonresponse and measurement are the two main sources of bias that qualitative research can partly compensate for. In addition, taking ethical considerations into account when researching URMs affects the quality of quantitative surveys; this effect cannot be avoided, but researchers should classify and document it. We conclude that surveying hard-to-survey populations benefits from combining quantitative surveys with semi-structured interviews.


2021 · pp. 000276422110216
Author(s): Kazimierz M. Slomczynski, Irina Tomescu-Dubrow, Ilona Wysmulek

This article proposes a new approach to analyzing protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world’s nations in specific periods, researchers increasingly turn to ex-post harmonization of survey data sets that were not a priori designed to be comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that variation in the source data, especially deviations from the standards for survey documentation, data processing, and computer files proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use, is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of survey quality measures on indicators of protest participation must be rejected: measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the populations attending demonstrations or signing petitions.
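To make the variance-explained claim concrete, here is a minimal sketch of the kind of survey-level regression that could relate a protest indicator to data-quality measures. The data, variable names, and effect sizes below are invented for illustration and are not the authors’ Survey Data Recycling pipeline.

```python
import numpy as np

# Hypothetical illustration: how much intersurvey variance in a protest
# indicator is accounted for by survey-quality measures. All values are
# simulated; only the number of survey projects mirrors the abstract.
rng = np.random.default_rng(0)
n_surveys = 1184

# Simulated survey-level data: proportion attending demonstrations plus
# three quality indicators (documentation, processing, computer records).
quality = rng.normal(size=(n_surveys, 3))
prop_demo = (0.10
             + 0.01 * quality @ np.array([0.5, 0.3, 0.2])
             + rng.normal(scale=0.05, size=n_surveys))

def r_squared(y, X):
    """Share of variance in y explained by an OLS fit on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Intersurvey variance explained by the quality measures taken together.
print(f"R^2 from quality indicators: {r_squared(prop_demo, quality):.3f}")
```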


2021 · Vol 2020 (1) · pp. 739-749
Author(s): Adhi Candra Maulana, Nori Wilantika

With its mission of providing high-quality statistical data, the data produced by Badan Pusat Statistik (BPS) must have a small total survey error. Total survey error comprises sampling error and nonsampling error, with nonsampling error playing the larger role. In the surveys and censuses conducted by BPS, nonsampling error is strongly influenced by the quality of the field staff recruited by BPS, commonly called “mitra statistik” (statistical partners). Obtaining well-qualified candidate partners requires an assessment or eligibility test, and the current assessment process still uses a paper-and-pencil test (PPT). The PPT method has several problems, including the security of the test items and the transparency of participants’ scores. This study aims to develop a computer-based testing system to address the problems of the current examination system. The system was developed using a modified waterfall method. The result is a computer-based testing system named SICATMiS, built on the open-source software TCExam. The system has 3 actors and 5 main menus and can run both online and on a local network. The system was evaluated with scenario testing, user acceptance testing, and the System Usability Scale (SUS). In the scenario testing, all 47 scenarios ran successfully. Of the 14 statements in the user acceptance test, 68% of respondents strongly agreed that SICATMiS meets user needs. The SUS evaluation indicates that SICATMiS is very well accepted and ready for use.
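Since the abstract reports a System Usability Scale evaluation, a short sketch of the standard SUS scoring rule may help readers interpret the result; the example responses below are hypothetical and not taken from the SICATMiS study.

```python
def sus_score(responses):
    """System Usability Scale score for one respondent.

    `responses` is a list of 10 ratings on a 1-5 scale, in questionnaire
    order. Odd-numbered items are positively worded and contribute
    (score - 1); even-numbered items are negatively worded and contribute
    (5 - score). The sum is scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent; a score above roughly 68 is commonly read as
# above-average usability, consistent with "very well accepted".
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # -> 87.5
```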


2020
Author(s): Echo GQ Nelson, Maureen Murdoch, Siamak Noorbaloochi

Background: Total Survey Error is typically operationalized as non-response bias plus measurement error, without considering sampling error’s contribution to total error. The impact of bias on effect sizes is also not well described.
Objective: To explore the risk of obtaining survey values importantly different from true population values through sampling error alone, to explore sampling error’s unique contribution to total survey error, and to identify how much non-response bias can be tolerated before odds ratios deviate significantly from the true value.
Methods: Using R, we simulated a population of 20,000 “men” and “women” based on an actual population of Veterans. We assigned attributes of being exposed/unexposed to military combat or sexual trauma and of having or not having disability benefits for posttraumatic stress disorder (“service connection”). We then simulated multiple surveys using sample sizes and response rates commonly seen in survey research.
Results: Through sampling error alone, individual sample prevalences differed from the true value by 10 full percentage points at probabilities between 2.7% (combat) and 55% (military sexual trauma) for sample sizes ≤ 300 (“men” and “women” combined). Mean sampling error frequently exceeded non-response bias. Across all sample size/response rate combinations (“men/women” combined), individual sampling errors ranged from -18.2% to 17.2% for combat, -40.9% to 41.8% for military sexual assault, and -45.9% to 69.1% for service connection. Modeling showed that biases as small as 1 percentage point within an individual cell of a 2x2 contingency table substantially altered the odds ratio estimates between combat, military sexual trauma, and service connection, whereas altering the marginal totals of the 2x2 table did not affect the odds ratio at all.
Conclusions: Sampling error’s impact on total survey error can be substantial, while even small degrees of non-response bias can distort odds ratios. Back-of-the-envelope techniques could help investigators plan for and avoid these issues.
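The following is a minimal sketch of the kind of simulation described, written in Python rather than the R used in the paper. The 20,000-person population size mirrors the abstract; the prevalence, sample size, and 2x2 table counts are illustrative and not the paper’s values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Finite population with one binary exposure (e.g. combat), illustrative prevalence.
N, true_prev, n, reps = 20_000, 0.15, 300, 10_000
population = rng.random(N) < true_prev

# Sampling error alone: how often does a sample prevalence miss the truth
# by 10 full percentage points at n = 300?
sample_prev = np.array([
    population[rng.choice(N, size=n, replace=False)].mean() for _ in range(reps)
])
off_by_10 = np.mean(np.abs(sample_prev - true_prev) >= 0.10)
print(f"P(|sample prevalence - truth| >= 10 points) = {off_by_10:.3%}")

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Shifting a single cell by 1 percentage point of the table total moves the
# OR noticeably, while rescaling the table (marginals) leaves it unchanged.
a, b, c, d = 120, 80, 60, 140            # illustrative counts (total = 400)
shift = 0.01 * (a + b + c + d)           # 1 point of the table total
print(odds_ratio(a, b, c, d))                   # baseline OR = 3.5
print(odds_ratio(a + shift, b - shift, c, d))   # cell-level bias changes the OR
print(odds_ratio(2 * a, 2 * b, 2 * c, 2 * d))   # uniform scaling preserves the OR
```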


Author(s): Ting Yan

I review selected articles from the survey methodology literature on the consequences of asking sensitive questions in censuses and surveys, using a total survey error (TSE) framework. I start with definitions of sensitive questions and move to an examination of the impact of including sensitive questions on various sources of survey error—specifically, survey respondents’ willingness to participate in a survey (unit nonresponse), their willingness to respond to next rounds of interviews (wave nonresponse), their likelihood of providing an answer to sensitive questions after agreeing to participate in the survey (item nonresponse), and the accuracy of respondents’ answers to sensitive questions (measurement error). I also review the simultaneous impact of sensitive questions on multiple sources of error in survey estimates and discuss strategies to mitigate the impact of asking sensitive questions on measurement errors. I conclude with a summary and suggestions for future research. Expected final online publication date for the Annual Review of Statistics and Its Application, Volume 8 is March 8, 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.


Author(s): Kristen Olson, James Wagner, Raeda Anderson

Survey costs are a major driver of survey design decisions and are thought to be related to survey errors. Despite their importance, no common language exists for discussing survey costs, nor are there established criteria for identifying which cost metrics are useful for which purposes. Past efforts to study survey costs may have been hampered by the notion that more reporting is better reporting. This article starts by introducing a typology for survey cost metrics defined by the type of cost (estimated, observed in records, and actually incurred), currency versus non-currency measures, and level of aggregation (total, by components, per unit, relative). We also suggest a set of criteria – errors in costs, generalizability, and the degree to which a cost measure is informative about survey error sources – for evaluating the utility of cost metrics, and we illustrate these evaluative criteria with the cost metrics in the typology. We argue that clearly articulating types of survey costs and setting these baseline evaluative criteria for the utility of different types of costs will help expand research in this critical area. We conclude with recommendations for future research on costs within and across organizations.
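As a reading aid, here is a minimal sketch of the proposed typology expressed as a small data structure; the class and field names are my own labels for the three dimensions named in the abstract, not an interface from the article.

```python
from dataclasses import dataclass
from enum import Enum

class CostType(Enum):
    ESTIMATED = "estimated"
    OBSERVED_IN_RECORDS = "observed in records"
    ACTUALLY_INCURRED = "actually incurred"

class Aggregation(Enum):
    TOTAL = "total"
    BY_COMPONENT = "by components"
    PER_UNIT = "per unit"
    RELATIVE = "relative"

@dataclass
class CostMetric:
    name: str
    cost_type: CostType
    is_currency: bool          # currency vs. non-currency (e.g. interviewer hours)
    aggregation: Aggregation

# Hypothetical example: interviewer hours per completed interview, taken from records.
hours_per_complete = CostMetric(
    name="interviewer hours per complete",
    cost_type=CostType.OBSERVED_IN_RECORDS,
    is_currency=False,
    aggregation=Aggregation.PER_UNIT,
)
print(hours_per_complete)
```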


Author(s): Marcus E. Berzofsky, Andrew Moore, G. Lance Couzens, Lynn Langton, Chris Krebs

We use a total survey error approach to examine and make recommendations on how to adjust for non-sampling error in longitudinal, mixed-mode surveys. Using data from the National Crime Victimization Survey (NCVS), we examine three major sources of non-sampling error: telescoping, mode effects, and fatigue. We present an assessment of each source of error from a total survey error perspective and propose alternative adjustments that better account for this error. Findings suggest that telescoping and fatigue are likely sources of error in the NCVS, but the use of mixed modes is not. Furthermore, both telescoping and fatigue are present in longitudinal surveys, and accounting for one but not the other results in estimates that under- or overestimate the measures of interest—in this case, the rate of crime in the United States.

