Survey Quality
Recently Published Documents

Total documents: 86 (five years: 20)
H-index: 11 (five years: 1)

2021 ◽  
pp. 557-578
Author(s):  
Sharon L. Lohr

2021 ◽  
pp. 000276422110216
Author(s):  
J. Craig Jenkins ◽  
Joonghyun Kwak

A common claim about the affluent democracies is that protest is trending, becoming more legitimate and widely used by all political contenders. In the new democracies, protest is seen as having contributed to democratization, but growing apathy has led to protest decline, while in authoritarian regimes protest may be spurring more democratization. Assessing these ideas requires comparative trend data covering 15 or more years, but constructing such data confronts problems. The major problem is that the most available survey item asks “have you ever joined (lawful) demonstrations,” making it difficult to determine when this protest behavior occurred. We advance a novel method for timing these “ever” responses by focusing on young adults (aged 18-23 years), who are likely reporting on participation within the past 5 years. Drawing on the Survey Data Recycling harmonized data set, we use a multilevel model including harmonization and survey quality controls to create predicted probabilities for young adult participation (576 surveys, 119 countries, 1966-2010). Aggregating these into country-year rate estimates, we find that they compare favorably with overlapping estimates from surveys asking about “the past 5 years or so” and with event data from the PolDem project. Harmonization and survey quality controls improve these predicted values. These data provide 15+ year trend estimates for 60 countries, which we use to illustrate the possibilities of estimating comparative protest trends.
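
A minimal sketch of the general idea described above (not the authors' actual Survey Data Recycling pipeline or multilevel specification): fit a logistic model of "ever demonstrated" with a survey-quality control, predict probabilities for 18-23-year-olds, and aggregate them to country-year rates. All variable names and the synthetic data are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: "ever joined a demonstration",
# age, and a survey-level quality score (all synthetic).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "country": rng.choice(["A", "B", "C"], n),
    "year": rng.choice(np.arange(1990, 2011), n),
    "age": rng.integers(18, 80, n),
    "quality": rng.normal(0, 1, n),          # survey-quality control
})
logit_p = -1.0 - 0.02 * (df["age"] - 18) + 0.3 * df["quality"]
df["ever_demo"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Flat logistic model with an age term and a quality control
# (the paper uses a multilevel model; this is only a sketch).
model = smf.logit("ever_demo ~ age + quality", data=df).fit(disp=0)

# Predicted probabilities for young adults (18-23), who are assumed
# to report participation from roughly the past five years.
young = df[df["age"].between(18, 23)].copy()
young["p_hat"] = model.predict(young)

# Aggregate to country-year rate estimates.
rates = young.groupby(["country", "year"])["p_hat"].mean()
print(rates.head())
```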


2021 ◽  
pp. 000276422110216
Author(s):  
Kazimierz M. Slomczynski ◽  
Irina Tomescu-Dubrow ◽  
Ilona Wysmulek

This article proposes a new approach to analyze protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world’s nations in specific periods, researchers increasingly turn to ex-post harmonization of different survey data sets not a priori designed as comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that the variation in source data, especially deviations from standards of survey documentation, data processing, and computer files—proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use—is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of measures of survey quality on indicators of protest participation must be rejected. Measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the populations attending demonstrations or signing petitions.
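
A schematic illustration of the final claim (not the SDR implementation itself): regress survey-level protest proportions on quality indicators and read off the share of intersurvey variance they explain. Variable names and the synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey-level data: one row per national survey, with the
# proportion reporting demonstration attendance and three quality scores
# (documentation, data processing, computer records), all synthetic.
rng = np.random.default_rng(1)
n_surveys = 1184
surveys = pd.DataFrame({
    "doc_quality": rng.normal(0, 1, n_surveys),
    "processing_quality": rng.normal(0, 1, n_surveys),
    "records_quality": rng.normal(0, 1, n_surveys),
})
surveys["pct_demonstrated"] = (
    0.15
    + 0.01 * surveys["doc_quality"]
    + 0.01 * surveys["processing_quality"]
    + 0.005 * surveys["records_quality"]
    + rng.normal(0, 0.08, n_surveys)
).clip(0, 1)

# OLS of the protest proportion on the three quality measures; the R^2
# is the share of intersurvey variance attributable to survey quality.
fit = smf.ols(
    "pct_demonstrated ~ doc_quality + processing_quality + records_quality",
    data=surveys,
).fit()
print(f"Intersurvey variance explained by quality measures: {fit.rsquared:.1%}")
```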


2021 ◽  
Author(s):  
Sven Stadtmüller ◽  
Henning Silber ◽  
Christoph Beuthner

Today, there are more survey results available than ever before. This increase in survey data is, however, accompanied by a decline in survey quality. Thus, it is more likely than in the past that citizens and politicians get a biased picture of public opinion when relying on survey results. Those misperceptions can have worrying consequences for political discourse and decision-making. With the present study, we aim to investigate to what extent the public draws on survey quality information when evaluating the trustworthiness of a survey result. To explore this research question, we implemented a vignette experiment in an online panel survey (n = 3,313) in which each respondent was confronted with four different, randomly assigned descriptions of a survey and then asked to evaluate the trustworthiness of the respective survey result. The survey descriptions varied regarding the methodological information provided (i.e., sample size, sampling method, and sample balance). The results showed that survey quality information had only a minor effect on perceptions of trust compared to respondents’ characteristics, such as pre-existing opinions on the topic or general trust in science. Yet, trust in the survey result was significantly influenced by the sample size and sample balance, but not by the sampling method. Finally, in line with information processing theory, the relevance of survey quality information increased with the cognitive abilities of the respondent.
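
A hedged sketch of how such a vignette experiment is commonly analyzed: regress trust ratings on the randomly assigned survey attributes plus a respondent characteristic. The attribute levels, variable names, and data below are invented, not the authors' instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical vignette data: each row is one evaluated survey description
# with randomly assigned attributes and the respondent's trust rating (1-7).
rng = np.random.default_rng(2)
n = 3313 * 4  # 3,313 respondents x 4 vignettes each
vignettes = pd.DataFrame({
    "sample_size": rng.choice([200, 1000, 5000], n),
    "sampling": rng.choice(["probability", "nonprobability"], n),
    "balanced": rng.choice([0, 1], n),
    "trust_science": rng.normal(0, 1, n),   # respondent characteristic
})
vignettes["trust_rating"] = (
    4
    + 0.0002 * vignettes["sample_size"]
    + 0.3 * vignettes["balanced"]
    + 0.8 * vignettes["trust_science"]
    + rng.normal(0, 1, n)
).clip(1, 7)

# OLS with vignette attributes and a respondent-level covariate; in the
# paper, respondent characteristics dominate the quality cues.
fit = smf.ols(
    "trust_rating ~ np.log(sample_size) + C(sampling) + balanced + trust_science",
    data=vignettes,
).fit()
print(fit.params)
```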


2021 ◽  
pp. 089443932098525
Author(s):  
Jannes Jacobsen ◽  
Simon Kühne

Panel attrition poses major threats to the survey quality of panel studies. Many features have been introduced to keep panel attrition as low as possible. Based on a random sample of refugees, a highly mobile population, we investigate whether using a mobile phone application improves address quality and response behavior. Various features, including geo-tracking and the collection of email addresses and address changes, are tested. Additionally, we investigate respondent and interviewer effects on consent to download the app and to share GPS geo-positions. Our findings show that neither geo-tracking nor the provision of email addresses nor the collection of address changes through the app improves address quality substantially. We further show that interviewers play an important role in convincing the respondents to install and use the app, whereas respondent characteristics are largely insignificant. Our findings provide new insights into the usability of mobile phone applications and help determine whether they are a worthwhile tool to decrease panel attrition.
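
A minimal sketch of separating interviewer from respondent effects on app consent, here as a linear probability model with interviewer random intercepts (the study's actual specification may differ). All names and the synthetic data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: respondents nested in interviewers, with a binary
# indicator for consenting to install the app (synthetic values).
rng = np.random.default_rng(3)
n_interviewers, per_interviewer = 80, 25
interviewer_effect = rng.normal(0, 0.15, n_interviewers)
rows = []
for i in range(n_interviewers):
    for _ in range(per_interviewer):
        age = rng.integers(18, 70)
        p = np.clip(0.4 + interviewer_effect[i] - 0.002 * (age - 18), 0.01, 0.99)
        rows.append({"interviewer": i, "age": age,
                     "consent": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# Linear probability model with a random intercept per interviewer; a
# large interviewer variance relative to the respondent coefficient
# mirrors the finding that interviewers matter most.
m = smf.mixedlm("consent ~ age", data=df, groups=df["interviewer"]).fit()
print(m.summary())
```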


2020 ◽  
Vol 9 (12) ◽  
pp. 749
Author(s):  
Matthew S. O’Banion ◽  
Michael J. Olsen ◽  
Jeff P. Hollenbeck ◽  
William C. Wright

Extensive gaps in terrestrial laser scanning (TLS) point cloud data can primarily be classified into two categories: occlusions and dropouts. These gaps adversely affect derived products such as 3D surface models and digital elevation models (DEMs), requiring interpolation to produce a spatially continuous surface for many types of analyses. Ultimately, the relative proportion of occlusions in a TLS survey is an indicator of survey quality. Recognizing that regions of a scanned scene occluded from one scan position are likely visible from another point of view, a prevalence of occlusions can indicate an insufficient number of scans and/or poor scanner placement. Conversely, a prevalence of dropouts is ordinarily not indicative of survey quality, as a scanner operator cannot usually control the presence of specular reflective or absorbent surfaces in a scanned scene. To this end, this manuscript presents a novel methodology to determine data completeness by properly classifying and quantifying the proportion of the site that consists of point returns and the two types of data gaps. Knowledge of the data gap origin not only facilitates the judgement of TLS survey quality but can also identify pooled water when water reflections are the main source of dropouts in a scene, which is important for ecological research such as habitat modeling. The proposed data gap classification methodology was successfully applied to DEMs for two study sites: (1) a controlled test site established by the authors as a proof of concept for the classification of occlusions and dropouts, and (2) a rocky intertidal environment (Rabbit Rock) presenting immense challenges for developing a topographic model due to significant tidal fluctuations, pooled water bodies, and rugged terrain generating many occlusions.
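
A toy illustration of the gap-classification idea on a raster: cells with a return are data, no-return cells that were in a scanner's line of sight are treated as dropouts, and the remaining no-return cells as occlusions. The grids and the visibility mask are invented placeholders, not the authors' algorithm.

```python
import numpy as np

# Hypothetical 5x5 raster of a scanned site.
# has_return[i, j] -> at least one TLS point fell in this cell
# visible[i, j]    -> cell was in line of sight of >= 1 scan position
has_return = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
], dtype=bool)
visible = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
], dtype=bool)

# 0 = return, 1 = dropout (visible but no return, e.g. water or dark
# surfaces), 2 = occlusion (no scan position could see the cell).
gap_class = np.zeros(has_return.shape, dtype=int)
gap_class[~has_return & visible] = 1
gap_class[~has_return & ~visible] = 2

total = gap_class.size
print("returns:   ", (gap_class == 0).sum() / total)
print("dropouts:  ", (gap_class == 1).sum() / total)
print("occlusions:", (gap_class == 2).sum() / total)
```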


2020 ◽  
Vol 6 (159) ◽  
pp. 147-152
Author(s):  
D. Kopytkov ◽  
G. Samchuk

The article deals with the problem of determining the transport fatigue of mass transit passengers as one of the states of the human body. Transport fatigue is proposed to be evaluated using a questionnaire, with subsequent quality assessment by methods of mathematical statistics.
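
The abstract does not specify which statistical methods are used; as one common quality check for a questionnaire-based scale, a Cronbach's alpha sketch on hypothetical fatigue-item responses is shown below (an assumption for illustration, not the authors' procedure).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 1-5 ratings of five transport-fatigue questionnaire items.
rng = np.random.default_rng(4)
latent = rng.normal(0, 1, 200)
items = np.clip(
    np.round(3 + latent[:, None] + rng.normal(0, 0.7, (200, 5))), 1, 5
)
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```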


Author(s):  
Martin Neil ◽  
Norman Fenton ◽  
Magda Osman ◽  
Scott McLachlan

Widely reported statistics on Covid-19 across the globe fail to take account of both the uncertainty of the data and possible explanations for this uncertainty. In this paper we use a Bayesian Network (BN) model to estimate the Covid-19 infection prevalence rate (IPR) and infection fatality rate (IFR) for different countries and regions where relevant data are available, combining multiple sources of data in a single model. The results show that Chelsea, Mass. (USA) and Gangelt (Germany) have relatively higher infection prevalence rates (IPR) than Santa Clara (USA), Kobe (Japan), and England and Wales. In all cases the infection prevalence is significantly higher than what has been widely reported, with much higher community infection rates in all locations. For Santa Clara and Chelsea, both in the USA, the most likely IFR values are 0.3-0.4%. Kobe, Japan, is very unusual in comparison, with values an order of magnitude lower, at 0.001%. The IFR for Spain is centred around 1%. England and Wales lie between the Spanish and the USA/German values, with an IFR around 0.8%. There remains some uncertainty around these estimates, but an IFR greater than 1% looks remote for all regions/countries. We use a Bayesian technique called ‘virtual evidence’ to test the sensitivity of the IFR to two significant sources of uncertainty: survey quality and uncertainty about Covid-19 death counts. With these adjustments, the estimates for IFR are most likely to be in the range 0.3%-0.5%.
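
A greatly simplified Monte Carlo sketch of the kind of calculation involved (not the authors' Bayesian Network, which combines many more data sources and handles test and death-count uncertainty): a Beta posterior for infection prevalence from an antibody survey, combined with a death count, yields an IFR distribution. All numbers below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
draws = 100_000

# Hypothetical region: antibody survey results and official counts.
tested, positive = 3000, 90          # serological survey (placeholder)
population = 2_000_000
covid_deaths = 600                   # reported deaths (placeholder)

# Beta(1, 1) prior on the infection prevalence rate (IPR), updated with
# the survey; test sensitivity/specificity are ignored here for brevity.
ipr = rng.beta(1 + positive, 1 + (tested - positive), draws)

# Implied number infected and the infection fatality rate (IFR).
infected = ipr * population
ifr = covid_deaths / infected

print(f"IPR: median {np.median(ipr):.2%}, 95% interval "
      f"[{np.percentile(ipr, 2.5):.2%}, {np.percentile(ipr, 97.5):.2%}]")
print(f"IFR: median {np.median(ifr):.2%}, 95% interval "
      f"[{np.percentile(ifr, 2.5):.2%}, {np.percentile(ifr, 97.5):.2%}]")
```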

