Response Burden and Data Quality in Business Surveys

2021 ◽  
Vol 37 (4) ◽  
pp. 811-836
Author(s):  
Marco Bottone ◽  
Lucia Modugno ◽  
Andrea Neri

Abstract. Response burden has long been a concern for data producers. In this article, we investigate the relationship between measures of actual and perceived burden and provide empirical evidence of their association with data quality. We draw on two business surveys conducted by Banca d’Italia since 1970, which provide a very rich and unique source of information. We find evidence that perceived burden is affected by actual burden, but that the latter is not its only driver. Our results also show a clear link between a respondent’s perceived effort and the probability of not answering some important questions (such as those relating to expectations of future investments and turnover) or of dropping out of the survey. By contrast, we do not find significant effects on the quality of answers to quantitative questions such as business turnover and investments. Overall, these findings imply that data producers should target perceived burden, in addition to actual burden, to increase data quality.
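For illustration only (the article publishes no code), the association described above could be sketched as a logistic regression of item nonresponse on perceived and actual burden. Everything below, including the variable names perceived_burden, actual_burden, and nonresponse and the simulated data, is a hypothetical placeholder rather than the authors' dataset or model.

```python
# Hypothetical sketch: modelling item nonresponse as a function of
# perceived and actual response burden (all variables are invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "actual_burden": rng.normal(size=n),     # e.g. minutes spent, standardized
    "perceived_burden": rng.normal(size=n),  # e.g. self-rated effort, standardized
})
# Simulated outcome: 1 = a key question was left unanswered.
logit = -1.0 + 0.6 * df["perceived_burden"] + 0.2 * df["actual_burden"]
df["nonresponse"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("nonresponse ~ perceived_burden + actual_burden", data=df).fit()
print(model.summary())
```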

2019 ◽  
Author(s):  
Emir Efendic ◽  
Philippe van de Calseyde ◽  
Anthony M Evans

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one factor that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1,928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult, and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy, and effort is therefore uncorrelated with the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.
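Purely as a hedged sketch of the reasoning above, in which response time is taken as a cue to effort and effort in turn informs judged quality, the mediation logic could be checked with a simple regression-based decomposition. The data and column names below (response_time, perceived_effort, judged_accuracy) are simulated placeholders, not the studies' data or analysis code.

```python
# Hypothetical regression-based mediation sketch (Baron & Kenny style):
# does perceived effort carry the effect of response time on judged accuracy?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
response_time = rng.normal(size=n)                       # slow vs. fast, standardized
perceived_effort = 0.5 * response_time + rng.normal(size=n)
judged_accuracy = 0.4 * perceived_effort + rng.normal(size=n)
df = pd.DataFrame({"response_time": response_time,
                   "perceived_effort": perceived_effort,
                   "judged_accuracy": judged_accuracy})

total = smf.ols("judged_accuracy ~ response_time", data=df).fit()            # total effect
mediator = smf.ols("perceived_effort ~ response_time", data=df).fit()        # path a
direct = smf.ols("judged_accuracy ~ response_time + perceived_effort", data=df).fit()  # paths b, c'
print(total.params, mediator.params, direct.params, sep="\n")
```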


Author(s):  
Cheng Guo ◽  
Kelly Caine

Social Question & Answer (Q&A) sites are a unique source of health information that draw from personal, rather than professional experience. When people ask or answer questions about health using these sites, they may do so using their real name, or another type of identity such as pseudonymity (e.g., a username or nickname) or anonymity. People’s behavior may differ when they have a choice about which type of identity they choose, especially the context of different levels of sensitivity of content (e.g., health vs. non- health). In this work, we explore the relationship between different types of identity (e.g., anonymity and pseudonymity) and several Q&A metrics of user behavior on Yahoo Answers in the context of health and non-health content using path analysis. We find that health-related questions are more likely to be asked and answered anonymously. We also find that anonymous answers have more upvotes and downvotes than pseudonymous answers indicating more engagement. We conclude by suggesting that health Q&A sites and other online health communities may improve the quality of discussion by providing anonymity features and implementing moderation mechanisms.


Author(s):  
Hanyu Sun ◽  
Frederick G Conrad ◽  
Frauke Kreuter

Abstract. Interviewer-respondent rapport is generally considered to be beneficial for the quality of the data collected in survey interviews; however, the relationship between rapport and data quality has rarely been directly investigated. We conducted a laboratory experiment in which eight professional interviewers interviewed 125 respondents to see how the rapport between interviewers and respondents is associated with the quality of the data collected in these interviews, primarily disclosure of sensitive information. It is possible that increased rapport between interviewers and respondents might motivate respondents to be more conscientious, increasing disclosure; alternatively, increased rapport might inhibit disclosure because presenting oneself unfavorably is more aversive if respondents have a positive relationship with the interviewer. More specifically, we examined three issues: (1) what the relationship is between rapport and the disclosure of information of varying levels of sensitivity, (2) how rapport is associated with item nonresponse, and (3) whether rapport can be established equally well in video-mediated interviews and computer-assisted personal interviews (CAPI). We found that (1) an increased sense of rapport among respondents increased disclosure for highly sensitive questions compared with questions on topics of moderate sensitivity; (2) an increased sense of rapport was not associated with a higher level of item nonresponse; and (3) there was no significant difference in respondents’ rapport ratings between video-mediated interviews and CAPI, suggesting that rapport is established just as well in video-mediated interviews as in CAPI.
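As a hedged illustration of the first research question (not the authors' analysis), disclosure could be modelled as a function of rapport, question sensitivity, and their interaction; the variable names and simulated data below are hypothetical.

```python
# Hypothetical sketch: probability of disclosure as a function of rapport,
# question sensitivity (high vs. moderate), and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 800
rapport = rng.normal(size=n)                      # respondent-rated rapport, standardized
high_sensitivity = rng.binomial(1, 0.5, size=n)   # 1 = highly sensitive question
logit = 0.2 + 0.1 * rapport + 0.3 * high_sensitivity + 0.4 * rapport * high_sensitivity
disclosed = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"rapport": rapport,
                   "high_sensitivity": high_sensitivity,
                   "disclosed": disclosed})

model = smf.logit("disclosed ~ rapport * high_sensitivity", data=df).fit()
print(model.summary())
```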


2021 ◽  
pp. 004912412098620
Author(s):  
Cornelia Eva Neuert

The quality of data in surveys is affected by response burden and questionnaire length. As the number of questions increases, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web questionnaire influences respondents’ focus of attention. In two experiments, embedded in an eye-tracking study and an online survey, respectively, a variety of indicators show that data quality is lower when the experimental question is positioned at the end rather than at the beginning of a questionnaire. Practical implications are discussed.
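As a purely illustrative sketch of the kind of comparison described (not the study's actual indicators, counts, or code), a single data-quality indicator, say the share of respondents who straightline a grid item, could be compared between the beginning and end positions with a chi-square test; all numbers below are invented.

```python
# Hypothetical comparison of one data-quality indicator (straightlining)
# between respondents who saw the item early vs. late in the questionnaire.
from scipy.stats import chi2_contingency

# Rows: item at beginning / item at end; columns: straightlined yes / no (invented counts).
table = [[30, 270],
         [55, 245]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```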


2016 ◽  
Vol 30 (2) ◽  
pp. 76-86 ◽  
Author(s):  
Judith Meessen ◽  
Verena Mainz ◽  
Siegfried Gauggel ◽  
Eftychia Volz-Sidiropoulou ◽  
Stefan Sütterlin ◽  
...  

Abstract. Recently, Garfinkel and Critchley (2013) proposed distinguishing between three facets of interoception: interoceptive sensibility, interoceptive accuracy, and interoceptive awareness. This pilot study investigated how these facets are interrelated and whether interoceptive awareness is related to the metacognitive awareness of memory performance. A sample of 24 healthy students completed a heartbeat perception task (HPT) and a memory task. Judgments of confidence were requested for each task. Participants filled in questionnaires assessing interoceptive sensibility, depression, anxiety, and socio-demographic characteristics. The three facets of interoception were found to be uncorrelated, and interoceptive awareness was not related to the metacognitive awareness of memory performance. Whereas memory performance was significantly related to metamemory awareness, interoceptive accuracy (HPT) and interoceptive awareness were not correlated. Results suggest that future research on interoception should assess all facets of interoception in order to capture the multifaceted quality of the construct.
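As a minimal, hypothetical sketch of the kind of correlational check reported here (the study's own data are not reproduced), the pairwise associations between the three facet scores could be computed as Pearson correlations; the column names are placeholders.

```python
# Hypothetical sketch: pairwise Pearson correlations between three
# interoception facet scores (simulated placeholder data, n = 24).
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "sensibility": rng.normal(size=24),  # questionnaire score
    "accuracy": rng.normal(size=24),     # heartbeat perception task score
    "awareness": rng.normal(size=24),    # confidence-accuracy correspondence
})

for a, b in combinations(df.columns, 2):
    r, p = pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")
```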


2002 ◽  
Author(s):  
R. Arnold ◽  
A. V. Ranchor ◽  
N. H. T. ten Hacken ◽  
G. H. Koeter ◽  
V. Otten ◽  
...  

2020 ◽  
Vol 29 (12) ◽  
pp. 52-58
Author(s):  
E.P. Meleshkina ◽  
S.N. Kolomiets ◽  
A.S. Cheskidova ◽  
...  

Indicators of the rheological properties of dough that can be determined objectively and reliably were identified using the alveograph, with the aim of later creating a classification system for wheat, and for the flour milled from it, according to intended end use. We analysed the relationships between standardized quality indicators and newly developed indicators that identify and differentiate the quality of wheat flour by intended purpose, i.e., by finished product. Methods of mathematical statistics were used for this analysis.
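As a hedged illustration of the statistical step mentioned above (the paper's data are not reproduced here), relationships between standardized and alveograph-based indicators could be summarized in a correlation matrix; the column names below (gluten_content, falling_number, W_deformation_energy, P_L_ratio) are assumed placeholders, not the authors' exact variables.

```python
# Hypothetical sketch: correlation matrix between standardized flour quality
# indicators and alveograph-derived dough indicators (simulated placeholder data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 60
df = pd.DataFrame({
    "gluten_content": rng.normal(28, 3, size=n),         # standardized indicator, %
    "falling_number": rng.normal(300, 40, size=n),        # standardized indicator, s
    "W_deformation_energy": rng.normal(250, 50, size=n),  # alveograph, 1e-4 J
    "P_L_ratio": rng.normal(0.9, 0.2, size=n),            # alveograph tenacity/extensibility
})

print(df.corr(method="pearson").round(2))
```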

