Response Quality of Confidential and Complex Questions

Author(s): Donald R. Williams, Ronald D. Taylor, John H. Summey

Science, 1973, Vol. 182 (4108), pp. 113-115
Author(s): William Kruskal

2019, Vol. 90 (5), pp. 463-472
Author(s): Shinya Masuda, Takayuki Sakagami, Masahiro Morii

2020, Vol. 30 (6), pp. 1763-1781
Author(s): Louisa Ha, Chenjie Zhang, Weiwei Jiang

Purpose: Low response rates in web surveys and the use of different devices to enter web survey responses are the two main challenges to the response quality of web surveys. The purpose of this study is to compare the effects of using interviewers to recruit participants in computer-assisted self-administered interviews (CASI) vs computer-assisted personal interviews (CAPI), and of smartphones vs computers, on participation rate and web survey response quality.

Design/methodology/approach: Two field experiments, based on two similar media use studies of US college students, were conducted to compare response quality across survey modes and response devices.

Findings: Response quality of computer entry was better than smartphone entry in both studies for open-ended and closed-ended question formats. The device effect on overall completion rate was significant only when interviewers were present.

Practical implications: Survey researchers are given guidance on how to conduct online surveys using different devices, and on the choice of question format, to maximize survey response quality. The benefits and limitations of using an interviewer to recruit participants and of smartphones as web survey response devices are discussed.

Social implications: The study shows how computer-assisted self-interviews and smartphones can improve response quality and participation among underprivileged groups.

Originality/value: This is the first study to compare response quality across question formats between CASI, e-mail-delivered online surveys and CAPI. It demonstrates the importance of the human factor in creating a sense of obligation that improves response quality.
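
As a rough illustration of the kind of comparison reported above, the sketch below computes completion rate and a simple open-ended response-quality proxy by survey mode and device. The column names (`mode`, `device`, `completed`, `open_ended_answer`), the toy records, and the word-count proxy are assumptions for illustration only, not the authors' instruments or data.

```python
# Hypothetical illustration: completion rate and a crude open-ended
# response-quality proxy (answer length in words), broken down by
# survey mode and response device. Toy data; assumed column names.
import pandas as pd

records = pd.DataFrame({
    "mode":   ["CASI", "CASI", "CAPI", "CAPI", "email", "email"],
    "device": ["smartphone", "computer", "smartphone", "computer",
               "smartphone", "computer"],
    "completed": [1, 1, 0, 1, 1, 0],
    "open_ended_answer": [
        "ok", "I mostly stream news on my laptop in the evening",
        "", "Mainly social media between classes",
        "podcasts", "",
    ],
})

# Completion rate per device within each mode.
completion = records.groupby(["mode", "device"])["completed"].mean()

# Word count of open-ended answers among completed interviews only,
# as a stand-in for the open-ended response-quality measures.
answered = records[records["completed"] == 1].copy()
answered["answer_words"] = answered["open_ended_answer"].str.split().str.len()
quality = answered.groupby(["mode", "device"])["answer_words"].mean()

print(completion, quality, sep="\n\n")
```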


2004, Vol. 15 (1), pp. 21-36
Author(s): Elisabeth Deutskens, Ko de Ruyter, Martin Wetzels, Paul Oosterveld

2013, Vol. 32 (2), pp. 256-269
Author(s): Vidal Díaz de Rada, Juan Antonio Domínguez-Álvarez

2007, Vol. 66 (2), pp. 103-107
Author(s): Matthias Spörrle, Beatrice Gerber-Braun, Friedrich Försterling

Research on questionnaire design has shown that respondents take a questionnaire's formal aspects into account when formulating their answers to closed-ended questions (e.g., rating scales, multiple-choice questions). Because comparable research on open-response questions has been scarce, the objective of this study was to investigate the systematic influence of the formal features of open-response questions on response behavior. Specifically, in two studies using different opinion topics, we examined how responses to open-ended questions vary as a function of the number of lines provided for the response. In both studies, increasing the number of response lines produced a steady increase in response length (quantity). It also increased the number of arguments in a response (quality) when only a few response lines were provided, but this effect plateaued when many lines were provided. Overall, these results demonstrate that the formal features of open-response questions implicitly communicate the expected quantity and quality of the answer.
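
A minimal sketch of how the two outcome measures discussed above, response length (quantity) and number of arguments (quality), could be tabulated per lines-provided condition. The toy responses and the sentence-based argument-counting heuristic are assumptions; the study itself presumably relied on human coding of arguments.

```python
# Hypothetical illustration: mean words and mean argument count per
# lines-provided condition. Sentence splitting is only a stand-in for
# the manual argument coding a study like this would actually use.
from collections import defaultdict
import re
import statistics

# Toy responses keyed by the number of answer lines offered (assumed data).
responses = {
    2: ["Taxes are too high.", "It wastes money. Nobody benefits."],
    6: ["Taxes are too high. The system is unfair. Services lag behind.",
        "It wastes money. Nobody benefits. Reform is overdue."],
    12: ["Taxes are too high. The system is unfair. Services lag behind. "
         "Also the forms are confusing and the deadlines are strict and "
         "the penalties keep growing every single year without reason."],
}

by_condition = defaultdict(lambda: {"words": [], "arguments": []})
for n_lines, texts in responses.items():
    for text in texts:
        by_condition[n_lines]["words"].append(len(text.split()))
        # Crude proxy: each sentence counted as one argument.
        by_condition[n_lines]["arguments"].append(
            len([s for s in re.split(r"[.!?]+", text) if s.strip()])
        )

for n_lines in sorted(by_condition):
    stats = by_condition[n_lines]
    print(f"{n_lines:>2} lines: "
          f"mean words = {statistics.mean(stats['words']):.1f}, "
          f"mean arguments = {statistics.mean(stats['arguments']):.1f}")
```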


Field Methods, 2017, Vol. 29 (4), pp. 365-382
Author(s): Jan Karem Höhne, Stephan Schlosser, Dagmar Krebs

Measuring attitudes and opinions with agree/disagree (A/D) questions is a common method in social research because it appears to make it possible to measure different constructs with identical response scales. Theoretical considerations, however, suggest that A/D questions require considerable cognitive processing. Item-specific (IS) questions, in contrast, offer content-related response categories, implying less cognitive processing. To investigate the cognitive effort and response quality associated with A/D and IS questions, we conducted a web-based experiment with 1,005 students. Cognitive effort was assessed through response times and answer changes; response quality was assessed through indicators such as dropout rates. According to our results, single IS questions require greater cognitive effort than single A/D questions in terms of response times. Moreover, our findings show substantial differences in the processing of single and grid questions.
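
A minimal sketch, under assumed field names and toy paradata, of the response-time and answer-change comparison between A/D and IS questions described above; how such paradata would actually be captured client-side in a web survey is not shown here.

```python
# Hypothetical illustration of the cognitive-effort comparison:
# mean response time and mean answer-change count per question type.
# The records below are toy paradata, not the study's data.
from statistics import mean

# Each record: (question_type, response_time_ms, n_answer_changes)
paradata = [
    ("AD", 3100, 0), ("AD", 2800, 1), ("AD", 3500, 0),
    ("IS", 4200, 1), ("IS", 3900, 2), ("IS", 4600, 0),
]

for qtype in ("AD", "IS"):
    times = [t for q, t, _ in paradata if q == qtype]
    changes = [c for q, _, c in paradata if q == qtype]
    print(f"{qtype}: mean response time = {mean(times):.0f} ms, "
          f"mean answer changes = {mean(changes):.2f}")
```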


1982, Vol. 10 (2), pp. 251-262
Author(s): Achilles A. Armenakis, William L. Lett
