Erratum to ‘Why Do Respondents Break Off Web Surveys and Does It Matter? Results From Four Follow-up Surveys’

2015 ◽  
Vol 27 (3) ◽  
pp. 446-446
Author(s):  
Nicholas Meese ◽  
Juani Swart ◽  
Richard Vidgen ◽  
Philip Powell ◽  
Chris McMahon

Web-based approaches are increasingly being used for carrying out surveys, for example in research or to obtain user feedback in product and systems development. However, the drawbacks of web surveying are often overlooked. Errors in web surveys can be related to sampling, coverage, measurement, and non-response issues. Low response rates and non-response bias are particularly important for web-based surveys. This paper reports on a web-based survey in an international engineering consultancy, aimed at eliciting feedback on the development of systems to support sustainable engineering, that produced a low response rate. To investigate the reasons for this, a follow-up survey was conducted by telephone. The majority of those questioned were unaware of the original survey. The telephone survey showed that reasons for non-completion by those who were aware may be categorized as resource issues, relevance, and fatigue. Differences between those who were aware of the original survey and those who were not are explored, and a gap is found between intention and action, i.e. good intentions to complete a survey are very unlikely to translate into completed surveys. The paper concludes with practical guidance for administering web-based surveys and observations on the merits of telephone surveys.


2019 ◽  
pp. 004912411985237 ◽  
Author(s):  
Florian Keusch ◽  
Mariel M. Leonard ◽  
Christoph Sajons ◽  
Susan Steiner

Researchers attempting to survey refugees over time face methodological issues because of the transient nature of the target population. In this article, we examine whether applying smartphone technology could alleviate these issues. We interviewed 529 refugees and afterward invited them to four follow-up mobile web surveys and to install a research app for passive mobile data collection. Our main findings are as follows: First, participation in mobile web surveys declines rapidly and is rather selective with significant coverage and nonresponse biases. Second, we do not find any factor predicting types of smartphone ownership, and only low reading proficiency is significantly correlated with app nonparticipation. However, obtaining sufficiently large samples is challenging—only 5 percent of the eligible refugees installed our app. Third, offering a 30 Euro incentive leads to a statistically insignificant increase in participation in passive mobile data collection.


2019 ◽  
pp. 089443931988870
Author(s):  
Melanie Revilla ◽  
Mick P. Couper

More and more respondents are answering web surveys using mobile devices. Mobile respondents tend to provide shorter responses to open questions than PC respondents. Using voice recording to answer open-ended questions could increase data quality and help engage groups usually underrepresented in web surveys. Revilla, Couper, Bosch, and Asensio showed that, while voice recording could be a promising tool, its use still presents many challenges. This article reports results from a follow-up experiment in which the main goals were to (1) test whether different instructions on how to use the voice recording tool reduce technical and understanding problems, and thereby reduce item nonresponse while preserving data quality and the evaluation of the tool; (2) test whether nonresponse due to context can be reduced by using a filter question, and how this affects data quality and the tool evaluation; and (3) understand which factors affect nonresponse to open-ended questions using voice recording, and whether these factors also affect data quality and the evaluation of the tool. The experiment was implemented within a smartphone web survey in Spain focused on Android devices. The results suggest that different instructions did not affect nonresponse to the open questions and had little effect on data quality for those who did answer. Introducing a filter to ensure that people were in a setting that permits voice recording seems useful. Despite efforts to reduce problems, a substantial proportion of respondents are still unwilling or unable to answer open questions using voice recording.


2014 ◽  
Vol 27 (2) ◽  
pp. 289-302 ◽  
Author(s):  
Markus Steinbrecher ◽  
Joss Roßmann ◽  
Jan Eric Blumenstiel

2020 ◽  
pp. 089443931990093
Author(s):  
Jessica Daikeler ◽  
Ruben L. Bach ◽  
Henning Silber ◽  
Stephanie Eckman

Filter questions are used to administer follow-up questions to eligible respondents while allowing respondents who are not eligible to skip those questions. Filter questions can be asked in either the interleafed or the grouped format. In the interleafed format, the follow-ups are asked immediately after each filter question; in the grouped format, follow-ups are asked after the filter question block. Underreporting can occur in the interleafed format due to respondents’ desire to reduce the burden of the survey. This phenomenon is called motivated misreporting. Because smartphone surveys are more burdensome than web surveys completed on a computer or laptop, due to the smaller screen size, longer page loading times, and more distraction, we expect that motivated misreporting is more pronounced on smartphones. Furthermore, we expect that misreporting occurs not only in the filter questions themselves but also extends to data quality in the follow-up questions. We randomly assigned 3,517 respondents of a German online access panel to complete the survey on either a PC or a smartphone. Our results show that while both PC and smartphone respondents trigger fewer filter questions in the interleafed format than in the grouped format, we did not find differences between PC and smartphone respondents regarding the number of triggered filter questions. However, smartphone respondents provide lower data quality in the follow-up questions, especially in the grouped format. We conclude with recommendations for web survey designers who intend to incorporate smartphone respondents in their surveys.
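The two question orderings described above can be illustrated with a minimal sketch. This is not the instrument used in the study; the question texts and the number of filters are invented for illustration only:

```python
# Minimal sketch of the two filter-question formats (hypothetical questions).
FILTERS = ["Did you buy clothing online last month?",
           "Did you buy electronics online last month?"]
FOLLOW_UPS = ["How much did you spend?",
              "How satisfied were you with the purchase?"]

def interleafed(answers):
    """Interleafed format: each filter's follow-ups come right after it."""
    order = []
    for question, triggered in zip(FILTERS, answers):
        order.append(question)
        if triggered:  # follow-ups only for eligible ("yes") respondents
            order.extend(FOLLOW_UPS)
    return order

def grouped(answers):
    """Grouped format: all filters first, then follow-ups for each 'yes'."""
    order = list(FILTERS)
    for triggered in answers:
        if triggered:
            order.extend(FOLLOW_UPS)
    return order
```

The sketch makes the misreporting mechanism visible: in the interleafed ordering a respondent quickly learns that answering "yes" costs extra questions, whereas in the grouped ordering all filters are answered before that cost becomes apparent.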


2016 ◽  
Vol 35 (5) ◽  
pp. 654-665 ◽  
Author(s):  
Jonathan Mendelson ◽  
Jennifer Lee Gibson ◽  
Jennifer Romano-Bergstrom

Videos are often used in web surveys to assess attitudes. While including videos may allow researchers to test immediate reactions, there may be issues associated with displaying videos that are overlooked. In this article, we examine the effects of using video stimuli on responses in a probability-based web survey. Specifically, we evaluate the association between demographics, mobile device usage, and the ability to view videos; differences in ad recall based on whether respondents saw a video or still images of the video; whether respondents’ complete viewing of videos is related to presentation order; and the data quality of follow-up questions to the videos as a function of presentation order and complete viewing. Overall, we found that respondents using mobile browsers were less likely to be able to view videos in the survey. Those who could view videos were more likely to indicate recall compared to those who viewed images, and videos that were shown later in the survey were viewed in their entirety less frequently than those shown earlier. These results directly pertain to the legitimacy of using videos in web surveys to gather data about attitudes.


2019 ◽  
Vol 42 ◽  
Author(s):  
John P. A. Ioannidis

Neurobiology-based interventions for mental diseases and searches for useful biomarkers of treatment response have largely failed. Clinical trials should assess interventions related to environmental and social stressors, with long-term follow-up; social rather than biological endpoints; personalized outcomes; and suitable cluster, adaptive, and n-of-1 designs. Labor, education, financial, and other social/political decisions should be evaluated for their impacts on mental disease.


1999 ◽  
Vol 173 ◽  
pp. 189-192
Author(s):  
J. Tichá ◽  
M. Tichý ◽  
Z. Moravec

A long-term photographic search programme for minor planets was begun at the Kleť Observatory at the end of the 1970s using a 0.63-m Maksutov telescope, but with insufficient respect for long-arc follow-up astrometry. More than two thousand provisional designations were given to new Kleť discoveries. Since 1993, targeted follow-up astrometry of Kleť candidates has been performed with a 0.57-m reflector equipped with a CCD camera, and reliable orbits for many previous Kleť discoveries have been determined. The photographic programme has resulted in more than 350 numbered minor planets credited to Kleť, one of the world's most prolific discovery sites. Nearly 50 per cent of them were numbered as a consequence of CCD follow-up observations since 1994. This brief summary describes the results of this Kleť photographic minor planet survey between 1977 and 1996. The majority of the Kleť photographic discoveries are main belt asteroids, but two Amor-type asteroids and one Trojan have also been found.


Author(s):  
D.G. Osborne ◽  
L.J. McCormack ◽  
M.O. Magnusson ◽  
W.S. Kiser

During a project in which regenerative changes were studied in autotransplanted canine kidneys, intranuclear crystals were seen in a small number of tubular epithelial cells. These crystalline structures were seen in both the control specimens and the regenerating specimens, the main differences being in their size and number. The control specimens showed a few tubular epithelial cell nuclei almost completely occupied by large crystals that were not membrane bound. Subsequent follow-up biopsies of the same kidneys contained similar intranuclear crystals, but of a much smaller size; some of these nuclei contained several small crystals. The small crystals appeared one week after transplantation and were still seen four weeks after transplantation. As time passed, the small crystals appeared to fuse into larger crystals.

