Improving the Use of Voice Recording in a Smartphone Survey

2019 ◽  
pp. 089443931988870
Author(s):  
Melanie Revilla ◽  
Mick P. Couper

More and more respondents are answering web surveys using mobile devices. Mobile respondents tend to provide shorter responses to open questions than PC respondents. Using voice recording to answer open-ended questions could increase data quality and help engage groups usually underrepresented in web surveys. Revilla, Couper, Bosch, and Asensio showed that, although voice recording could be a promising tool, its use still presents many challenges. This article reports results from a follow-up experiment with three main goals: (1) test whether different instructions on how to use the voice recording tool reduce technical and comprehension problems, and thereby reduce item nonresponse while preserving data quality and the evaluation of the tool; (2) test whether nonresponse due to context can be reduced by using a filter question, and how this affects data quality and the tool evaluation; and (3) understand which factors affect nonresponse to open-ended questions using voice recording, and whether these factors also affect data quality and the evaluation of the tool. The experiment was implemented within a smartphone web survey in Spain focused on Android devices. The results suggest that different instructions did not affect nonresponse to the open questions and had little effect on data quality for those who did answer. Introducing a filter to ensure that respondents were in a setting that permits voice recording appears useful. Despite efforts to reduce problems, a substantial proportion of respondents remain unwilling or unable to answer open questions using voice recording.

2020 ◽  
pp. 089443931990093
Author(s):  
Jessica Daikeler ◽  
Ruben L. Bach ◽  
Henning Silber ◽  
Stephanie Eckman

Filter questions are used to administer follow-up questions to eligible respondents while allowing respondents who are not eligible to skip them. Filter questions can be asked in either the interleafed or the grouped format. In the interleafed format, the follow-ups are asked immediately after each filter question; in the grouped format, follow-ups are asked after the block of filter questions. Underreporting can occur in the interleafed format because respondents wish to reduce the burden of the survey, a phenomenon called motivated misreporting. Because smartphone surveys are more burdensome than web surveys completed on a computer or laptop, due to the smaller screen size, longer page loading times, and more distraction, we expect motivated misreporting to be more pronounced on smartphones. Furthermore, we expect that misreporting occurs not only in the filter questions themselves but also extends to data quality in the follow-up questions. We randomly assigned 3,517 respondents of a German online access panel to either a PC or a smartphone condition. Our results show that while both PC and smartphone respondents trigger fewer filter questions in the interleafed format than in the grouped format, we did not find differences between PC and smartphone respondents in the number of triggered filter questions. However, smartphone respondents provide lower data quality in the follow-up questions, especially in the grouped format. We conclude with recommendations for web survey designers who intend to incorporate smartphone respondents in their surveys.


2016 ◽  
Vol 35 (5) ◽  
pp. 654-665 ◽  
Author(s):  
Jonathan Mendelson ◽  
Jennifer Lee Gibson ◽  
Jennifer Romano-Bergstrom

Videos are often used in web surveys to assess attitudes. While including videos may allow researchers to test immediate reactions, displaying them can raise issues that are easily overlooked. In this article, we examine the effects of using video stimuli on responses in a probability-based web survey. Specifically, we evaluate the association between demographics, mobile device usage, and the ability to view videos; differences in ad recall based on whether respondents saw a video or still images of the video; whether respondents' complete viewing of videos is related to presentation order; and the data quality of follow-up questions to the videos as a function of presentation order and complete viewing. Overall, we found that respondents using mobile browsers were less likely to be able to view videos in the survey. Those who could view videos were more likely to indicate recall than those who viewed images, and videos shown later in the survey were viewed in their entirety less frequently than those shown earlier. These results directly pertain to the legitimacy of using videos in web surveys to gather data about attitudes.


2018 ◽  
Vol 38 (2) ◽  
pp. 207-224 ◽  
Author(s):  
Melanie Revilla ◽  
Mick P. Couper ◽  
Oriol J. Bosch ◽  
Marc Asensio

We implemented an experiment within a smartphone web survey to explore the feasibility of using voice input (VI) options. Based on the device used, participants were randomly assigned to a treatment or control group. Respondents in the iPhone operating system (iOS) treatment group were asked to use the dictation button, which automatically transcribed their speech into text on the device. Respondents with Android devices were asked to use a VI button that recorded their voice and transmitted the audio file. Both control groups were asked to answer open-ended questions using standard text entry. We found that the use of VI still presents a number of challenges for respondents. Voice recording (Android) led to substantially higher nonresponse, whereas dictation (iOS) led to slightly higher nonresponse, relative to text input. However, completion time was significantly reduced using VI. Among those who provided an answer, dictation produced fewer valid answers and less information, whereas voice recording produced longer and more elaborate answers. Voice recording (Android) led to significantly lower survey evaluations, but dictation (iOS) did not.


2019 ◽  
pp. 089443931987913
Author(s):  
Angelica M. Maineri ◽  
Ivano Bison ◽  
Ruud Luijkx

This study explores some features of slider bars in the context of a multi-device web survey. Using data collected among the students of the University of Trento in 2015 and 2016 by means of two web surveys (N = 6,343 and 4,124) including two experiments, we investigated the effect of the initial position of the handle and the presence of numeric labels on answers provided using slider bars. The initial position of the handle affected answers, and the number of rounded scores increased with numeric feedback. Smartphone respondents appeared more sensitive to the initial position of the handle but less affected by the presence of numeric labels, resulting in a lower tendency toward rounding. Outcomes on anchoring, however, were inconclusive. Overall, no relevant differences were detected between tablet and PC respondents. Understanding to what extent interactive and engaging tools such as slider bars can be successfully employed in multi-device surveys without affecting data quality is a key challenge for those who want to exploit the potential of web-based and multi-device data collection without undermining the quality of measurement.
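The abstract does not define how "rounded scores" were counted. One common operationalization (an assumption here, not the authors' stated measure) is the share of slider answers that land exactly on multiples of 5 or 10 on a fine-grained scale. A minimal sketch:

```python
def rounding_share(answers, base=10):
    """Share of slider answers falling exactly on multiples of `base`.

    A high share on a fine-grained 0-100 slider suggests respondents
    are rounding rather than using the full scale. This is a generic
    illustration, not the measure used in the study above.
    """
    if not answers:
        raise ValueError("answers must be non-empty")
    return sum(1 for a in answers if a % base == 0) / len(answers)

# Hypothetical slider answers on a 0-100 scale
answers = [50, 73, 80, 35, 100, 62, 90, 41]
print(rounding_share(answers))           # multiples of 10
print(rounding_share(answers, base=5))   # multiples of 5
```

Comparing this share across device groups (smartphone vs. tablet vs. PC) would mirror the rounding comparison the abstract describes.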


2018 ◽  
Vol 37 (2) ◽  
pp. 196-213 ◽  
Author(s):  
Colleen A. McClain ◽  
Mick P. Couper ◽  
Andrew L. Hupp ◽  
Florian Keusch ◽  
Gregg Peterson ◽  
...  

This article reviews the existing literature on the collection of paradata in web surveys and extends the research in this area beyond the commonly studied measurement error problem to paradata that can be collected for managing and mitigating other important sources of error. To do so, and in keeping with the nature of paradata as process-oriented, we develop a typology of web survey paradata that incorporates information from all steps in the web survey process. We first define web survey paradata and describe general phases of paradata that run parallel to the steps in fielding a typical web survey. Within each phase, we enumerate several errors within the total survey error paradigm that can be examined with paradata, discussing case studies and motivating examples that illustrate innovative uses of paradata across the web survey process. We conclude with a discussion of open questions and opportunities for further work in this area. Overall, we develop this typology keeping technological advancements at the center of our discussion, but with flexibility to continuously incorporate new developments and trends in both technology and study design. Our typology encourages researchers to think about paradata as tools that can be used to investigate a broader range of outcomes than previously studied.


2018 ◽  
Vol 37 (2) ◽  
pp. 234-247 ◽  
Author(s):  
Hana Lee ◽  
Sunwoong Kim ◽  
Mick P. Couper ◽  
Youngje Woo

Smartphones have become very popular globally, and smartphone ownership has overtaken conventional cell phone ownership in many countries in recent years. With this rapid rise in smartphone penetration, researchers are looking at ways to conduct web surveys using smartphones. This is particularly true of student populations where smartphone penetration is very high and web surveys are already the norm. However, researchers are raising concerns about selection biases and measurement differences between PC and smartphone respondents. Questions also remain about comparisons to traditional interviewer-administered approaches. We designed an experimental comparison between a PC web survey, a smartphone web survey and a computer-assisted telephone interviewing (CATI) survey. This study was conducted using an annual survey of students at a large university in South Korea. The CATI (interviewer-administered) survey had a higher response rate, lower margins of error, and better representation of the student population than the two web (self-administered) modes, but at a higher cost. The CATI survey also had lower rates of item nonresponse. More significant differences were found between the modes for sensitive questions than for nonsensitive ones. This suggests that CATI surveys may still have a role to play in surveys of college students, even in a country with high rates of mobile technology adoption.


2021 ◽  
pp. 1-7
Author(s):  
Constantin Roder ◽  
Uwe Klose ◽  
Helene Hurth ◽  
Cornelia Brendle ◽  
Marcos Tatagiba ◽  
...  

Background and Purpose: Hemodynamic evaluation of moyamoya patients is crucial to deciding the treatment strategy. Recently, CO2-triggered BOLD MRI has been shown to be a promising tool for the hemodynamic evaluation of moyamoya patients. However, the longitudinal reliability of this technique in follow-up examinations is unknown. This study analyzes longitudinal follow-up data of CO2-triggered BOLD MRI to assess the reliability of this technique for long-term control examinations in moyamoya patients. Methods: Longitudinal CO2 BOLD MRI follow-up examinations of moyamoya patients with and without surgical revascularization were analyzed retrospectively for all six vascular territories. If revascularization was performed, any territory affected directly (by the disease or the bypass) or indirectly (by a change of collateral flow after revascularization) was excluded based on angiography findings (group 1). In patients without surgical revascularization between the MRI examinations, all territories were analyzed (group 2). Results: Eighteen moyamoya patients with 39 CO2 BOLD MRI examinations fulfilled the inclusion criteria. The median follow-up between the two examinations was 12 months (range 4–29 months). For the 106 vascular territories analyzed in group 1, the intraclass correlation coefficient was 0.784, p < 0.001; for group 2 (84 territories), it was 0.899, p < 0.001. Within the total follow-up duration of 140 patient months, none of the patients experienced a new stroke. Conclusions: CO2 BOLD MRI is a promising tool for mid- and long-term follow-up examinations of cerebral hemodynamics in moyamoya patients. Systematic prospective evaluation is required before it can become a routine examination.
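The abstract does not state which intraclass correlation model was used. A common choice for two-session test-retest data of this kind is the single-measures consistency form, ICC(3,1); the sketch below (with hypothetical territory values, not the study's data) shows how such a coefficient is computed from mean squares:

```python
import numpy as np

def icc_3_1(data):
    """Single-measures, consistency intraclass correlation, ICC(3,1).

    data: (n_subjects, k_sessions) array of paired measurements,
    e.g. one BOLD response value per vascular territory per exam.
    Note: the study above may have used a different ICC model;
    this is one common choice, shown for illustration.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical territory values at baseline vs. follow-up
baseline  = [1.0, 1.4, 0.8, 2.1, 1.7]
follow_up = [1.1, 1.3, 0.9, 2.0, 1.8]
print(round(icc_3_1(np.column_stack([baseline, follow_up])), 3))
```

An ICC near 1 indicates that territories keep their relative ordering across examinations, which is the sense of "longitudinal reliability" the abstract reports.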


Author(s):  
Charlotte J Hagerman ◽  
Rebecca K Hoffman ◽  
Sruthi Vaylay ◽  
Tonya Dodge

Implementation intentions are a goal-setting technique in which an individual commits to perform a particular behavior when a specific context arises. Recently, researchers have begun studying how implementation intention (II) interventions can facilitate antismoking efforts. The current systematic review synthesized the results of experimental studies that tested the effect of an II intervention on smoking cognitions and behavior. Of 29 reviewed articles, 11 studies met the inclusion criteria. Nine studies (81.8%) tested an II intervention as a cessation tool for current smokers, whereas two tested II interventions as a tool to prevent smoking among predominantly nonsmoking adolescents. A majority of the studies (66.7%) testing II interventions as a cessation tool reported a positive effect on cessation at long-term follow-up. Of the two studies testing II interventions as a prevention tool, one found a positive effect at long-term follow-up. Methodology varied between the studies, highlighting discrepancies in what researchers consider "implementation intentions" to be. II interventions are a promising tool for antismoking efforts, but more research is necessary to determine the best methodology and the populations for whom this intervention will be most effective. Implications: Brief, free, and easily scalable, II interventions to prevent smoking are highly attractive for antismoking efforts. This review outlines the circumstances under which II interventions have demonstrated effectiveness in helping people resist smoking cigarettes. We illuminate gaps in the existing literature, limitations, methodological discrepancies between studies, and areas for future study.


2021 ◽  
pp. 193672442199825
Author(s):  
Felix Bittmann

According to the theory of liking, data quality might improve in face-to-face survey settings when there is a high degree of similarity between respondents and interviewers, for example, with regard to gender or age. Using two rounds of European Social Survey data from 25 countries, including more than 70,000 respondents, this concept is tested for three dependent variables: the amount of item nonresponse, reluctance to answer, and the probability that a third adult interferes with the interview. The match between respondents and interviewers is operationalized using age and gender and their statistical interactions to analyze how matching relates to the outcomes. While previous studies are corroborated, overall effect sizes are small. In general, item nonresponse is lower when a male interviewer conducts the interview. For reluctance, there are no matching effects at all. Regarding the presence of other adults, only female respondents benefit from a gender match, while age has no effect. The results indicate that future surveys should weigh the costs and benefits of sociodemographic matching, as the advantages are probably small.


2007 ◽  
Vol 13 (2) ◽  
pp. 220-223 ◽  
Author(s):  
A Créange ◽  
I Serre ◽  
M Levasseur ◽  
D Audry ◽  
A Nineb ◽  
...  

We used an odometer based on global positioning satellite technology to determine the maximum objective walking distance capacity (MOWD) of patients with multiple sclerosis (MS). The MOWD correlated with the Expanded Disability Status Scale (EDSS) score (r² = 0.41; P < 0.0001), the MSWS-12 scale (r² = 0.46; P < 0.0001), time to walk 10 m (r² = 0.51; P < 0.02), and walking speed (r² = 0.75; P < 0.001). Limitation of walking capacity was measurable up to 4,550 m, strikingly above the 500-m limit of the EDSS. This objective odometer is a promising tool for the evaluation and follow-up of patients with MS.
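The r² values reported above are squared correlation coefficients between the GPS-measured MOWD and each clinical measure; assuming Pearson correlations (the abstract does not say), the computation is straightforward. The data below are hypothetical, not the study's:

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation between two paired clinical measures."""
    r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
    return r ** 2

# Hypothetical paired observations: walking distance (m) vs. EDSS score
mowd = [4200, 3100, 1800, 900, 450, 150]
edss = [1.0, 2.0, 3.5, 4.5, 5.5, 6.0]
print(round(r_squared(mowd, edss), 2))
```

An r² of 0.41 for MOWD vs. EDSS, as reported, means the EDSS score accounts for about 41% of the variance in objectively measured walking distance.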

