Supplemental Material for Sharing a Driver’s Context With a Caller via Continuous Audio Cues to Increase Awareness About Driver State

Author(s): Ruta R. Sardesai, Thomas M. Gable, Bruce N. Walker

The use of auditory menus on a mobile device has been studied in depth for standard flicking, as well as wheeling and tapping interactions. Here, we introduce and evaluate a new type of interaction with auditory menus, intended to speed up movement through a list. This multimodal “sliding index” was compared with the standard flicking interaction on a phone while the user was also engaged in a driving task. The sliding index was found to require less mental workload than flicking. Moreover, the way participants used the sliding index technique modulated their preferences, including their reactions to the presence of audio cues. Follow-on work should study how sliding index use evolves with practice.
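To make the interaction concrete, the following is a minimal sketch of one way a sliding index could work, assuming the finger's normalized position along the screen edge is mapped to alphabetical sections of the list and a short audio cue is played whenever the section changes; the class and function names are hypothetical and not taken from the study.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: map a finger's normalized position along the screen
// edge (0.0 = top, 1.0 = bottom) to a section of an alphabetically sorted
// list, and announce the section label only when it changes.
class SlidingIndex {
public:
    explicit SlidingIndex(std::vector<std::string> sectionLabels)
        : labels_(std::move(sectionLabels)) {}

    // Called on every touch-move event; returns the section under the finger.
    std::size_t onSlide(double normalizedPos) {
        normalizedPos = std::clamp(normalizedPos, 0.0, 1.0);
        std::size_t section =
            std::min(labels_.size() - 1,
                     static_cast<std::size_t>(normalizedPos * labels_.size()));
        if (section != lastSection_) {
            lastSection_ = section;
            announce(labels_[section]);  // short audio cue, e.g. a spoken letter
        }
        return section;
    }

private:
    void announce(const std::string& label) {
        // Placeholder for a text-to-speech or earcon call.
        std::printf("audio cue: %s\n", label.c_str());
    }

    std::vector<std::string> labels_;
    std::size_t lastSection_ = static_cast<std::size_t>(-1);
};

int main() {
    SlidingIndex index({"A", "B", "C", "D", "E"});
    index.onSlide(0.05);  // announces "A"
    index.onSlide(0.42);  // announces "C"
    index.onSlide(0.44);  // same section, no new cue
}
```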


2021, Vol 14 (3), pp. 1-26
Author(s): Danielle Bragg, Katharina Reinecke, Richard E. Ladner

As conversational agents and digital assistants become increasingly pervasive, understanding their synthetic speech becomes increasingly important. Simultaneously, speech synthesis is becoming more sophisticated and manipulable, providing the opportunity to optimize speech rate to save users time. However, little is known about people’s ability to understand fast speech. In this work, we extend the first large-scale study on human listening rates, enlarging the prior study of 453 participants to 1,409 participants and adding new analyses of this larger group. Run on LabintheWild with volunteer participants, the study was screen-reader accessible and measured listening rate as accuracy in answering questions spoken by a screen reader at various rates. Our results show that people who are visually impaired, who often rely on audio cues and access text aurally, generally have higher listening rates than sighted people. The findings also suggest a need to expand the range of rates available on personal devices. These results demonstrate the potential for users to learn to listen to faster rates, expanding the possibilities for human-conversational agent interaction.


Author(s): Bruce N. Walker, Jeffrey Lindsay

If it is not possible to use vision when navigating through one's surroundings, moving safely and effectively becomes much harder. In such cases, non-speech audio cues can serve as navigation beacons, as well as denote features in the environment relevant to the user. This paper outlines and summarizes the development and evaluation of a System for Wearable Audio Navigation (SWAN), including an overview of completed, ongoing, and future research relating to the sounds used, the human-system interaction, output hardware, divided attention, and task effects.
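As an illustration of how a non-speech beacon can encode direction, the sketch below computes the relative bearing from the user's heading to a waypoint and maps it to a stereo pan value for the beacon sound; this is an assumed, simplified rendering strategy and not the actual SWAN implementation.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical audio-beacon sketch: given the user's position and heading and
// a waypoint, compute the relative bearing and map it to a pan value
// (0 = centered, +/-1 = fully to one side). Names and mappings are
// illustrative only.
struct Point { double x, y; };

const double kPi = 3.14159265358979323846;

// Relative bearing in radians, wrapped to (-pi, pi]. Positive angles are
// counterclockwise in this math-style coordinate frame.
double relativeBearing(Point user, double headingRad, Point waypoint) {
    double absolute = std::atan2(waypoint.y - user.y, waypoint.x - user.x);
    double rel = absolute - headingRad;
    while (rel <= -kPi) rel += 2.0 * kPi;
    while (rel >   kPi) rel -= 2.0 * kPi;
    return rel;
}

// Map bearing to a pan value the audio engine can apply to the beacon sound.
double bearingToPan(double bearingRad) {
    // sin() gives 0 when the waypoint is straight ahead or behind,
    // +/-1 when it lies directly to the side.
    return std::sin(bearingRad);
}

int main() {
    Point user{0.0, 0.0}, waypoint{10.0, 10.0};
    double heading = 0.0;  // facing along +x
    double bearing = relativeBearing(user, heading, waypoint);
    std::printf("bearing = %.2f rad, pan = %.2f\n",
                bearing, bearingToPan(bearing));
}
```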


1989, Vol 8 (2), pp. 162-169
Author(s): Hans van der Mars

The effects of specific verbal praise by an experienced male physical education specialist on the off-task behavior of three second-grade students were studied. A multiple-baseline research design across subjects was used to assess the intervention, which consisted of teacher praise aimed at the subjects’ class conduct and motor skill performance. To ensure that (a) the intervention would be implemented and (b) the praise would be contingent upon appropriate student conduct and skill performance, audio cues were provided by way of prerecorded cues on microcassettes. Two boys and one girl in a second-grade class served as subjects. Off-task behavior and teacher praise data were collected from videotapes of 15 regular physical education classes. Results showed that baseline levels of off-task behavior were reduced significantly after introduction of the intervention for each subject. Specific verbal praise was effective in reducing off-task behavior of second-grade students in physical education.


2020, Vol 85, pp. 103068
Author(s): Ouren X. Kuiper, Jelte E. Bos, Cyriel Diels, Eike A. Schmidt

i-com, 2014, Vol 13 (3)
Author(s): Anke Brock, Slim Kammoun, Marc Macé, Christophe Jouffrais

Summary: In the absence of vision, mobility and orientation are challenging. Audio and tactile feedback can be used to guide visually impaired people. In this paper, we present two complementary studies on the use of vibrational cues for hand guidance during the exploration of itineraries on a map, and for whole-body guidance in a virtual environment. Concretely, we designed wearable Arduino bracelets integrating a vibratory motor that produces multiple patterns of pulses. In the first study, this bracelet was used to guide the hand along unknown routes on an interactive tactile map. A Wizard-of-Oz study with six blindfolded participants showed that tactons (vibrational patterns) may be more efficient than audio cues for indicating directions. In the second study, the bracelet was used by blindfolded participants to navigate in a virtual environment. The results presented here show that it is possible to significantly decrease travel distance with vibrational cues. In sum, these preliminary but complementary studies suggest the value of vibrational feedback in assistive technology for the mobility and orientation of blind people.
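For readers unfamiliar with tactons, the following Arduino-style sketch shows how distinct pulse patterns could be produced from a vibration motor on a digital pin; the pin number, pulse counts, and timings are illustrative assumptions rather than the pattern set used in these studies.

```cpp
// Illustrative Arduino sketch: drive a vibration motor from a digital pin and
// play simple "tactons" -- distinct pulse patterns for left, right, and
// straight ahead. Pin and timings are assumptions, not the authors' design.
const int MOTOR_PIN = 9;

// Play `count` pulses of `onMs` milliseconds, separated by `offMs` pauses.
void playTacton(int count, int onMs, int offMs) {
  for (int i = 0; i < count; ++i) {
    digitalWrite(MOTOR_PIN, HIGH);
    delay(onMs);
    digitalWrite(MOTOR_PIN, LOW);
    delay(offMs);
  }
}

void cueLeft()     { playTacton(1, 400, 200); }  // one long pulse
void cueRight()    { playTacton(3, 100, 100); }  // three short pulses
void cueStraight() { playTacton(2, 200, 150); }  // two medium pulses

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // In a real system the choice of cue would come from the guidance software;
  // here the patterns simply cycle for demonstration.
  cueLeft();
  delay(1500);
  cueRight();
  delay(1500);
  cueStraight();
  delay(1500);
}
```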


2018, Vol 2018, pp. 1-17
Author(s): Simone Spagnol, György Wersényi, Michał Bujacz, Oana Bălan, Marcelo Herrera Martínez, ...

Electronic travel aids (ETAs) have been a focus of research since technology made it possible to design relatively small, light, and mobile devices for assisting the visually impaired. Because visually impaired persons rely on spatial audio cues as their primary sense of orientation, providing an accurate virtual auditory representation of the environment is essential. This paper gives an overview of the current state of spatial audio technologies that can be incorporated into ETAs, with a focus on user requirements. Most currently available ETAs either fail to address user requirements or underestimate the potential of spatial sound itself, which may explain, among other reasons, why no single ETA has gained widespread acceptance in the blind community. We believe there is ample room for applying the technologies presented in this paper, with the aim of progressively bridging the gap between accessibility and accuracy of spatial audio in ETAs.
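As a rough illustration of the binaural cues such a spatial audio renderer would synthesize, the sketch below computes an interaural time difference using the Woodworth spherical-head approximation and a crude, assumed interaural level difference model; practical ETAs would typically rely on measured HRTFs rather than these simplified formulas.

```cpp
#include <cmath>
#include <cstdio>

// Two classic binaural cues for a source at a given azimuth. The ITD uses the
// Woodworth spherical-head approximation; the ILD model is a crude, assumed
// frequency-independent approximation for illustration only.
const double kHeadRadiusM  = 0.0875;  // average head radius in meters
const double kSpeedOfSound = 343.0;   // m/s
const double kMaxIldDb     = 20.0;    // assumed broadband ILD at 90 degrees

// Azimuth in radians, 0 = straight ahead, positive toward the right ear,
// restricted here to the frontal half-plane [-pi/2, pi/2].
double itdSeconds(double azimuthRad) {
    return (kHeadRadiusM / kSpeedOfSound) * (azimuthRad + std::sin(azimuthRad));
}

double ildDecibels(double azimuthRad) {
    return kMaxIldDb * std::sin(azimuthRad);
}

int main() {
    for (double deg : {0.0, 30.0, 60.0, 90.0}) {
        double az = deg * 3.14159265358979323846 / 180.0;
        std::printf("azimuth %5.1f deg: ITD = %6.1f us, ILD = %5.1f dB\n",
                    deg, itdSeconds(az) * 1e6, ildDecibels(az));
    }
}
```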

