auditory display
Recently Published Documents

TOTAL DOCUMENTS: 205 (five years: 29)
H-INDEX: 17 (five years: 1)

2021 ◽ Vol 28 (6) ◽ pp. 1-34
Author(s): Jordan Wirfs-Brock ◽ Alli Fam ◽ Laura Devendorf ◽ Brian Keegan

We present a first-person, retrospective exploration of two radio sonification pieces that employ narrative scaffolding to teach audiences how to listen to data. To decelerate and articulate design processes that occurred at the rapid pace of radio production, the sound designer and producer wrote retrospective design accounts. We then revisited the radio pieces through principles drawn from guidance design, data storytelling, visualization literacy, and sound studies. Finally, we speculated how these principles might be applied through interactive, voice-based technologies. First-person methods enabled us to access the implicit knowledge embedded in radio production and translate it to technologies of interest to the human–computer-interaction community, such as voice user interfaces that rely on auditory display. Traditionally, sonification practitioners have focused more on generating sounds than on teaching people how to listen; our process, however, treated sound and narrative as a holistic, sonic-narrative experience. Our first-person retrospection illuminated the role of narrative in designing to support people as they learn to listen to data.


2021
Author(s): Brett L. Fiedler ◽ Emily B. Moore ◽ Tiara Sawyer ◽ Bruce N. Walker

Author(s): Chihab Nadri ◽ Seul Chan Lee ◽ Siddhant Kekal ◽ Yinjia Li ◽ Xuan Li ◽ ...

Highway-rail grade crossings (HRGCs) present multiple collision risks for motorists, suggesting the need for additional countermeasures to increase driver compliance. The use of in-vehicle auditory alerts (IVAAs) at HRGCs has been increasing, but there are few standards or guidelines on how such alerts should be implemented. In the current study, we investigated the effect of different auditory display variables, such as display type and acoustics, on subjective user assessments. We recruited 24 participants and asked them to rate 36 different IVAAs, each belonging to one of three display types (earcons, i.e., short synthetic tones; speech alerts; and hybrid alerts consisting of an earcon followed by speech), on 11 subjective rating scales. Results showed that hybrid alerts received better overall ratings for acceptance, safety, and semantic understanding than earcon or speech alerts. Additional analyses revealed that semantic variables, such as speech order and voice gender, should be accounted for when designing IVAAs in an HRGC context. Hybrid IVAAs with spatial audio received lower urgency and hazard-level ratings. The findings of the current study can help inform the design of IVAAs for HRGCs.
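To make the three display types concrete, here is a minimal Python sketch that synthesizes an earcon (a short rising two-tone motif) and assembles a hybrid alert by appending a speech segment. The tone frequencies, durations, and the silent speech placeholder are illustrative assumptions, not the stimuli used in the study.

```python
# Hypothetical sketch of the three IVAA display types discussed above.
# All frequencies and durations are assumed values, not from the paper.
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def tone(freq_hz, dur_s, amp=0.4):
    """Generate a sine tone with short linear fades to avoid clicks."""
    t = np.linspace(0, dur_s, int(SR * dur_s), endpoint=False)
    sig = amp * np.sin(2 * np.pi * freq_hz * t)
    fade = np.linspace(0, 1, int(SR * 0.01))  # 10 ms ramp
    sig[:fade.size] *= fade
    sig[-fade.size:] *= fade[::-1]
    return sig

# A rising two-note earcon, a common pattern for attention-directing alerts.
earcon = np.concatenate([tone(660, 0.15), tone(880, 0.15)])

# A hybrid alert would append a speech recording (e.g., "railroad crossing
# ahead") produced by any TTS engine; here it is a silent placeholder.
speech_placeholder = np.zeros(int(SR * 1.0))
hybrid = np.concatenate([earcon, np.zeros(int(SR * 0.1)), speech_placeholder])

with wave.open("hybrid_alert.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit PCM
    f.setframerate(SR)
    f.writeframes((hybrid * 32767).astype(np.int16).tobytes())
```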


Author(s): Malte Asendorf ◽ Moritz Kienzle ◽ Rachel Ringe ◽ Fida Ahmadi ◽ Debaditya Bhowmik ◽ ...

This paper presents Tiltification, a multimodal spirit level application for smartphones. The non-profit app was produced by students in the master's project "Sonification Apps" during the winter term 2020/21 at the University of Bremen. The app uses psychoacoustic sonification to give feedback on the device's rotation angles in two plane dimensions, allowing users to level furniture or take perfectly horizontal photos. Tiltification supplements the market of spirit level apps with the unique feature of auditory feedback, which provides additional benefit over a physical spirit level and greater accessibility for visually and cognitively impaired people. We argue that distributing sonification apps through mainstream channels helps establish sonification in the market and makes it better known to users outside the scientific domain. We hope that the auditory display community will support us by using and recommending the app and by providing feedback on its functionality and design as well as on our communication, advertising, and distribution strategy.
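As a rough illustration of the kind of mapping such an app might use (not Tiltification's actual psychoacoustic sonification design), the following Python sketch maps two tilt angles to sound: the pitch angle drives a pulse rate and the roll angle drives stereo panning, so a level device produces a steady, centred tone. All constants and mappings are assumptions for illustration.

```python
# Minimal tilt-to-sound sketch; mappings and constants are assumptions.
import numpy as np

SR = 44100

def sonify_tilt(pitch_deg, roll_deg, dur_s=0.5):
    """Return a stereo buffer: pulse rate grows with |pitch|,
    left/right balance follows the sign and size of roll."""
    t = np.linspace(0, dur_s, int(SR * dur_s), endpoint=False)
    carrier = np.sin(2 * np.pi * 440 * t)

    # Pulse faster the further the device is from level (0 deg = steady tone).
    pulse_hz = 1 + abs(pitch_deg) / 5.0
    envelope = 0.5 * (1 + np.sign(np.sin(2 * np.pi * pulse_hz * t)))

    # Pan: roll of -45 deg = hard left, +45 deg = hard right.
    pan = np.clip(roll_deg / 45.0, -1.0, 1.0)
    left = carrier * envelope * (1 - pan) / 2
    right = carrier * envelope * (1 + pan) / 2
    return np.stack([left, right], axis=1)

buffer = sonify_tilt(pitch_deg=10.0, roll_deg=-20.0)  # pulsing, panned left
```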


2021 ◽ Vol 102 ◽ pp. 04022
Author(s): William L. Martens ◽ Michael Cohen

When seated users of multimodal augmented reality (AR) systems attempt to navigate unfamiliar environments, they can become disoriented during their initial travel through a remote environment presented via the AR display technology. Even when the multimodal displays provide mutually coherent visual, auditory, and vestibular cues to the movement of seated users through a remote environment (such as a maze), users may misjudge their own orientation and position relative to their starting point and may have difficulty determining what moves to make in order to return to it. In a number of investigations using multimodal AR systems featuring real-time, servo-controlled movement of seated users, the relative contribution of spatial auditory display technology was examined across a variety of spatial navigation scenarios. The results of those investigations have implications for the effective use of the auditory component of a multimodal AR system in applications supporting spatial navigation through a physical environment.
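A highly simplified sketch of the core idea behind a spatial auditory beacon follows: the target's bearing relative to the listener's heading drives the left/right level balance, and distance drives overall gain. A real AR system would use binaural rendering (e.g., head-related transfer functions) rather than this plain panning rule; the function and its parameters are illustrative only.

```python
# Illustrative spatial-audio beacon: bearing drives interaural level
# difference, distance drives gain. Not the cited system's implementation.
import math

def beacon_gains(listener_xy, listener_heading_deg, target_xy):
    """Return (left_gain, right_gain) for a sound marking the target."""
    dx = target_xy[0] - listener_xy[0]
    dy = target_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) - listener_heading_deg
    bearing = (bearing + 180) % 360 - 180        # wrap to [-180, 180)

    pan = max(-1.0, min(1.0, bearing / 90.0))    # +/-90 deg = fully lateral
    gain = 1.0 / (1.0 + distance)                # simple distance roll-off
    return gain * (1 - pan) / 2, gain * (1 + pan) / 2

# Target 5 m ahead and to the right of a listener facing north:
print(beacon_gains((0, 0), 0.0, (3, 4)))
```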


2020 ◽ Vol 10 (1)
Author(s): Benjamin O’Brien ◽ Romain Hardouin ◽ Guillaume Rao ◽ Denis Bertin ◽ Christophe Bourdin

Abstract Based on a previous study that demonstrated the beneficial effects of sonification on cycling performance, this study investigated which kinematic and muscular activities were changed to pedal effectively. An online error-based sonification strategy was developed such that, when negative torque was applied to a pedal, a squeak sound was produced in real time in the corresponding headphone channel. Participants completed four 6-min cycling trials with resistance values associated with their first ventilatory threshold. A different auditory display condition was used for each trial (Silent, Right, Left, Stereo), with sonification presented only for 20 s at the start of minutes 1, 2, 3, and 4. Joint kinematics and the muscular activities of 10 right-leg muscles were recorded simultaneously. Our results showed that participants pedalled more effectively when presented with sonification, which is consistent with previously reported findings. Compared with the Silent condition, sonification significantly limited ankle and knee joint ranges of motion and reduced muscular activations. These findings suggest that performance-based sonification led participants to reduce the complexity of the task by altering the coordination of their degrees of freedom. By making these changes to their movement patterns, participants improved their cycling performance despite lower joint ranges of motion and muscular activations.
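The error-based strategy described above can be sketched in a few lines of Python: whenever the sampled torque at a pedal is negative, a short squeak burst is written into the headphone channel for that side. The squeak synthesis, frame rate, and example torque values are assumptions for illustration; the study's actual audio engine is not described here.

```python
# Hedged sketch of error-based sonification: negative pedal torque triggers
# a squeak in the corresponding stereo channel. Sample data are illustrative.
import numpy as np

SR = 44100
FRAME = 0.01  # torque assumed sampled every 10 ms

def squeak(dur_s=0.01):
    """Short noisy high-pitched burst standing in for the squeak sound."""
    t = np.linspace(0, dur_s, int(SR * dur_s), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * 2500 * t) * np.random.uniform(0.5, 1.0, t.size)

def sonify_torque(left_torque, right_torque):
    """left_torque/right_torque: 1-D arrays of per-frame pedal torque (N·m).
    Returns a stereo buffer with squeaks wherever torque is negative."""
    n = int(SR * FRAME * len(left_torque))
    out = np.zeros((n, 2))
    for i, (lt, rt) in enumerate(zip(left_torque, right_torque)):
        start = int(i * SR * FRAME)
        burst = squeak(FRAME)
        if lt < 0:
            out[start:start + burst.size, 0] += burst  # left channel
        if rt < 0:
            out[start:start + burst.size, 1] += burst  # right channel
    return out

# Example: the right pedal drags (negative torque) on two frames.
audio = sonify_torque(np.array([5.0, 4.0, 6.0]), np.array([3.0, -1.0, -0.5]))
```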


2020 ◽ Vol 125 (5) ◽ pp. 826-834
Author(s): Estrella Paterson ◽ Penelope M. Sanderson ◽ Isaac S. Salisbury ◽ Felicity P. Burgmann ◽ Ismail Mohamed ◽ ...

2020 ◽ Vol 21 (1)
Author(s): Mark D. Temple

Abstract
Background: This paper describes a web-based tool that uses a combination of sonification and an animated display to inquire into the SARS-CoV-2 genome. The audio data is generated in real time from a variety of RNA motifs that are known to be important in the functioning of RNA. Additionally, metadata relating to RNA translation and transcription has been used to shape the auditory and visual displays. Together these tools provide a unique approach to further understanding the metabolism of the viral RNA genome. The audio offers a further means of representing the function of the RNA, in addition to traditional written and visual approaches.
Results: Sonification of the SARS-CoV-2 genomic RNA sequence results in a complex auditory stream composed of up to 12 individual audio tracks. Each auditory motive is derived from the actual RNA sequence or from metadata. This approach has been used to represent transcription or translation of the viral RNA genome, and the display highlights the real-time interaction of functional RNA elements. The sonification of codons derived from all three reading frames of the viral RNA sequence, in combination with sonified metadata, provides the framework for this display. Functional RNA motifs such as transcription regulatory sequences and stem-loop regions have also been sonified. Using the tool, audio can be generated in real time from either genomic or sub-genomic representations of the RNA. Given the large size of the viral genome, a collection of interactive buttons has been provided to navigate to regions of interest, such as cleavage regions in the polyprotein, untranslated regions, or individual genes. These tools are available through an internet browser, and the user can interact with the data display in real time.
Conclusion: The auditory display, in combination with real-time animation of the processes of translation and transcription, provides a unique insight into the large body of evidence describing the metabolism of the RNA genome. Furthermore, the tool has been used as an algorithmic audio generator. These audio tracks can be listened to by the general community without reference to the visual display, to encourage further inquiry into the science.
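As a hedged illustration of the general technique (not the tool's actual musical mapping or metadata layers), the following Python sketch walks one reading frame of an RNA string and converts each codon into a short tone, producing a single audio track of the kind that the full display layers up to twelve deep. The codon-to-pitch rule and the toy sequence are arbitrary stand-ins.

```python
# Minimal codon sonification sketch; the pitch mapping is an assumption.
import numpy as np

SR = 22050
BASES = "ACGU"

def codon_to_freq(codon):
    """Hash a codon into one of 12 semitones above A3 (220 Hz)."""
    index = sum(BASES.index(b) * 4 ** i for i, b in enumerate(codon)) % 12
    return 220.0 * 2 ** (index / 12)

def sonify_frame(rna, frame=0, note_s=0.2):
    """Return an audio buffer for one reading frame of an RNA string."""
    t = np.linspace(0, note_s, int(SR * note_s), endpoint=False)
    notes = []
    for i in range(frame, len(rna) - 2, 3):
        codon = rna[i:i + 3]
        notes.append(0.3 * np.sin(2 * np.pi * codon_to_freq(codon) * t))
    return np.concatenate(notes)

track = sonify_frame("AUGGGUUUACGUAGC", frame=0)  # toy sequence, not SARS-CoV-2
```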



