Proceedings of the 23rd International Conference on Auditory Display - ICAD 2017

Published by the International Community for Auditory Display
ISBN: 096709044X

Author(s): Ruta R. Sardesai, Thomas M. Gable, Bruce N. Walker

Using auditory menus on a mobile device has been studied in depth with standard flicking, as well as wheeling and tapping interactions. Here, we introduce and evaluate a new type of interaction with auditory menus, intended to speed up movement through a list. This multimodal "sliding index" was compared to the standard flicking interaction on a phone while the user was also engaged in a driving task. The sliding index was found to require less mental workload than flicking. Moreover, the way participants used the sliding index technique modulated their preferences, including their reactions to the presence of audio cues. Follow-on work should study how use of the sliding index evolves with practice.
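
To make the interaction concrete, here is a minimal sketch under assumed details (the menu contents, function names, and section mapping are hypothetical, not the authors' implementation): a sliding index maps a continuous touch position along the screen edge to alphabetical sections of a list, emitting an audio cue whenever the finger crosses into a new section.

```python
# Hypothetical sliding-index sketch: touch position in [0, 1] selects an
# alphabetical section; a cue fires on each section boundary crossing.

from string import ascii_uppercase

MENU = ["Adele", "Beck", "Bjork", "Coldplay", "Drake", "Eminem", "Muse", "Zappa"]

def section_for(position: float) -> str:
    """Map a normalized touch position (0.0-1.0) to a letter section."""
    idx = min(int(position * 26), 25)
    return ascii_uppercase[idx]

def slide(positions):
    """Simulate a slide gesture; announce a cue on each section change."""
    current = None
    for pos in positions:
        letter = section_for(pos)
        if letter != current:
            current = letter
            matches = [m for m in MENU if m.startswith(letter)]
            # In a real system this would trigger a spoken or earcon cue.
            print(f"tick: '{letter}' ({len(matches)} entries)")

slide([0.02, 0.05, 0.06, 0.31, 0.48])   # announces A, B, I, M
```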


Author(s): Marlene Mathew, Mert Cetinkaya, Agnieszka Roginska

Brain-computer interface (BCI) methods have received a lot of attention in the past several decades, owing to the exciting possibility of computer-aided communication with the outside world. Most BCIs allow users to control an external entity, such as a game, prosthetic, or musical output, or are used for offline medical diagnostic processing. BCIs that provide neurofeedback usually categorize brainwaves into mental states for the user to interact with; direct interaction with raw brainwaves is rarely available in popular BCIs, and where it is, the user must pay for or complete an additional process to gain access. BSoniq is a multi-channel interactive neurofeedback installation that allows real-time sonification and visualization of electroencephalogram (EEG) data, which provides multivariate information about human brain activity. Here, a multivariate event-based sonification is proposed, using 3D spatial location to provide cues about particular events. With BSoniq, users can listen to the sounds (raw brainwaves) emitted from their brain or parts of their brain and perceive their own brainwave activity in a 3D spatialized surrounding, giving them the sense of being inside their own heads.
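
An event-based, spatialized rendering of this kind might look like the following rough sketch (the sample rate, threshold, and channel-to-azimuth layout are assumptions for illustration, not BSoniq's actual design): per-channel EEG events are flagged by amplitude threshold, and each channel is assigned a direction, here simplified from full 3D to constant-power stereo panning.

```python
# Assumed-detail sketch: threshold-based EEG event detection per channel,
# with each channel's cue panned to a distinct azimuth.

import numpy as np

fs = 256                       # EEG sample rate (Hz), assumed
n_channels = 4
rng = np.random.default_rng(0)
eeg = rng.normal(0, 10e-6, (n_channels, fs * 2))   # 2 s of synthetic data (volts)

azimuths = np.linspace(-90, 90, n_channels)        # one direction per channel

def pan_gains(azimuth_deg):
    """Constant-power pan: -90 = hard left, +90 = hard right."""
    theta = (azimuth_deg + 90) / 180 * np.pi / 2
    return np.cos(theta), np.sin(theta)

threshold = 25e-6              # event threshold (volts), assumed
for ch in range(n_channels):
    events = np.flatnonzero(np.abs(eeg[ch]) > threshold)
    for sample in events[:3]:                      # first few events only
        left, right = pan_gains(azimuths[ch])
        print(f"ch{ch} event at {sample/fs:.2f}s -> gains L={left:.2f} R={right:.2f}")
```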


Author(s): Takahiko Tsuchiya, Jason Freeman

Auditory-display research faces a largely unsolved challenge in balancing functional and aesthetic considerations. While functional designs tend to sacrifice musical expressivity for fidelity to the data, aesthetic or musical sound organization arguably has the potential to represent multi-dimensional or hierarchical data structures with enhanced perceptibility. Existing musical designs, however, generally employ nonlinear or interpretive mappings that hinder the assessment of functionality. The authors propose a framework for designing expressive and complex sonifications using small-timescale musical hierarchies, such as harmonic and timbral structures, while maintaining data integrity: a machine listener performs descriptive analysis to ensure close-to-the-original recovery of the encoded data.
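
The encode/decode loop at the heart of this idea can be illustrated with a toy example (all details below are assumptions, not the authors' system): a data value sets the frequency of one partial within a small harmonic structure, and a simple "machine listener" recovers it by FFT peak-picking, so recovery error can be measured against the original.

```python
# Toy encode/decode sketch: one data value -> one partial's frequency,
# recovered by spectral peak-picking.

import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs

def encode(value, lo=220.0, hi=880.0):
    """Map value in [0, 1] to the frequency of a partial over a 220 Hz root."""
    f = lo + value * (hi - lo)
    return 0.5 * np.sin(2 * np.pi * 220.0 * t) + 0.5 * np.sin(2 * np.pi * f * t), f

def decode(signal, lo=220.0, hi=880.0):
    """Machine listener: find the strongest peak above the root, invert the map."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    mask = freqs > 240.0                    # ignore the fixed 220 Hz root
    f = freqs[mask][np.argmax(spectrum[mask])]
    return (f - lo) / (hi - lo)

original = 0.37
audio, f_encoded = encode(original)
recovered = decode(audio)
print(f"encoded {original:.3f} at {f_encoded:.1f} Hz, recovered {recovered:.3f}")
```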


Author(s): Jon Bellona, Lin Bai, Luke Dahl, Amy LaViers

Since people often communicate internal states and intentions through movement, robots can interact with humans better if they too can modify their movements to communicate changing state. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed "expressive." However, robot hardware, which cannot recreate the same range of dynamics as human limbs, often limits expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents an application for synthesizing sounds that match various movement qualities. Its design is based on an empirical study analyzing sound and movement qualities, where movement qualities are parametrized according to Laban's Effort System. Our results suggest a number of correspondences between movement qualities and sound qualities. These correspondences are presented here and discussed within the context of designing movement-quality-to-sound-quality mappings in our sound synthesis application. The application will be used in future work testing user perceptions of expressive movements with synchronous sounds.
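
For illustration only, the sketch below shows what an Effort-to-synthesis mapping could look like; the specific correspondences are placeholders, not the paper's empirical findings.

```python
# Hypothetical mapping from Laban Effort factors (each in [-1, 1]) to
# rough synthesis controls. The correspondences are illustrative.

from dataclasses import dataclass

@dataclass
class Effort:
    time: float    # sustained (-1) .. sudden (+1)
    weight: float  # light (-1) .. strong (+1)
    space: float   # indirect (-1) .. direct (+1)
    flow: float    # free (-1) .. bound (+1)

def to_synth_params(e: Effort) -> dict:
    """Translate Effort qualities into basic synthesis parameters."""
    return {
        "attack_ms":     200 - 190 * (e.time + 1) / 2,   # sudden -> short attack
        "gain_db":       -24 + 18 * (e.weight + 1) / 2,  # strong -> louder
        "brightness":    0.2 + 0.8 * (e.space + 1) / 2,  # direct -> brighter
        "vibrato_depth": 0.5 * (1 - e.flow) / 2,         # free -> more vibrato
    }

# A "punch" is sudden, strong, and direct (flow set to bound here):
punch = Effort(time=1.0, weight=1.0, space=1.0, flow=1.0)
print(to_synth_params(punch))   # short attack, loud, bright, no vibrato
```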


Author(s): Steven Landry, Myounghoon Jeon

Given that embodied interaction is widespread in human-computer interaction, interest in the importance of body movements and emotions is gradually increasing. The present paper describes our process of designing and testing a dancer sonification system using a participatory design methodology. The end goal of the dancer sonification project is to have dancers generate aesthetically pleasing music in real time based on their dance gestures, instead of dancing to prerecorded music. The generated music should reflect both the kinetic activity and the affective content of the dancer's movement. To accomplish these goals, expert dancers and musicians were recruited as domain experts in affective gesture and auditory communication. Much of the dancer sonification literature focuses exclusively on describing the final performance piece or the techniques used to process motion data into auditory control parameters; this paper instead focuses on the methods we used to identify, select, and test the most appropriate motion-to-sound mappings for a dancer sonification system.


Author(s): Adrian Jäger, Aristotelis Hadjakos

Navigation in audio-only first-person adventure games is challenging, since players must rely exclusively on their sense of hearing to localize game objects and navigate the virtual world. In this paper we report observations made during the iterative design process for such a game, along with the results of the final evaluation. In particular, we argue for providing a sufficient number of unique sound sources: players do not navigate using a mental map of the virtual place but instead move from sound source to sound source in a more linear fashion.
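
The beacon-style rendering such navigation relies on can be sketched as follows (the mechanics are assumed for illustration, not taken from the game): each sound source gets a stereo pan from its bearing relative to the player's heading and a gain from its distance, which is what lets players hop from one unique source to the next.

```python
# Assumed beacon rendering: bearing -> stereo pan, distance -> gain.

import math

def render_beacon(player_xy, heading_deg, source_xy):
    """Return (left_gain, right_gain) for one sound source."""
    dx = source_xy[0] - player_xy[0]
    dy = source_xy[1] - player_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) - heading_deg   # 0 = dead ahead
    pan = max(-90.0, min(90.0, bearing))                       # clamp to front arc
    theta = (pan + 90) / 180 * math.pi / 2
    gain = 1.0 / (1.0 + distance)                              # simple rolloff
    return gain * math.cos(theta), gain * math.sin(theta)

# A door chime ahead-right of a north-facing player:
print(render_beacon((0, 0), 0, (2, 2)))   # louder in the right ear
```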


Author(s): Brianna J. Tomlinson, R. Michael Winters, Christopher Latina, Smruthi Bhat, Milap Rane, ...

Informal learning environments (ILEs) such as museums incorporate multi-modal displays into their exhibits as a way to engage a wider group of visitors, often relying on tactile, audio, and visual means to do so. Planetariums, however, represent a type of ILE in which a single, highly visual presentation modality is used to entertain, inform, and engage a large group of users in a passive viewing experience. Recently, auditory displays have been used as a supplement or even an alternative to visual presentation of astronomy concepts, though there has been little evaluation of those displays. Here, we designed an auditory model of the solar system and created a planetarium show, which was later presented at a local science center. Attendees evaluated the show on the helpfulness, interest, pleasantness, understandability, and relatability of the sounds' mappings. Overall, attendees rated the solar system and planetary details very highly, and also provided open-ended responses about their experience as a whole.
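
One possible parameter mapping in the same spirit (illustrative only; the show's actual mappings may differ) renders each planet as a repeating tone whose pulse rate follows its orbital period and whose pitch follows its size.

```python
# Illustrative solar-system mapping: shorter orbit -> faster pulse,
# larger planet -> lower pitch.

PLANETS = {                # (orbital period in Earth years, diameter in km)
    "Mercury": (0.24, 4879),
    "Earth":   (1.00, 12742),
    "Jupiter": (11.86, 139820),
    "Neptune": (164.8, 49244),
}

def mapping(period_years, diameter_km):
    beats_per_min = 60.0 / period_years ** 0.5       # shorter orbit -> faster pulse
    pitch_hz = 2000.0 * (4879 / diameter_km) ** 0.5  # bigger planet -> lower pitch
    return beats_per_min, pitch_hz

for name, (period, diameter) in PLANETS.items():
    bpm, hz = mapping(period, diameter)
    print(f"{name:8s} {bpm:6.1f} pulses/min at {hz:6.0f} Hz")
```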


Author(s): Joseph W. Newbold, Nadia Bianchi-Berthouze, Nicolas E. Gold

Physical activity is important for a healthy lifestyle. However, it can be hard to stay engaged with exercise, which often leads to avoidance. Sonification has been used to support physical activity through the optimisation and correction of movement. Though previous work has shown how sonification can improve movement execution and motivation, the specific mechanisms of motivation have yet to be investigated in the context of challenging exercises. We investigate the role of musical expectancy as a way to leverage people's implicit and embodied understanding of music within movement sonification, providing information on technique while also motivating continuation of the movement and rewarding its completion. The paper presents two studies showing how this musically informed sonification can be used to support the squat movement. The results show how musical expectancy affected people's perception of their own movement, in terms of reward and motivation, as well as the way in which they moved.
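
The expectancy mechanism can be sketched as follows (the chord choices and thresholds are illustrative assumptions, not the studies' stimuli): squat depth steps through a cadence that builds harmonic tension, and the tonic resolution sounds only when the full movement is completed.

```python
# Assumed-detail sketch: squat depth drives a cadence; the resolution is
# withheld unless the full range of movement is completed.

PROGRESSION = ["I", "IV", "V", "I (resolution)"]   # tension builds toward V

def sonify_squat(depths):
    """depths: normalized squat depth samples in [0, 1], 1 = full depth."""
    max_stage = -1
    for d in depths:
        stage = min(int(d * 3), 2)                 # I, IV, V while descending
        if stage > max_stage:
            max_stage = stage
            print(f"depth {d:.2f} -> {PROGRESSION[stage]}")
    if max(depths) >= 0.95 and depths[-1] <= 0.05:  # went down and came back up
        print("movement completed -> " + PROGRESSION[3])
    else:
        print("movement incomplete -> cadence left unresolved on V")

sonify_squat([0.1, 0.4, 0.7, 1.0, 0.6, 0.0])       # full squat: resolves
sonify_squat([0.1, 0.4, 0.6])                      # shallow squat: no resolution
```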


Author(s): Daniel Verona, S. Camille Peres

Historically, many sonification designs used for data analysis have been based on data characteristics rather than explicitly on the listener's task, and such designs have often been described as annoying, confusing, or fatiguing. In the absence of a generally accepted theoretical framework for sonification design, there is a need both for improvements in sonification design and for empirical evaluation of task-based designs. This research focuses on surface electromyography (sEMG) sonification and two sEMG data analysis tasks: determining which of two muscles contracts first, and which of two muscles exhibits the higher exertion level. Both tasks were analyzed using a task analysis technique known as GOMS (Goals, Operators, Methods, Selection Rules), and two sonification designs were created based on the results of these task analyses. Two Data-based sEMG sonification designs were then taken from the sEMG sonification literature, and the four designs (two Task-based and two Data-based) were empirically compared. Significant effects of sonification design on listener performance were found, with listeners scoring more accurately using the Task-based designs. Based on these results, we argue for wider application of task analysis methods to sonification design and for their inclusion in a generally accepted theoretical framework for sonification design.
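
A task-oriented rendering of the two analysis tasks might look like the following sketch (the signal model and design details are assumed, not taken from the paper): each muscle's smoothed envelope gets its own pitch cue, with onset time and peak loudness exposing exactly what each task asks for, onset order and relative exertion.

```python
# Assumed sketch: two synthetic sEMG channels; onset order and relative
# exertion are extracted from smoothed envelopes for sonification.

import numpy as np

fs = 1000                                   # sEMG sample rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

def burst(onset_s, level):
    """Rectified noise burst standing in for a muscle contraction."""
    env = level * (t > onset_s)
    return np.abs(rng.normal(0, 1, t.size)) * env

muscles = {"biceps": burst(0.40, 1.0), "triceps": burst(0.65, 1.6)}

def smooth(x, win=50):
    return np.convolve(x, np.ones(win) / win, mode="same")

for name, emg in muscles.items():
    env = smooth(emg)
    onset = t[np.argmax(env > 0.2)]         # first threshold crossing
    print(f"{name:8s} pitch cue starts at {onset:.2f}s, "
          f"peak loudness ~ {env.max():.2f} (relative exertion)")
```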


Author(s): Joseph J. Schlesinger, Elizabeth Reynolds, Brittany Sweyer, Alyna Pradhan

Free-field auditory medical alarms, although ubiquitous in intensive care units, create many hazards for both patients and clinicians. The harsh noise profile of the alarms, combined with the frequency at which they sound throughout the ICU, causes discomfort for patients and contributes to psychological problems such as PTSD and delirium. The frequency-selective silencing device presented here seeks to mitigate these problems by removing alarm sounds from the patient's perspective. Patients do not need to hear these alarms, which primarily serve to alert clinicians; therefore the device, built on a Raspberry Pi with digital filters, removes the alarm sounds present in the environment while transmitting other sounds to the patient without distortion. This allows patients to hear everything occurring around them and to communicate effectively without experiencing the negative consequences of audible alarms.
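
The filtering idea can be sketched with a few cascaded notch filters (the alarm frequencies and filter parameters below are assumptions for illustration, not the device's actual design): narrow band-stop filters remove an alarm's fundamental and harmonics while leaving the rest of the signal largely untouched.

```python
# Assumed-parameter sketch: cascaded IIR notch filters remove an alarm
# tone and its harmonics from a mixed signal.

import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 16000                                       # audio sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
alarm = 0.5 * np.sin(2 * np.pi * 960 * t)        # e.g., a 960 Hz alarm tone
speech_band = 0.5 * np.sin(2 * np.pi * 300 * t)  # stand-in for speech energy
mix = alarm + speech_band

filtered = mix
for harmonic in (960, 1920, 2880):               # notch fundamental + harmonics
    b, a = iirnotch(w0=harmonic, Q=30, fs=fs)
    filtered = lfilter(b, a, filtered)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(f"signal RMS with alarm: {rms(mix):.3f} -> after notching: {rms(filtered):.3f}")
```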

