Proceedings of the 22nd International Conference on Auditory Display - ICAD 2016
Total documents: 33
H-index: 2
Published by the International Community for Auditory Display
ISBN: 0967090431

Author(s): Ridwan A. Khan, Ram K. Avvari, Katherine Wiykovics, Pooja Ranay, Myounghoon Jeon

Memorable life events are important in forming one's present self-image. Looking back on these memories provides an opportunity to ruminate on the meaning of life and to envision the future. Integrating the life-log concept with auditory graphs, we have implemented a mobile application, "LifeMusic", which helps people reflect on their memories by listening to a sonification of their life events that is synchronized with those memories. Reflecting on life events through LifeMusic can offer users relief from the present and take them on a journey to past moments, helping them keep their emotions in balance in present life. In this paper, we describe the implementation and workflow of LifeMusic and briefly discuss focus group results, improvements, and future work.
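As an illustration of the auditory-graph idea, the Python sketch below maps each life-log entry's date to an onset time and its emotional valence to pitch. The event data and the mapping itself are invented for illustration; the abstract does not specify LifeMusic's actual design.

```python
from datetime import date

# Hypothetical life-log entries: (date, emotional valence in [-1, 1], label).
EVENTS = [
    (date(2010, 6, 1), 0.9, "graduation"),
    (date(2012, 3, 15), -0.4, "moved cities"),
    (date(2015, 9, 30), 0.6, "started new job"),
]

def auditory_graph(events, base_midi=60, pitch_span=24, seconds_per_year=2.0):
    """Map each event to (onset_seconds, MIDI_pitch, label): the time axis
    follows the calendar, the pitch axis follows emotional valence."""
    t0 = min(d for d, _, _ in events)
    notes = []
    for d, valence, label in sorted(events):
        onset = (d - t0).days / 365.25 * seconds_per_year
        pitch = base_midi + round(valence * pitch_span / 2)
        notes.append((onset, pitch, label))
    return notes

print(auditory_graph(EVENTS))
```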


Author(s): Fiore Martin, Oussama Metatla, Nick Bryan-Kinns, Tony Stockman

This paper presents the Accessible Spectrum Analyser (ASA), developed as part of the DePic project (Design Patterns for Inclusive collaboration) at Queen Mary University of London. The ASA uses sonification to provide an accessible representation of frequency spectra to visually impaired audio engineers. The software is free and open source and is distributed as a VST plug-in for OSX and Windows. The aim of reporting this work at the ICAD 2016 conference is to solicit feedback on the design of the present tool and its more generalized counterpart, and to invite ideas for other possible applications where auditory spectral analysis may be useful, for example in situations where line of sight is not always possible.
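By way of illustration, one simple way to render a magnitude spectrum audibly is to sweep through the FFT bins over a few seconds, mapping each bin's frequency to the pitch of a sine tone and its magnitude to loudness. The sketch below is only a guess at this family of mappings, not the ASA's actual design.

```python
import numpy as np

def sonify_spectrum(signal, fs, sweep_s=2.0, out_fs=44100):
    """Sweep left-to-right through the FFT bins of `signal`, mapping each
    bin's frequency to the pitch of a sine tone and its magnitude to
    loudness. Returns `sweep_s` seconds of audio at `out_fs` Hz."""
    mag = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mag /= mag.max() if mag.max() > 0 else 1.0

    n = int(sweep_s * out_fs)
    bin_idx = np.arange(n) * len(mag) // n                # bin under the "cursor"
    pitch = 200.0 + 2000.0 * freqs[bin_idx] / freqs[-1]   # audible pitch sweep
    phase = 2 * np.pi * np.cumsum(pitch) / out_fs         # integrate frequency
    return mag[bin_idx] * np.sin(phase)
```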


Author(s): Amit Barde Brock, Matt Ward, William S. Helton, Mark Billinghurst

Attention redirection trials were carried out using a wearable interface incorporating auditory and visual cues. Visual cues were delivered via the screen of the Recon Jet, a wearable computer resembling a pair of glasses, while auditory cues were delivered over a bone conduction headset. Cueing conditions included the delivery of auditory and visual cues individually and in combination. Results indicate that the use of an auditory cue drastically decreases target acquisition times, especially for targets that fall outside the visual field of view. While auditory cues showed no difference when paired with any of the visual cueing conditions for targets within the user's field of view, a significant improvement in performance was observed for targets outside it. The static visual cue paired with the binaurally spatialised, dynamic auditory cue appeared to provide the best performance of all the cueing conditions. In the absence of a visual cue, the binaurally spatialised, dynamic auditory cue performed best.
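For readers unfamiliar with binaural spatialisation, the sketch below approximates it with interaural time and level differences (the Woodworth ITD model plus a crude ILD). The study itself presumably used a more complete HRTF-based renderer, so treat this as illustrative only.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a common average

def spatialise(mono, fs, azimuth_deg):
    """Pan a mono cue toward `azimuth_deg` (0 = straight ahead, positive =
    to the right) using the Woodworth ITD model plus a crude level difference."""
    az = np.radians(azimuth_deg)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * fs))                 # interaural delay in samples
    ild = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)  # up to ~6 dB attenuation

    near = mono.astype(float)
    far = np.concatenate([np.zeros(delay), near])[: len(near)] * ild
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)       # (n_samples, 2) stereo
```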


Author(s): Ryan McGee, David Rogers

Seismic events are physical vibrations induced in the earth's crust that follow the general wave equation, making seismic data naturally conducive to audification. Simply increasing the playback rate of seismic recordings and rescaling the amplitude values to match those of digital audio samples (straight audification) can produce eerily realistic door-slamming and explosion sounds. While others have produced a plethora of such audifications for international seismic events (i.e., earthquakes), the resulting sounds, while distinct to the trained auditory scientist, often lack enough variety to yield multiple instrumental timbres for the creation of engaging music for the public. This paper discusses approaches to sonification processing toward the eventual musification of seismic data, beginning with straight audification and resulting in several musical compositions and new-media installations containing a variety of seismically derived timbres.
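Straight audification, as described above, amounts to reinterpreting the seismic trace at an audio sampling rate and normalizing its amplitude. A minimal sketch, assuming the trace is already loaded as a NumPy array:

```python
import numpy as np
from scipy.io import wavfile

def audify(trace, out_path="quake.wav", audio_rate=44100):
    """Straight audification: write seismic samples out at an audio rate.
    A trace recorded at, say, 100 Hz then played back at 44.1 kHz runs
    ~441x faster, shifting the vibrations up into the audible range."""
    x = trace.astype(np.float64)
    x -= x.mean()                  # remove DC offset
    peak = np.max(np.abs(x))
    if peak > 0:
        x /= peak                  # rescale to full-scale digital audio
    wavfile.write(out_path, audio_rate, (x * 32767).astype(np.int16))
```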


Author(s): Anna Bramwell-Dicks, Helen Petrie, Alistair Edwards

Music psychologists have frequently shown that music affects people's behaviour. Applying this concept to work-related computing tasks has the potential to improve a person's productivity, efficiency and effectiveness. This paper presents two quantitative experiments exploring whether transcription typing performance is affected by a musical accompaniment that includes vocals. The first experiment showed that classifying the typists as either slow or fast is important, as there were significant interaction effects once this between-groups factor was included, with the accuracy of fast typists reduced when the music contained vocals. In the second experiment, a Dutch transcription typing task was added to manipulate task difficulty, and playback volume was included as a between-groups independent variable. When typing in Dutch, the fast typists' speed was reduced with louder music. When typing in English, the volume of the music had little effect on typing speed for either the fast or the slow typists. The fast typists achieved lower speeds when loud music contained vocals, but with low-volume music the inclusion of vocals in the background music did not have a noticeable effect on typing speed. The presence of vocals in the music reduced the accuracy of text entry across the whole sample. Overall, these experiments show that the presence of vocals in background music reduces typing performance, but that instrumental music, at either low or high volume, might be exploited to improve performance in tasks involving typing.


Author(s): Teresa Marie Connors

In this paper, I offer a perspective on a creative research practice I have come to term Ecological Performativity. This practice has evolved from a number of non-linear audiovisual installations that are intrinsically linked to geographical and everyday phenomena. The project is situated in ecological discourse and seeks to explore conditions and methods of co-creative processes derived from an intensive data-gathering procedure and immersion within the respective environments. The techniques explored through this research include computer vision, data sonification, live convolution and improvisation as means to engage the agency of material and thus construct non-linear audiovisual installations. To contextualize this research, I have recently reoriented my practice within recent critical, theoretical and philosophical discourses emerging in the humanities, sciences and social sciences, generally referred to as 'the nonhuman turn'. These trends provide a reassessment of the assumptions that have defined our understanding of the geo-conjunctures that make up life on earth and, as such, challenge the long-standing narrative of human exceptionalism. It is out of this reorientation that the practice of Ecological Performativity has evolved.


Author(s): John Dyer, Paul Stapleton, Matthew Rodger

Here we report early results from an experiment designed to investigate the use of sonification for learning a novel perceptual-motor skill. We find that sonification employing melody is more effective than a strategy that provides only bare timing information. We additionally show that it may be possible to 'refresh' learning after performance has waned following training, through passive listening to the sound that would be produced by perfect performance. Implications of these findings are discussed in terms of general motor performance enhancement and sonic feedback design.
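As a concrete illustration of a melodic movement sonification (the paper's actual mapping is not given in the abstract), one might quantize the learner's normalized position to notes of a pentatonic scale, so the trajectory is heard as a melody rather than as bare timing clicks:

```python
import numpy as np

# C major pentatonic across one octave, as MIDI note numbers.
PENTATONIC_MIDI = np.array([60, 62, 64, 67, 69, 72])

def melody_from_trajectory(position):
    """Quantize normalized positions in [0, 1] to pentatonic scale notes,
    one note per sampled instant of the movement."""
    idx = np.rint(np.asarray(position) * (len(PENTATONIC_MIDI) - 1)).astype(int)
    idx = np.clip(idx, 0, len(PENTATONIC_MIDI) - 1)
    return PENTATONIC_MIDI[idx]
```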


Author(s): Greg Schiemer

This paper describes an approach to sonification based on an iPhone app created for multiple users to explore a microtonal scale generated from harmonics using the combination product set method devised by tuning theorist Erv Wilson. The app is intended for performance by a large consort of hand-held mobile phones played collaboratively in a shared listening space. Audio consisting of handbells and sine tones is synthesised independently on each phone, and sound projection from each phone relies entirely on venue acoustics, unaided by mains-powered amplification. The app was designed to perform a microtonal composition called Transposed Dekany, which takes the form of a chamber concerto in which a consort of players explores the properties of a microtonal scale. The consort subdivides into families of instruments that play in different pitch registers, assisted by processes that are enabled and disabled at various stages throughout the performance. The paper outlines Wilson's method, describes its current implementation and considers hypothetical sonification scenarios using different data, with potential applications in the physical world.
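Wilson's combination product set construction is compact enough to sketch: take the products of all k-element subsets of a set of harmonics and octave-reduce them. Choosing 2 of 5 odd harmonics yields the ten tones of a dekany (whether these are the exact factors used in Transposed Dekany is an assumption):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def cps_scale(factors=(1, 3, 5, 7, 9), choose=2):
    """Combination product set: products of all `choose`-element subsets
    of `factors`, octave-reduced to ratios in [1, 2)."""
    ratios = set()
    for combo in combinations(factors, choose):
        r = Fraction(prod(combo))
        while r >= 2:          # octave reduction
            r /= 2
        ratios.add(r)
    return sorted(ratios)

# C(5, 2) = 10 tones: a dekany.
print(cps_scale())  # ten octave-reduced ratios: 35/32, 9/8, 5/4, 21/16, ...
```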


Author(s): Jim Murphy, Dugal McKinnon, Mo H. Zareei

Lost Oscillations is a spatio-temporal sound art installation that allows users to explore the past and present of a city's soundscape. Participants are positioned at the center of an octophonic speaker array; situated in the middle of the array is a touch-sensitive user interface, a stylized representation of a map of Christchurch, New Zealand, with electrodes placed throughout the map. Upon touching an electrode, one of many sound recordings made at the electrode's real-world location is chosen and played; users must stay in contact with the electrode for the sound to continue playing, demanding commitment from users as they explore the soundscape. The recordings were chosen to represent Christchurch's development throughout its history, allowing participants to explore the evolution of the city from the early 20th century through to its post-earthquake reconstruction. This paper discusses the motivations for Lost Oscillations before presenting the installation's design, development and presentation.
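The hold-to-play interaction can be summarized in a few lines. The electrode sites, file names and the `audio.play` playback API below are all placeholders, not details from the installation itself:

```python
import random

# Placeholder map from electrode ID to archival recordings made at that
# real-world Christchurch location (sites and file names are invented).
RECORDINGS = {
    "cathedral_square": ["cathedral_1930s.wav", "cathedral_2012.wav"],
    "new_brighton_pier": ["pier_1960s.wav", "pier_2015.wav"],
}

class Electrode:
    """Hold-to-play: a recording starts on touch and stops on release."""

    def __init__(self, site, audio):
        self.site = site
        self.audio = audio   # assumed playback backend with play()/stop()
        self.voice = None

    def on_touch(self):
        clip = random.choice(RECORDINGS[self.site])  # one of many per site
        self.voice = self.audio.play(clip)

    def on_release(self):
        if self.voice is not None:
            self.voice.stop()
            self.voice = None
```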


Author(s): S. Camille Peres, Daniel Verona

This paper presents a brief description of surface electromyography (sEMG), what it can be used for, and some of the problems associated with visual displays of sEMG data. Sonifications of sEMG data have shown potential for certain applications in data monitoring and movement training; however, there are still challenges related to the design of these sonifications that need to be addressed. Our previous research has shown that different sonification designs result in better listener performance on different sEMG evaluation tasks (e.g., identifying muscle activation time vs. muscle exertion level). Based on this finding, we speculate that sonifications may benefit from being designed to be task-specific, and that integrating a task analysis into the sonification design process may help sonification designers identify intuitive and meaningful designs. This paper presents a brief introduction to task analysis, provides an example of how a task analysis can inform sonification design, and outlines future research into a task-analysis-based approach to sonification design.
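To make the task-specific point concrete, the sketch below derives an RMS envelope from a raw sEMG trace and then offers two hypothetical mappings: one suited to judging exertion level (continuous pitch) and one suited to judging activation timing (discrete clicks at onsets). Neither is taken from the paper.

```python
import numpy as np

def rms_envelope(semg, fs, win_ms=50.0):
    """RMS envelope of a raw sEMG trace sampled at `fs` Hz."""
    win = max(1, int(fs * win_ms / 1000.0))
    power = np.convolve(semg.astype(float) ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(power)

def pitch_for_exertion(env, f_lo=220.0, f_hi=880.0):
    """Exertion-level task: map the normalized envelope to a continuous
    pitch curve (stronger contraction = higher pitch)."""
    norm = env / env.max() if env.max() > 0 else env
    return f_lo * (f_hi / f_lo) ** norm   # exponential (musical) mapping

def click_times_for_onsets(env, fs, threshold=0.1):
    """Activation-timing task: a discrete click at each upward crossing
    of the normalized envelope through `threshold`."""
    norm = env / env.max() if env.max() > 0 else env
    above = norm > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return onsets / fs                    # click times in seconds
```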

