Proceedings of the 24th International Conference on Auditory Display - ICAD 2018
Published By The International Community For Auditory Display

0967090458

Author(s):  
Ridwan Ahmed Khan ◽  
Myounghoon Jeon ◽  
Tejin Yoon

Performing independent physical exercise is critical to maintaining good health, but it is especially hard for people with visual impairments. To address this problem, we have developed a Musical Exercise platform for people with visual impairments, so that they can perform exercises consistently and with good form. We designed six conditions: blindfolded or visual without audio, and blindfolded or visual with one of two types of audio feedback (continuous vs. discrete). Eighteen sighted participants took part in the experiment, performing two exercises (squat and wall sit) under all six conditions. The results show that Musical Exercise is a usable exercise assistance system with no adverse effect on exercise completion time or perceived workload. The results also show that with a specific sound design (i.e., discrete), participants in the blindfolded condition can exercise as consistently as participants in the non-blindfolded condition. This implies that not all sounds work equally well, and thus care is required to refine auditory displays. The potential and limitations of Musical Exercise and future work are discussed in light of the results.


Author(s):  
Doon MacDonald ◽  
Tony Stockman

This paper presents SoundTrAD, a method and tool for designing auditory displays for user interfaces. SoundTrAD brings together ideas from user interface design and soundtrack composition, and supports novice auditory display designers in building an auditory user interface. The paper argues for the need for such a method before describing its fundamental structure and the construction of the supporting tools. The second half of the paper applies SoundTrAD to an autonomous driving scenario and demonstrates its use in prototyping auditory displays for a wide range of scenarios.


Author(s):  
Kelly Snook ◽  
Tarik Barri ◽  
Joachim Goßmann ◽  
Jason Potts ◽  
Margaret Schedel ◽  
...  

This paper describes the first steps in the creation of a new scientific and musical instrument to be released in 2019 for the 400th anniversary of Johannes Kepler's Harmonies of the World, which laid out his three laws of planetary motion and launched the field of modern astronomy. Concordia is a modularly extensible musical instrument, with its first software and hardware modules and underlying framework under construction now. The instrument is being designed in an immersive extended-reality (XR) environment with scientifically accurate visualizations and data-transparent sonifications of planetary movements, rooted in the musical and mathematical concepts of Johannes Kepler [1] and extrapolated into visualizations by Hartmut Warm [2]. Principles of game design, data sonification/visualization optimization, and digital and analog music synthesis are used in the 3D presentation of information, the user experience (UX), and the controls of the instrument, with an optional DIY hardware “cockpit” interface. The instrument hardware and software are both designed to be modular and open source; Concordia can be played virtually, without the DIY cockpit, on a mobile platform, or users can build or customize their own interfaces, such as traditional keyboards, button grids, or gestural controllers with haptic feedback. It is designed to enable and reward practice and virtuosity through learning levels borrowed from game design, gradually building listening skills for decoding sonified information. The frameworks for uploading, verifying, and accessing the data; programming and verifying hardware and software module builds; tracking instrument usage; and managing the instrument's economic ecosystem are being built using a combination of distributed computational technologies and peer-to-peer networks, including blockchain and the InterPlanetary File System (IPFS).
Participants in Concordia fall into three general categories, listed here in decreasing degrees of agency: 1) Contributors; 2) Players; and 3) Observers. This paper lays out the broad structure of Concordia, describes progress on the first software module, and explores the creative, social, economic, and educational potential of Concordia as a new type of creative ecosystem.


Author(s):  
Christopher Jette ◽  
James H. J. Buchholz

This paper discusses the composition Fluor Sonescence, which combines trombone, electronics, and video. The trombone and electronics are a mediated sonification of the video component. The video is a high-framerate capture of the air motions produced by sound emanating from a brass instrument; this material is translated into sound and also serves as the final video component. The paper begins with a description of the data-collection process and an overview of the compositional components. This is followed by a detailed description of the composition of the three components of Fluor Sonescence, with a discussion of technical and aesthetic concerns interwoven throughout. The relationship of Fluor Sonescence to the composer's earlier works and the capture method for the source material are also discussed. The paper is an overview of a specific sonification project that is part of a larger trajectory of work. Please see https://vimeo.com/255790972/ to hear and view Fluor Sonescence.


Author(s):  
Ivica Ico Bukvic ◽  
Gregory D. Earle

The following paper presents a cross-disciplinary snapshot of 21st-century research in sonification and leverages the review to identify a new immersive exocentric approach to studying the human capacity to perceive spatial aural cues. The paper further defines immersive exocentric sonification, highlights its unique affordances, and presents an argument for its potential to fundamentally change the way we understand and study the human capacity for location-aware audio pattern recognition. Finally, the paper describes an example of an externally funded research project that aims to tackle this newly identified research white space.


Author(s):  
Shin’ichiro Uno ◽  
Yasuo Suzuki ◽  
Takashi Watanabe ◽  
Miku Matsumoto ◽  
Yan Wang

We developed software called SIPReS, which describes two-dimensional images with sound. With this system, visually impaired people can tell the location of a certain point in an image just by hearing notes whose frequencies are assigned according to the brightness of the point the user touches. It runs on Android smartphones and tablets. We conducted a small-scale experiment to see whether a visually impaired person could recognize images with SIPReS. In the experiment, the subject successfully recognized whether or not an object was present, as well as its location. The experiment suggests the application's potential as image-recognition software.
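The abstract does not give SIPReS's exact brightness-to-frequency mapping; as a rough illustration, the core idea can be sketched as a linear mapping (the function name and the frequency range below are our own assumptions, not the authors' implementation):

```python
def brightness_to_frequency(brightness, f_min=200.0, f_max=2000.0):
    """Linearly map a pixel brightness in [0, 255] to a frequency in hertz.

    A brighter pixel under the user's finger yields a higher-pitched note.
    """
    if not 0 <= brightness <= 255:
        raise ValueError("brightness must be in [0, 255]")
    return f_min + (brightness / 255.0) * (f_max - f_min)
```

With these illustrative bounds, a black pixel would sound at 200 Hz and a white one at 2000 Hz; a real implementation might instead use a logarithmic mapping so that equal brightness steps sound like equal pitch intervals.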


Author(s):  
Niklas Rönnberg ◽  
Jonas Löwgren

We present Photone, an interactive installation combining photographic images and musical sonification. An image is displayed, and a dynamic musical score is generated based on the overall color properties of the image and the color value of the pixel under the cursor; hence, the music changes as the user moves the cursor. This simple approach turns out to have interesting experiential qualities in use. The composition of images and music invites the user to explore combinations of hues, textures, and musical sounds. We characterize the resulting experience in Photone as one of modal synergy, in which visual and auditory output combine holistically with the chosen interaction technique. This tentative finding is potentially relevant to further research in auditory displays and multimodal interaction.
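The abstract does not detail how the pixel's color value drives the score. One simple pixel-to-note mapping, purely as a hedged sketch (the function, note range, and parameter choices are assumptions, not Photone's actual design), could derive a pitch from the hue and a loudness from the brightness of the pixel under the cursor:

```python
import colorsys

def pixel_to_note(r, g, b):
    """Map an RGB pixel (0-255 per channel) to a (MIDI note, velocity) pair.

    Hue selects a semitone within the octave above middle C; the HSV value
    (brightness) sets the note's velocity (loudness).
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    note = 60 + int(h * 12) % 12   # hue -> semitone offset from C4
    velocity = int(v * 127)        # brightness -> MIDI velocity
    return note, velocity
```

Under this sketch, a pure red pixel sounds as C4 at full loudness, while darker pixels of the same hue play the same pitch more quietly, which loosely matches the cursor-driven behavior the abstract describes.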


Author(s):  
Takahiko Tsuchiya ◽  
Jason Freeman

Melodic sonification, in which data modulates the pitch of an audio synthesizer over time, is one of the most common methods of sonification. This simple method, however, still raises questions about how we listen to a melody and perceive the motions and patterns characterized by the underlying data. We argue that analytical listening to such melodies may focus on different ranges of the melody at different times and discover the pitch (and data) relationships gradually, over repeated listenings. To examine such behaviors in real-time listening to a melodic sonification, we conducted a user study employing interactive time- and pitch-resolution controls. The study also examines the relationships of these changing time and pitch resolutions to perceived musicality. The results indicate a stronger general relationship between time progression and the use of the time-resolution control to analyze data characteristics, while the pitch-resolution control correlates more with subjective perceptions of musicality.
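To make the idea of a pitch-resolution control concrete, here is a minimal sketch of a melodic sonification with quantized pitch (the function name, MIDI range, and defaults are illustrative choices of ours, not the study's interface): fewer pitch levels yield a coarser melody that emphasizes broad trends, while more levels expose finer data detail.

```python
def sonify_series(data, levels=12, base_midi=48, range_semitones=24):
    """Map a numeric series to MIDI pitches over a two-octave range,
    quantized to `levels` discrete pitch steps (the pitch resolution)."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0                # avoid division by zero on flat data
    step = range_semitones / (levels - 1)  # semitones between adjacent levels
    notes = []
    for x in data:
        norm = (x - lo) / span                       # normalize to [0, 1]
        level = min(int(norm * levels), levels - 1)  # quantize to a pitch level
        notes.append(base_midi + round(level * step))
    return notes
```

Lowering `levels` (say, to 5) mimics turning the pitch-resolution control down: many distinct data values collapse onto the same note, which by the study's findings may sound more musical while conveying less analytic detail.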


Author(s):  
David Worrall

The idea that sound can convey information predates the modern era, and certainly the computational present. Data sonification can be broadly described as the creation, study, and use of non-speech aural representations to convey information. As a field of contemporary enquiry and practice, data sonification is young, interdisciplinary, and evolving, existing in parallel to the field of data visualization. Drawing on older practices such as auditing, and on the use of information messaging in music, this paper provides a historical understanding of how sound and its representational deployment in communicating information has changed. In doing so, it aims to encourage a critical awareness of some of the sociocultural as well as technical assumptions often adopted in sonifying data, especially those developed in the context of Western music of the last half-century or so.


Author(s):  
Michael Quinton ◽  
Iain McGregor ◽  
David Benyon

This study aims to provide insight into effective sonification design. There are currently no standardized design methods, which allows for creative development approaches. Sonification has been implemented in many different applications, from scientific data representation to novel styles of musical expression. This means that methods of practice can vary greatly. The indistinct line between art and science may be why sonification is still sometimes viewed by scientists with a degree of scepticism. Some well-established practitioners argue that it is poor design that renders sonifications meaningless, in turn having an adverse effect on acceptance. To gain a deeper understanding of sonification research and development, 11 practitioners were interviewed about their methods of sonification design and their insights. The findings present information about sonification research and development, and a variety of views on sonification design practice.

