Of Pipes and Patches: Listening to augmented pipe organs

2019 ◽  
Vol 24 (1) ◽  
pp. 41-53
Author(s):  
Christophe d’Alessandro ◽  
Markus Noisternig

Pipe organs are complex timbral synthesisers in an early acousmatic setting, and they have always accompanied the evolution of music and technology. The most recent development is digital augmentation: the organ sound is captured, transformed and then played back in real time. The present augmented organ project relies on three main aesthetic principles: microphony, fusion and instrumentality. Microphony means that sounds are captured inside the organ case, close to the pipes. Real-time audio effects are then applied to the internal sounds before they are played back over loudspeakers; the transformed sounds interact with the original sounds of the pipe organ. The fusion principle exploits the blending effect of the acoustic space surrounding the instrument: the room response merges the many individual sound sources into a consistent, organ-typical soundscape at the listener's position. The instrumentality principle restricts electroacoustic processing to organ sounds only, excluding non-organ sound sources or samples. This article proposes a taxonomy of musical effects and discusses aesthetic questions concerning the perceptual fusion of acoustic and electronic sources. Both extended playing techniques and digital audio effects can create musical gestures that conjoin the heterogeneous sonic worlds of pipe organs and electronics. The result is a paradoxical listening experience of unity in diversity: the music is at once electroacoustic and instrumental.
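The capture-transform-playback chain described above can be pictured with a short real-time sketch. It is a minimal illustration only: the effect (a simple feedback delay), the sample rate and the run time are assumptions, not the project's actual processing chain, and the per-sample loop favours clarity over efficiency.

```python
# Minimal sketch of a microphony-style capture/transform/playback loop.
# The feedback-delay effect and all parameters are illustrative assumptions.
import numpy as np
import sounddevice as sd

SR = 48000
DELAY = int(0.25 * SR)      # 250 ms delay line (assumed, for illustration)
FEEDBACK = 0.4              # assumed feedback amount

buf = np.zeros(DELAY, dtype=np.float32)
pos = 0

def callback(indata, outdata, frames, time, status):
    """Mix the live (acoustic) signal with its delayed, fed-back copy."""
    global pos
    out = np.empty(frames, dtype=np.float32)
    for n in range(frames):
        delayed = buf[pos]
        out[n] = indata[n, 0] + delayed          # original + transformed sound
        buf[pos] = indata[n, 0] + FEEDBACK * delayed
        pos = (pos + 1) % DELAY
    outdata[:, 0] = out

# Full-duplex stream: microphone in, loudspeaker out, processed block by block.
with sd.Stream(samplerate=SR, channels=1, callback=callback):
    sd.sleep(10_000)        # run for ten seconds
```

In the microphony setting, the input would come from microphones placed inside the organ case, so the transformed copy re-enters the same acoustic space as the pipes and blends with them through the room response, which is the fusion principle at work.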

2021 ◽  
Vol 263 (1) ◽  
pp. 5071-5082
Author(s):  
William D'Andrea Fonseca ◽  
Davi Rocha Carvalho ◽  
Jacob Hollebon ◽  
Paulo Henrique Mareze ◽  
Filippo Maria Fazi

Binaural rendering is a technique that seeks to generate virtual auditory environments that replicate the natural listening experience, including the three-dimensional perception of spatialized sound sources. Real-time knowledge of the listener's position, and more specifically of their head and ear orientations, allows movement to be transferred from the real world to virtual spaces, which in turn enables richer immersion and interaction with the virtual scene. This study presents the use of a simple laptop-integrated camera (webcam) as a head-tracking sensor, removing the need to mount any hardware on the listener's head. The software is built on top of a state-of-the-art face landmark detection model from Google's MediaPipe library for Python. Manipulations of the coordinate system translate the origin from the camera to the center of the subject's head and extract rotation matrices and Euler angles. Low-latency communication is enabled via the User Datagram Protocol (UDP), allowing the head tracker to run in parallel with, and asynchronously from, the main application. Empirical experiments demonstrate reasonable accuracy and quick response, indicating suitability for real-time applications that do not require methodical precision.
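A minimal sketch of such a tracker, assuming MediaPipe's FaceMesh solution plus OpenCV's solvePnP for the pose step, might look as follows. The landmark indices, the generic 3D face model, the pinhole camera intrinsics and the UDP port are illustrative assumptions, not the authors' published values.

```python
# Hedged sketch: webcam head tracker streaming Euler angles over UDP.
import socket
import struct

import cv2
import numpy as np
import mediapipe as mp

UDP_ADDR = ("127.0.0.1", 9000)  # hypothetical port of the binaural renderer
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Generic 3D face model (nose tip, chin, eye corners, mouth corners), assumed.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)], dtype=np.float64)
LANDMARK_IDS = [1, 152, 33, 263, 61, 291]  # approximate FaceMesh indices

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        continue
    lm = result.multi_face_landmarks[0].landmark
    pts2d = np.array([(lm[i].x * w, lm[i].y * h) for i in LANDMARK_IDS])

    # Pinhole approximation: focal length ~ image width, center at midpoint.
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, pts2d, cam, None)
    R, _ = cv2.Rodrigues(rvec)  # head rotation matrix

    # ZYX Euler angles (yaw, pitch, roll) in degrees.
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))

    # Fire-and-forget datagram so the tracker never blocks the audio engine.
    sock.sendto(struct.pack("fff", float(yaw), float(pitch), float(roll)), UDP_ADDR)

cap.release()
```

On the receiving end, the renderer unpacks three floats per datagram and updates the rendering rotation; because UDP is connectionless and unacknowledged, a late or lost packet is simply superseded by the next one, which is what lets the tracker run asynchronously at low latency.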


Author(s):  
Armin Schäfer ◽  
Julia Kursell

This chapter investigates concepts of space in French composer Gérard Grisey’s music. From the 1970s onward, he used sound spectrograms, introducing the compositional technique of “spectralism,” which is rooted in Arnold Schoenberg’s concept of Klangfarbe. The cycle Les Espaces acoustiques (1974–1985) uses this technique to create a sequence of musical forms that grow from the acoustic seed of a single tone. The cycle can be traced back to a new role for acoustic space, which emerged in early atonal composition. Grisey confronts the natural order of acoustic space with the human order of producing and perceiving sounds. The dis-symmetry between these two orders of magnitude is further explored in Grisey’s Le Noir de l’Étoile (1990) for six percussionists, magnetic tape, and real-time astrophysical signals. This piece unfolds a triadic constellation of spatial orders in which human perception and performance are staged between musical micro-space and cosmic macro-space.


2011 ◽  
Vol 2-3 ◽  
pp. 123-126
Author(s):  
Bin Xu ◽  
Dan Yang ◽  
Yun Yi Zhang ◽  
Xu Wang

In this paper, we propose a peripheral sound visualization method for the deaf based on an improved ripple mode. In the proposed mode, we designed processes for transforming sound intensity and determining the locations of sound sources. A power spectrum function is used to determine sound intensity, and an ART1 neural network is applied to classify the real-time input sound signals and to display the locations of the sound sources. We present software that aids the development of peripheral displays, and four sample peripheral displays demonstrate the toolkit’s capabilities. The results show that the proposed ripple mode correctly conveyed the combined information of sound intensity and sound-source location, and that the ART1 neural network identified input audio signals accurately. Moreover, participants in the study were able to obtain more information about the locations of sound sources.
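As a rough illustration of the intensity step, a power-spectrum-based per-frame intensity estimate might look like the sketch below. The frame length, sample rate and windowing are assumptions, since the paper does not specify its power spectrum function.

```python
# Hedged sketch: per-frame sound intensity from a power spectrum estimate.
import numpy as np

def frame_intensity(frame: np.ndarray) -> float:
    """Return intensity in dB for one audio frame via its power spectrum."""
    windowed = frame * np.hanning(len(frame))        # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    power = (np.abs(spectrum) ** 2) / len(frame)     # periodogram estimate
    return 10.0 * np.log10(power.sum() + 1e-12)      # dB, guarded against log(0)

# Example: a 1 kHz tone scores higher (louder) than the same tone at half amplitude.
t = np.arange(512) / 16000
print(frame_intensity(np.sin(2 * np.pi * 1000 * t)))
print(frame_intensity(0.5 * np.sin(2 * np.pi * 1000 * t)))
```

In the system described by the abstract, such an intensity value would drive the amplitude of the on-screen ripple, while the ART1 classifier operates on the input signals to label the sound and place it on the display.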

