Auditory Space: Recently Published Documents

TOTAL DOCUMENTS: 329 (five years: 30)
H-INDEX: 46 (five years: 3)

2021 · Vol 12 (1) · pp. 173
Author(s): Akio Honda, Kei Maeda, Shuichi Sakamoto, Yôiti Suzuki

The deterioration of sound localization accuracy during a listener's head/body rotation is independent of the listener's rotation velocity (Honda et al., 2016). However, whether this deterioration occurs only during physical movement in a real environment remains unclear. In this study, we addressed this question by subjecting physically stationary listeners to visually induced self-motion, i.e., vection. Two conditions were adopted: one with a visually induced perception of self-motion (vection) and one without it (control). Under both conditions, a short noise burst (30 ms) was presented from a loudspeaker in a horizontal circular array in front of the listener. The listeners were asked to judge the location of the acoustic stimulus relative to their subjective midline. The results showed that detection thresholds relative to the subjective midline were poorer, i.e., sound localization accuracy was lower, under the vection condition than under the control condition. This indicates that sound localization can be compromised by visually induced self-motion perception. These findings support the idea that self-motion information is crucial for auditory space perception and could inform the design of dynamic binaural displays that require fewer computational resources.
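For readers unfamiliar with the analysis implied here, the sketch below shows one common way such left/right judgments can be turned into a midline-based detection threshold: fit a cumulative-Gaussian psychometric function to the proportion of "right of midline" responses as a function of speaker azimuth. The function name, azimuths, and response proportions are invented for illustration and are not values from Honda et al.

```python
# Hypothetical sketch: estimating a midline-based detection threshold by
# fitting a cumulative Gaussian to left/right judgments. All numbers below
# are made-up illustration data, not results from the study above.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(azimuth_deg, pse, sigma):
    """Probability of a 'right of midline' response.

    pse   -- point of subjective equality (the subjective midline), in degrees
    sigma -- spread of the fitted cumulative Gaussian; one common threshold
             definition is the azimuth change from 50% to ~84% 'right' responses,
             i.e., one sigma.
    """
    return norm.cdf(azimuth_deg, loc=pse, scale=sigma)

# Illustrative data: speaker azimuths (deg, negative = left) and the
# proportion of 'right' responses observed at each azimuth.
azimuths = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
p_right  = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.97])

(pse_hat, sigma_hat), _ = curve_fit(psychometric, azimuths, p_right, p0=[0.0, 5.0])
print(f"subjective midline ~ {pse_hat:.1f} deg, threshold (sigma) ~ {sigma_hat:.1f} deg")
```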


Author(s): Lore Thaler, Liam J. Norman

What factors are important in the calibration of mental representations of auditory space? A substantial body of research investigating the audiospatial abilities of people who are blind has shown that visual experience might be an important factor for accurate performance in some audiospatial tasks. Yet it has also been shown that long-term experience using click-based echolocation might play a similar role, with blind expert echolocators demonstrating auditory localization abilities that are superior to those of people who are blind and do not use click-based echolocation (Vercillo et al., Neuropsychologia 67: 35–40, 2015). Based on this, we might predict that training in click-based echolocation leads to improved performance in auditory localization tasks in people who are blind. Here we investigated this hypothesis in a sample of 12 adults who have been blind from birth. We did not find evidence of improved auditory localization after 10 weeks of training, despite significant improvement in echolocation ability. It is possible that longer-term experience with click-based echolocation is required for such effects to develop, or that other factors explain the association between echolocation expertise and superior auditory localization. Considering the practical relevance of click-based echolocation for people who are visually impaired, future research should address these questions.


2021
Author(s): Stephen Michael Town, Jennifer Kim Bizley

The location of sounds can be described in multiple coordinate systems that are defined relative to ourselves or to the world around us. World-centered hearing is critical for a stable understanding of sound scenes, yet it is unclear whether this ability is unique to human listeners or generalizes to other species. Here, we establish novel behavioral tests to determine the coordinate systems in which non-human listeners (ferrets) can localize sounds. We found that ferrets could learn to discriminate sounds using either world-centered or head-centered sound location, as evidenced by their ability to discriminate locations in one coordinate system across wide variations of location in the alternative system. Using infrequent probe sounds to assess broader generalization of spatial hearing, we demonstrated that in both head-centered and world-centered localization, animals used continuous maps of auditory space to guide behavior. Single-trial responses of individual animals were sufficiently informative that we could model sound localization using speaker position in specific coordinate systems and accurately predict ferrets' actions in held-out data. Our results thus demonstrate that non-human listeners can localize sounds in multiple spaces, including those defined by the world, which require abstraction across traditional, head-centered sound localization cues.
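The distinction between the two coordinate systems can be made concrete with a minimal sketch: a source's world-centered azimuth stays fixed in the room as the head turns, while its head-centered azimuth is obtained by subtracting the current head direction. The function names and the wrap-to-±180° convention below are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch of world-centered vs. head-centered sound azimuth.
def wrap_deg(angle):
    """Wrap an angle in degrees to the interval (-180, 180]."""
    return (angle - 180.0) % -360.0 + 180.0

def world_to_head(world_azimuth_deg, head_direction_deg):
    """Head-centered azimuth of a source, given the head's world-facing direction."""
    return wrap_deg(world_azimuth_deg - head_direction_deg)

# A speaker fixed at +60 deg in the world is at +90 deg re: the head when the
# head points to -30 deg, but at +20 deg when the head points to +40 deg.
print(world_to_head(60.0, -30.0))   # 90.0
print(world_to_head(60.0, 40.0))    # 20.0
```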


2021 · pp. 1357034X2110243
Author(s): Jacob Kingsbury Downs

In this article, I develop and redirect Julian Henriques’s model of sonic dominance through examination of accounts of acoustic violence and torture involving headphones. Specifically, I show how auditory experience has been weaponized as an intracorporeal phenomenon, with headphones effecting a sense of sounds invading the interior phenomenological space of the head. By analysing reported cases of sonic violence and torture involving headphones through a composite theoretical lens drawn from the fields of music, sound and body studies, I argue that in saturating the head’s perceived interior with sound, perpetrators of violence perform sonic dominance across two interrelated levels: the subjugation of interiorized auditory space via the notion of flooding, in which attention is directed towards the experience of the body as a vessel for sound; and the resulting manipulation of phenomenological head–mind linkages, with emphasis on the head as a ‘space’ for both sound and thought.


PLoS ONE · 2021 · Vol 16 (5) · pp. e0251827
Author(s): David Mark Watson, Michael A. Akeroyd, Neil W. Roach, Ben S. Webb

In dynamic multisensory environments, the perceptual system corrects for discrepancies arising between modalities. For instance, in the ventriloquism aftereffect (VAE), spatial disparities introduced between visual and auditory stimuli lead to a perceptual recalibration of auditory space. Previous research has shown that the VAE is underpinned by multiple recalibration mechanisms tuned to different timescales; however, it remains unclear whether these mechanisms use common or distinct spatial reference frames. Here we asked whether the VAE operates in eye- or head-centred reference frames across a range of adaptation timescales, from a few seconds to a few minutes. We developed a novel paradigm for selectively manipulating the contribution of eye- versus head-centred visual signals to the VAE by manipulating auditory locations relative to either the head orientation or the point of fixation. Consistent with previous research, we found that both eye- and head-centred frames contributed to the VAE across all timescales. However, we found no evidence for an interaction between spatial reference frames and adaptation duration. Our results indicate that the VAE is underpinned by multiple spatial reference frames that are similarly leveraged by the underlying time-sensitive mechanisms.
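As a purely illustrative aside (not the authors' model), one simple way to express "multiple recalibration mechanisms tuned to different timescales" is a pair of leaky integrators with different time constants whose outputs sum to the measured aftereffect. The time constants, learning rates, and disparity sequence below are arbitrary assumptions.

```python
# Hypothetical two-timescale sketch of a ventriloquism aftereffect (VAE):
# two leaky integrators, fast and slow, driven by audiovisual disparity.
import numpy as np

def simulate_vae(av_disparity_deg, dt=1.0, taus=(5.0, 120.0), gains=(0.05, 0.01)):
    """Accumulate a recalibration shift from a sequence of audiovisual disparities.

    av_disparity_deg -- visual-minus-auditory disparity on each step (deg)
    taus             -- decay time constants (s) of the fast and slow mechanisms
    gains            -- per-step learning rates of the two mechanisms
    Returns the summed shift of perceived auditory location over time.
    """
    states = np.zeros(len(taus))
    shift = []
    for d in av_disparity_deg:
        for i, (tau, g) in enumerate(zip(taus, gains)):
            states[i] += dt * (-states[i] / tau + g * d)
        shift.append(states.sum())
    return np.array(shift)

# Example: 60 s of a constant +10 deg disparity followed by 60 s without disparity.
disparity = np.concatenate([np.full(60, 10.0), np.zeros(60)])
shift = simulate_vae(disparity)
print(f"shift after adaptation: {shift[59]:.2f} deg; 60 s later: {shift[-1]:.2f} deg")
```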


2021
Author(s): Peter Loksa, Norbert Kopco

Background: The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audiovisual stimulation, requires reference frame (RF) alignment, since hearing and vision encode space in different RFs (head-centered, HC, vs. eye-centered, EC). Experimental studies examining the RF of the VAE have found inconsistent results: a mixture of HC and EC RFs was observed for VAE induced in the central region, while a predominantly HC RF was observed in the periphery. Here, a computational model examines these inconsistencies, as well as a newly observed EC adaptation induced by AV-aligned audiovisual stimuli.
Methods: The model has two versions, each containing two additively combined components: a saccade-related component characterizing the adaptation in auditory-saccade responses, and an auditory space representation adapted by ventriloquism signals either in the HC RF (HC version) or in a combination of HC and EC RFs (HEC version).
Results: The HEC model performed better than the HC model in the main simulation considering all the data, while the HC model was more appropriate when only the AV-aligned adaptation data were simulated.
Conclusion: Visual signals in a uniform mixed HC+EC RF are likely used to calibrate the auditory spatial representation, even after the EC-referenced auditory-saccade adaptation is accounted for.
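The additive structure of the two model versions can be sketched roughly as follows; the Gaussian shift fields, their widths, and the mixing weight are illustrative assumptions of this sketch, not Loksa and Kopco's actual implementation.

```python
# Rough sketch of the additive two-component structure described above.
import numpy as np

def gaussian_shift(az, center, amplitude, width=10.0):
    """A localized shift of perceived azimuth induced around an adapter location."""
    return amplitude * np.exp(-0.5 * ((az - center) / width) ** 2)

def predicted_shift(target_az, gaze_test, gaze_adapt, adapter_az,
                    vent_amp, sacc_amp, ec_weight=0.0):
    """Predicted response shift = saccade-related component + ventriloquism component.

    ec_weight = 0.0 mimics the head-centered (HC) version; values between 0 and 1
    mix in an eye-centered field that moves with gaze (HEC version).
    """
    saccade = gaussian_shift(target_az, adapter_az, sacc_amp)
    vent_hc = gaussian_shift(target_az, adapter_az, vent_amp)
    vent_ec = gaussian_shift(target_az - gaze_test, adapter_az - gaze_adapt, vent_amp)
    return saccade + (1.0 - ec_weight) * vent_hc + ec_weight * vent_ec

# The HC and HEC versions then differ only in whether ec_weight is fixed at 0
# or treated as a free parameter when fitting the data.
print(predicted_shift(target_az=10.0, gaze_test=0.0, gaze_adapt=-20.0,
                      adapter_az=10.0, vent_amp=3.0, sacc_amp=1.0, ec_weight=0.5))
```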


2021 · Vol 25 · pp. 233121652110453
Author(s): Z. Ellen Peng, Ruth Y. Litovsky

In complex listening environments, children can benefit from auditory spatial cues to understand speech in noise. When a spatial separation is introduced between the target and the masker, and/or when listening with two ears versus one, children can gain intelligibility benefits from one or more auditory cues for unmasking: monaural head shadow, binaural redundancy, and interaural differences. This study systematically quantified the contribution of individual auditory cues to binaural speech intelligibility benefits in children with normal hearing aged 6 to 15 years. In virtual auditory space, target speech was presented from +90° azimuth (i.e., the listener's right), and two-talker babble maskers were either co-located (+90° azimuth) or separated by 180° (−90° azimuth, the listener's left). Testing was conducted over headphones in monaural (right ear) or binaural (both ears) conditions. Results showed continuous improvement of speech reception thresholds (SRTs) between 6 and 15 years of age, with performance at 15 years still immature for both SRTs and intelligibility benefits that depend on more than one auditory cue. Given the early maturation of head shadow, the prolonged maturation of unmasking was likely driven by children's poorer ability to gain full benefit from interaural difference cues. In addition, children demonstrated a trade-off between the benefits from head shadow and those from interaural differences, suggesting an important aspect of individual differences in accessing auditory cues for binaural intelligibility benefits during development.
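One conventional way to quantify the three cues from the four SRT conditions described above (monaural/binaural crossed with co-located/separated maskers) is by simple differences between condition SRTs. The sketch below follows that logic with invented SRT values, not data from this study.

```python
# Hedged sketch: deriving cue-specific benefits (in dB) from four SRTs.
# Positive benefit = lower (better) SRT after the manipulation.
def cue_benefits(srt_mono_colocated, srt_mono_separated,
                 srt_bin_colocated, srt_bin_separated):
    return {
        # Maskers moved to the far side while listening with the near ear only.
        "head_shadow": srt_mono_colocated - srt_mono_separated,
        # Second ear added while target and maskers remain co-located.
        "binaural_redundancy": srt_mono_colocated - srt_bin_colocated,
        # Second ear added once the maskers are spatially separated.
        "interaural_differences": srt_mono_separated - srt_bin_separated,
    }

print(cue_benefits(srt_mono_colocated=-2.0, srt_mono_separated=-8.0,
                   srt_bin_colocated=-4.0, srt_bin_separated=-11.0))
# -> head shadow 6 dB, binaural redundancy 2 dB, interaural differences 3 dB
```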

