sound location
Recently Published Documents


TOTAL DOCUMENTS: 108 (five years: 10)

H-INDEX: 23 (five years: 1)

2021. Author(s): Stephen Michael Town, Katherine C Wood, Katarina C Poole, Jennifer Kim Bizley

A central question in auditory neuroscience is how far brain regions are functionally specialized for processing specific sound features such as sound location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the causal contribution of auditory cortex to hearing in multiple contexts. Here we tested the role of auditory cortex in both spatial and non-spatial hearing. We reversibly inactivated the border between the middle and posterior ectosylvian gyri using cooling (n = 2) or optogenetics (n = 1) as ferrets discriminated vowel sounds in clean and noisy conditions. Animals with cooling loops were then retrained to localize noise bursts from multiple locations and retested with cooling. In both ferrets, cooling impaired sound localization and vowel discrimination in noise, but not discrimination in clean conditions. We also tested the effects of cooling on vowel discrimination in noise when vowel and noise were colocated or spatially separated. Here, cooling exaggerated deficits in discriminating vowels with colocated noise, resulting in larger performance benefits from spatial separation of sounds and thus stronger spatial release from masking during cortical inactivation. Together, our results show that auditory cortex contributes to both spatial and non-spatial hearing, consistent with single-unit recordings in the same brain region. The deficits we observed did not reflect general impairments in hearing, but rather were specific to more realistic behaviors that require the use of information about both sound location and identity.
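Spatial release from masking, as used in the abstract above, can be quantified as the performance benefit when target and masker are spatially separated rather than colocated. A minimal sketch with hypothetical performance numbers (illustrative only, not data from the study):

```python
# Sketch (hypothetical numbers): spatial release from masking (SRM) as the
# performance benefit when vowel and noise are spatially separated rather
# than colocated.
def spatial_release(pc_separated, pc_colocated):
    """SRM in percentage points of vowel-discrimination performance."""
    return pc_separated - pc_colocated

# If cooling lowers colocated performance more than separated performance,
# SRM grows during inactivation (as the abstract reports):
control = spatial_release(pc_separated=80.0, pc_colocated=75.0)  # 5.0
cooled = spatial_release(pc_separated=78.0, pc_colocated=60.0)   # 18.0
print(control, cooled)
```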


2021. Author(s): Stephen Michael Town, Jennifer Kim Bizley

The location of sounds can be described in multiple coordinate systems that are defined relative to ourselves, or the world around us. World-centered hearing is critical for stable understanding of sound scenes, yet it is unclear whether this ability is unique to human listeners or generalizes to other species. Here, we establish novel behavioral tests to determine the coordinate systems in which non-human listeners (ferrets) can localize sounds. We found that ferrets could learn to discriminate sounds using either world-centered or head-centered sound location, as evidenced by their ability to discriminate locations in one space across wide variations in sound location in the alternative coordinate system. Using infrequent probe sounds to assess broader generalization of spatial hearing, we demonstrated that in both head and world-centered localization, animals used continuous maps of auditory space to guide behavior. Single trial responses of individual animals were sufficiently informative that we could then model sound localization using speaker position in specific coordinate systems and accurately predict ferrets' actions in held-out data. Our results demonstrate that non-human listeners can thus localize sounds in multiple spaces, including those defined by the world that require abstraction across traditional, head-centered sound localization cues.
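The two coordinate systems in the abstract above are related by a simple transformation: a source's head-centered azimuth is its world-centered azimuth minus the direction the head is facing. A minimal sketch of that relation (an illustrative assumption, not the authors' analysis code):

```python
# Sketch: relation between world-centered and head-centered sound azimuth
# assumed by the two discrimination tasks described above.
def head_centered_azimuth(world_azimuth_deg, head_direction_deg):
    """Azimuth of a source relative to the head, given its world-centered
    azimuth and the direction the head is facing (both in degrees)."""
    # Wrap the difference into [-180, 180) so left/right is well defined.
    return (world_azimuth_deg - head_direction_deg + 180.0) % 360.0 - 180.0

# A speaker fixed at +30 deg in the world appears 60 deg to the left
# when the listener faces +90 deg.
print(head_centered_azimuth(30.0, 90.0))  # -60.0
```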


Author(s): V. N. Skakunov, L. V. Zhoga, S. E. Terekhov, V. U. Barhatov

Methods of sound location and search algorithms for sound sources using the acoustic system of an autonomous mobile robot are considered. An implementation scheme for the embedded acoustic system is proposed, and the results of experimental studies are presented.


2020, Vol 148 (1), pp. EL14-EL19. Author(s): Xiaoli Zhong, Zihui Yang, Shengfeng Yu, Hao Song, Zhenghui Gu

2020, Vol 31 (03), pp. 195-208. Author(s): Erica E. Bennett, Ruth Y. Litovsky

Abstract Spatial hearing abilities in children with bilateral cochlear implants (BiCIs) are typically improved when two implants are used compared with a single implant. However, even with BiCIs, spatial hearing is still worse than in normal-hearing (NH) age-matched children. Here, we focused on children younger than three years, i.e., in their toddler years. Prior research at this age focused on measuring discrimination of sounds from the right versus left.

This study measured both discrimination and sound location identification in a nine-alternative forced-choice paradigm using the "reaching for sound" method, whereby children reached for sounding objects as a means of capturing their spatial hearing abilities. Discrimination was measured with sounds randomly presented to the left versus right, with loudspeakers at fixed angles ranging from ±60° to ±15°. In a separate task, sound location identification was measured for locations ranging over ±60° in 15° increments. Thirteen children with BiCIs (27–42 months old) and fifteen age-matched NH children participated.

Discrimination and sound localization tasks were completed by all subjects. For the left–right discrimination task, participants were required to reach a criterion of 4/5 correct trials (80%) at each angular separation before beginning the localization task. For sound localization, data were analyzed in two ways. First, percent-correct scores were tallied for each participant. Second, for each participant, the root-mean-square error was calculated to determine the average distance between response and stimulus, indicative of localization accuracy.

All BiCI users were able to discriminate left versus right at angles as small as ±15° when listening with two implants; however, performance was significantly worse when listening with a single implant. All NH toddlers also scored >80% correct at ±15°. Sound localization results revealed root-mean-square errors averaging 11.15° in NH toddlers. Children in the BiCI group were generally unable to identify source location on this more complex task (average error 37.03°). Although some toddlers with BiCIs are able to localize sound in a manner consistent with NH toddlers, for the majority of toddlers with BiCIs sound localization abilities are still emerging.
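The root-mean-square error used above to quantify localization accuracy can be sketched as follows (the trial data here are hypothetical, not from the study):

```python
import math

# Root-mean-square error between stimulus and response azimuths across
# trials: the average angular distance between where the sound was and
# where the child reached.
def rms_error(stimulus_deg, response_deg):
    squared = [(s - r) ** 2 for s, r in zip(stimulus_deg, response_deg)]
    return math.sqrt(sum(squared) / len(squared))

# Toy example with four trials on a +/-60 deg, 15-deg-spaced speaker array:
stimuli = [-60, -15, 15, 60]
responses = [-45, -15, 30, 45]
print(round(rms_error(stimuli, responses), 2))  # 12.99
```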


eNeuro, 2020, Vol 7 (2), pp. ENEURO.0244-19.2020. Author(s): M. V. Beckert, B. J. Fischer, J. L. Pena

Author(s): V. V. Orlov, M. I. Lysyi, V. A. Sivak, D. A. Kuprienko, V. M. Kulchytcskyi, ...

2019, Vol 30 (3), pp. 1103-1116. Author(s): Kiki van der Heijden, Elia Formisano, Giancarlo Valente, Minye Zhan, Ron Kupers, ...

Abstract Auditory spatial tasks induce functional activation in the occipital—visual—cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal—auditory—cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general—independent of sound location—was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing were different for sighted and blind participants in planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a 2-channel, opponent coding model for the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
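The 2-channel opponent-coding model mentioned above represents azimuth as the difference between two broadly tuned hemifield channels. A minimal illustrative sketch (the sigmoidal channel shape and slope parameter are assumptions for illustration, not the fitted model from the study):

```python
import math

# Sketch of a two-channel opponent code for sound azimuth: one channel
# tuned to the right hemifield, one to the left, with azimuth read out
# from their difference.
def channel_response(azimuth_deg, preferred_side, slope=0.05):
    """Sigmoidal rate of a hemifield channel (+1 = right, -1 = left)."""
    return 1.0 / (1.0 + math.exp(-slope * preferred_side * azimuth_deg))

def opponent_signal(azimuth_deg):
    """Right-minus-left channel difference; monotonic in azimuth."""
    return channel_response(azimuth_deg, +1) - channel_response(azimuth_deg, -1)

# The opponent signal is 0 at the midline and grows toward either side.
for az in (-90, 0, 90):
    print(az, round(opponent_signal(az), 3))
```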


For a long time, law enforcement organizations have increasingly used sound location systems to detect potential gunfire in open spaces and in forests. The killing of animals and wildlife in forests using guns is increasing day by day; to detect gunfire in forests, a sound detection system has been proposed. The proposed systems have evolved from simple microphone arrangements used to estimate the location of gunfire to within a few feet of its actual occurrence. Moreover, basic designs require little in the way of programming or engineering knowledge. Finally, the system will be helpful in overcoming several problems, such as recognizing from which direction the shot came and at what distance the gunshot sound occurred. This research will play an active role in society now and in the future, in additional applications as well.
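A common building block for microphone-array gunshot locators like those described above is estimating the bearing of the source from the time difference of arrival (TDOA) at a pair of microphones, via theta = arcsin(c * dt / d) for a far-field source. A sketch under that assumed geometry (not the paper's implementation):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def bearing_from_tdoa(delta_t_s, mic_spacing_m):
    """Bearing (degrees off broadside) of a far-field source, from the
    arrival-time difference at two microphones a known distance apart."""
    x = SPEED_OF_SOUND * delta_t_s / mic_spacing_m
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.degrees(math.asin(x))

# A 0.5 ms delay across microphones 0.5 m apart puts the source about
# 20 degrees off broadside.
print(round(bearing_from_tdoa(0.0005, 0.5), 1))  # 20.1
```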



