spatial hearing
Recently Published Documents

TOTAL DOCUMENTS: 214 (five years: 58)
H-INDEX: 25 (five years: 3)

2021 ◽  
Author(s):  
Chadlia Karoui ◽  
Kuzma Strelnikov ◽  
Pierre Payoux ◽  
Anne-Sophie Salabert ◽  
Chris James ◽  
...  

In asymmetric hearing loss (AHL), the normal pattern of contralateral hemispheric dominance for monaural stimulation is modified, with a shift towards the hemisphere ipsilateral to the better ear. The extent of this shift has been shown to relate to sound localisation deficits. In this study, we examined whether cochlear implantation to treat AHL can restore the normal functional pattern of auditory cortical activity, and whether this relates to improved sound localisation. We recruited 10 subjects with a cochlear implant for AHL (AHL-CI) and 10 normally-hearing controls. The participants performed a voice/non-voice discrimination task with binaural and monaural presentation of the sounds, and cortical activity was measured using positron emission tomography (PET) brain imaging with an H2¹⁵O (oxygen-15-labelled water) tracer. Auditory cortical activity was lower in the AHL-CI participants for all conditions. A cortical asymmetry index was calculated and showed that normal contralateral dominance was restored in the AHL-CI patients for the non-implanted ear, but not for the ear with the cochlear implant. Contralateral dominance for the non-implanted ear strongly correlated with sound localisation performance (rho = 0.8, p < 0.05). We conclude that the restoration of binaural mechanisms in AHL-CI subjects reverses the abnormal lateralisation pattern induced by the deafness, and that this leads to improved spatial hearing. Our results suggest that cochlear implantation fosters the rehabilitation of binaural excitatory/inhibitory cortical interactions, which could enable the reconstruction of the auditory spatial selectivity needed for sound localisation.
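A cortical asymmetry index of the kind described above is commonly computed as a normalized contrast between contralateral and ipsilateral activation, and the reported rho is a Spearman rank correlation. A minimal sketch of both, using invented activation values rather than the study's data:

```python
# Hypothetical illustration: one common form of asymmetry index is
# (contra - ipsi) / (contra + ipsi), ranging from -1 to +1, where
# positive values indicate contralateral dominance.
def asymmetry_index(contra, ipsi):
    return (contra - ipsi) / (contra + ipsi)

def spearman_rho(x, y):
    """Spearman rank correlation (simple form, no tie handling)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Made-up activation values (arbitrary units):
ai = asymmetry_index(contra=7.2, ipsi=4.8)  # 0.2 -> contralateral dominance
```

With real data, each subject's index would be correlated against a localisation score via `spearman_rho` (or `scipy.stats.spearmanr`, which also handles ties and p-values).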


Author(s):  
Majid Ashrafi ◽  
Fatemeh Maharati ◽  
Sadegh Jafarzadeh Bejestani ◽  
Alireza Akbarzadeh Baghban

Background and Aim: Spatial hearing is a prerequisite for the proper function of the listener in complex auditory environments. In the present study, a Persian version of the dynamic spatial-quick speech in noise (DS-QSIN) test was developed, taking into account the factors known to affect the test; five lists were then administered to normal-hearing subjects and their reliability was assessed. Methods: To construct five new lists modeled on the original quick speech in noise (QSIN) test, we used frequent, familiar, and difficult words to construct unpredictable sentences. After determining the content and face validity of the sentences, 30 selected sentences were played using DS-QSIN software for 35 subjects aged 18–25 years. Reliability was assessed by repeating the test after two weeks. Results: According to expert judges, these 30 sentences showed acceptable content and face validity after the changes. The average signal-to-noise ratio (SNR) loss across the five lists was –5.2 dB. No significant difference was seen between men and women on any list, and there was no difference in average SNR loss between the five lists. Regarding reliability, the test-retest correlation coefficient was 0.5 to 0.7 (p<0.05). The intra-class correlation coefficient between test and retest was statistically significant (p<0.001), confirming that the lists have high reliability and repeatability. Conclusion: The DS-QSIN test showed good validity and reliability and can be helpful in diagnosis and in selecting the best method for rehabilitation of people with a spatial hearing disorder.
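For context on the SNR-loss metric above: in the original English QuickSIN, each list presents six sentences at SNRs from 25 down to 0 dB in 5-dB steps, with five key words per sentence, and SNR loss is scored as 25.5 minus the total key words correct. A sketch under the assumption that the DS-QSIN follows the same scoring convention (the Persian adaptation may differ):

```python
# QuickSIN-style scoring (assumed convention: 6 sentences per list,
# 5 key words each, SNRs stepping 25 -> 0 dB in 5-dB steps).
def snr_loss(words_correct_per_sentence):
    """SNR loss = 25.5 - total key words correct (max 30)."""
    total = sum(words_correct_per_sentence)
    return 25.5 - total

# A listener who repeats nearly every key word gets a low (good) SNR loss:
loss = snr_loss([5, 5, 5, 5, 5, 4])  # 25.5 - 29 = -3.5 dB
```

Negative values, like the –5.2 dB average reported here for normal-hearing listeners, indicate better-than-reference performance.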


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Stephen R. Dennison ◽  
Heath G. Jones ◽  
Alan Kan ◽  
Ruth Y. Litovsky

2021 ◽  
Author(s):  
Stephen Michael Town ◽  
Katherine C Wood ◽  
Katarina C Poole ◽  
Jennifer Kim Bizley

A central question in auditory neuroscience is to what extent brain regions are functionally specialized for processing specific sound features such as sound location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the causal contribution of auditory cortex to hearing in multiple contexts. Here we tested the role of auditory cortex in both spatial and non-spatial hearing. We reversibly inactivated the border between middle and posterior ectosylvian gyrus using cooling (n = 2) or optogenetics (n = 1) as ferrets discriminated vowel sounds in clean and noisy conditions. Animals with cooling loops were then retrained to localize noise bursts from multiple locations and retested with cooling. In both ferrets, cooling impaired sound localization and vowel discrimination in noise, but not discrimination in clean conditions. We also tested the effects of cooling on vowel discrimination in noise when vowel and noise were colocated or spatially separated. Here, cooling exaggerated deficits in discriminating vowels in colocated noise, resulting in larger performance benefits from spatial separation of sounds and thus stronger spatial release from masking during cortical inactivation. Together our results show that auditory cortex contributes to both spatial and non-spatial hearing, consistent with single-unit recordings in the same brain region. The deficits we observed did not reflect general impairments in hearing; rather, they emerged in more realistic behaviors that require the use of information about both sound location and identity.
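Spatial release from masking, as used above, is simply the performance benefit gained when target and masker are spatially separated rather than colocated. A minimal sketch with invented thresholds (in dB SNR), not the study's data:

```python
# Spatial release from masking (SRM): the improvement in speech-reception
# threshold (SRT) when a masker is moved away from the target.
# Lower SRT = better performance, so positive SRM = separation benefit.
def srm(colocated_srt_db, separated_srt_db):
    return colocated_srt_db - separated_srt_db

baseline = srm(colocated_srt_db=2.0, separated_srt_db=-4.0)  # 6 dB release
cooled   = srm(colocated_srt_db=6.0, separated_srt_db=-2.0)  # 8 dB release
# A larger SRM under cooling, driven mainly by worse colocated
# performance, mirrors the pattern the authors report.
```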


2021 ◽  
Vol 3 ◽  
Author(s):  
Deborah Vickers ◽  
Marina Salorio-Corbetto ◽  
Sandra Driver ◽  
Christine Rocca ◽  
Yuli Levtov ◽  
...  

Older children and teenagers with bilateral cochlear implants often have poor spatial hearing because they cannot fuse sounds from the two ears. This deficit jeopardizes speech and language development, education, and social well-being. The lack of protocols for fitting bilateral cochlear implants and of resources for spatial-hearing training contributes to these difficulties. Spatial hearing develops with bilateral experience, and a large body of research demonstrates that sound localisation can improve with training, underpinned by plasticity-driven changes in the auditory pathways. Generalizing training to non-trained auditory skills is best achieved by using a multi-modal (audio-visual) implementation and multi-domain training tasks (localisation, speech-in-noise, and spatial music). The goal of this work was to develop a package of virtual-reality games (BEARS, Both EARS) to train spatial hearing in young people (8–16 years) with bilateral cochlear implants using an action-research protocol. The protocol used formalized cycles in which participants trialled aspects of the BEARS suite, reflected on their experiences, and in turn informed changes in the game implementations. This participatory design used the stakeholder participants as co-creators. The cycles for each of the three domains (localisation, spatial speech-in-noise, and spatial music) were customized to focus on the elements that the stakeholder participants considered important. The participants agreed that the final games were appropriate and ready to be used by patients. The main areas of modification were: the variety of immersive scenarios, to cover the age range and participants' interests; the number of levels of complexity, to ensure small improvements were measurable; feedback and reward schemes, to ensure positive reinforcement; and an additional iPad implementation for those who had difficulties with the headsets due to age or balance issues.
The effectiveness of the BEARS training suite will be evaluated in a large-scale clinical trial to determine whether using the games leads to improvements in speech-in-noise performance, quality of life, perceived benefit, and cost utility. Such interventions allow patients to take control of their own management, reducing reliance on outpatient-based rehabilitation. For young people, a virtual-reality implementation is more engaging than traditional rehabilitation methods, and the participatory design used here has ensured that the BEARS games are relevant.


Author(s):  
Snandan Sharma ◽  
Waldo Nogueira ◽  
A. John van Opstal ◽  
Josef Chalupper ◽  
Lucas H. M. Mens ◽  
...  

Purpose Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility, and thus improve sound localization and spatial speech recognition. Method Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz). Results Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively), but no meaningful improvement in either localization performance (errors remained > 30°) or spatial speech recognition across participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition. Conclusions We speculate that limiting factors such as a persistent hearing asymmetry and a mismatch in spectral overlap prevent compression in bimodal users from improving sound localization.
Therefore, the benefit in spatial release from masking with compression is likely due to a shift of attention to the ear with the better signal-to-noise ratio, facilitated by compression, rather than to improved spatial selectivity. Supplemental Material https://doi.org/10.23641/asha.16869485
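The static compression described above can be pictured as a piecewise-linear frequency map: frequencies below the knee point pass through unchanged, while the excess above the knee is divided by a compression ratio. A sketch under that assumption (actual hearing-aid algorithms are proprietary and more elaborate, and the ratio here is illustrative):

```python
# Static frequency compression (assumed piecewise-linear form):
# frequencies at or below the knee are unchanged; above the knee,
# the excess is divided by the compression ratio.
def compress_frequency(f_hz, knee_hz=480.0, ratio=2.0):
    """Map an input frequency to its compressed output frequency."""
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio

compress_frequency(300.0)   # below the knee: unchanged, 300.0 Hz
compress_frequency(4480.0)  # 480 + 4000/2 = 2480.0 Hz
```

This shows why compression restores audibility: high-frequency energy that fell in the listener's dead region is relocated into an audible range, at the cost of distorting the spectral cues that localization depends on.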


Author(s):  
Nicole E. Corbin ◽  
Emily Buss ◽  
Lori J. Leibold

Purpose The purpose of this study was to characterize spatial hearing abilities of children with longstanding unilateral hearing loss (UHL). UHL was expected to negatively impact children's sound source localization and masked speech recognition, particularly when the target and masker were separated in space. Spatial release from masking (SRM) in the presence of a two-talker speech masker was expected to predict functional auditory performance as assessed by parent report. Method Participants were 5- to 14-year-olds with sensorineural or mixed UHL, age-matched children with normal hearing (NH), and adults with NH. Sound source localization was assessed on the horizontal plane (−90° to 90°), with noise that was either all-pass, low-pass, high-pass, or an unpredictable mixture. Speech recognition thresholds were measured in the sound field for sentences presented in two-talker speech or speech-shaped noise. Target speech was always presented from 0°; the masker was either colocated with the target or spatially separated at ±90°. Parents of children with UHL rated their children's functional auditory performance in everyday environments via questionnaire. Results Sound source localization was poorer for children with UHL than those with NH. Children with UHL also derived less SRM than those with NH, with increased masking for some conditions. Effects of UHL were larger in the two-talker than the noise masker, and SRM in two-talker speech increased with age for both groups of children. Children with UHL whose parents reported greater functional difficulties achieved less SRM when either masker was on the side of the better-hearing ear. Conclusions Children with UHL are clearly at a disadvantage compared with children with NH for both sound source localization and masked speech recognition with spatial separation. 
Parents' report of their children's real-world communication abilities suggests that spatial hearing plays an important role in outcomes for children with UHL.
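Horizontal-plane localization performance of the kind measured above is often summarized as a root-mean-square error between target and response azimuths. A minimal sketch with invented data, not this study's measurements:

```python
import math

# RMS localization error between loudspeaker (target) azimuths and a
# listener's response azimuths, both in degrees on the horizontal plane.
def rms_error(targets_deg, responses_deg):
    n = len(targets_deg)
    return math.sqrt(
        sum((t - r) ** 2 for t, r in zip(targets_deg, responses_deg)) / n
    )

targets   = [-90, -45, 0, 45, 90]
responses = [-60, -40, 5, 30, 60]   # invented responses with lateral errors
err = rms_error(targets, responses)  # ~20.4 degrees
```

Larger errors for lateral sources on the side of the poorer ear would be the expected pattern for a listener with UHL.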


2021 ◽  
Author(s):  
Stephen Michael Town ◽  
Jennifer Kim Bizley

The location of sounds can be described in multiple coordinate systems defined relative to ourselves or to the world around us. World-centered hearing is critical for a stable understanding of sound scenes, yet it is unclear whether this ability is unique to human listeners or generalizes to other species. Here, we establish novel behavioral tests to determine the coordinate systems in which non-human listeners (ferrets) can localize sounds. We found that ferrets could learn to discriminate sounds using either world-centered or head-centered sound location, as evidenced by their ability to discriminate locations in one coordinate system across wide variations in sound location in the alternative system. Using infrequent probe sounds to assess broader generalization of spatial hearing, we demonstrated that in both head-centered and world-centered localization, animals used continuous maps of auditory space to guide behavior. Single-trial responses of individual animals were sufficiently informative that we could model sound localization using speaker position in specific coordinate systems and accurately predict ferrets' actions in held-out data. Our results demonstrate that non-human listeners can localize sounds in multiple coordinate systems, including world-centered ones that require abstraction across traditional, head-centered sound localization cues.
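The head-centered versus world-centered dissociation above comes down to a change of reference frame: the same world location yields different head-centered angles as the animal turns. A sketch of that conversion under assumed conventions (azimuths in degrees, head pose measured in the same world frame as the speakers):

```python
# Convert a world-frame sound azimuth to a head-centered azimuth,
# given the direction the head is facing (both in degrees).
def head_centered_azimuth(world_azimuth, head_direction):
    """Sound angle relative to the head, wrapped to (-180, 180]."""
    a = (world_azimuth - head_direction) % 360.0
    return a - 360.0 if a > 180.0 else a

# A speaker fixed at world azimuth +90 degrees:
head_centered_azimuth(90.0, 0.0)    # head facing 0:   sound at +90
head_centered_azimuth(90.0, 180.0)  # head turned 180: sound at -90
```

A world-centered listener must respond the same way in both cases despite opposite head-centered cues, which is exactly the abstraction the probe trials test.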

