Effects of Bilateral Automatic Gain Control Synchronization in Cochlear Implants With and Without Head Movements: Sound Source Localization in the Frontal Hemifield

Author(s):  
M. Torben Pastore ◽  
Kathryn R. Pulling ◽  
Chen Chen ◽  
William A. Yost ◽  
Michael F. Dorman

Purpose: For bilaterally implanted patients, the automatic gain control (AGC) in the left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads.

Method: Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ±30° during sound presentation.

Results: In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs.

Conclusion: Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients.

Supplemental Material: https://doi.org/10.23641/asha.14681412
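A toy numerical sketch (my own illustration, not the study's actual processing chain) of why unlinked compression distorts interaural level differences: with independent AGCs, the louder ear is compressed more than the quieter ear, which shrinks the ILD, whereas a linked AGC applies one shared gain to both ears and leaves the ILD intact. The threshold and compression ratio below are arbitrary example values, not CI processor settings.

```python
def agc_gain_db(level_db, threshold_db=60.0, ratio=3.0):
    """Static compression gain (dB): above threshold, output rises
    only 1 dB for every `ratio` dB of input. Example values only."""
    if level_db <= threshold_db:
        return 0.0
    return (threshold_db + (level_db - threshold_db) / ratio) - level_db

# A source off to the right: the right ear is ~10 dB louder (the ILD cue).
left_db, right_db = 65.0, 75.0

# Independent AGCs: each processor compresses based on its own input level,
# so the louder ear receives more attenuation and the ILD is reduced.
ild_independent = (right_db + agc_gain_db(right_db)) - (left_db + agc_gain_db(left_db))

# Linked AGCs: both processors apply the same gain (here, driven by the
# louder ear), so the level difference between ears is preserved.
shared_gain = agc_gain_db(max(left_db, right_db))
ild_linked = (right_db + shared_gain) - (left_db + shared_gain)

print(round(ild_independent, 2), round(ild_linked, 2))  # → 3.33 10.0
```

With these example settings, independent compression shrinks the 10 dB ILD to about 3.3 dB, while the linked gain preserves it exactly.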

Ear & Hearing ◽  
2020 ◽  
Vol 41 (6) ◽  
pp. 1660-1674
Author(s):  
M. Torben Pastore ◽  
Sarah J. Natale ◽  
Colton Clayton ◽  
Michael F. Dorman ◽  
William A. Yost ◽  
...  

Author(s):  
Seung Seob Yeom ◽  
Jong Suk Choi ◽  
Yoon Seob Lim ◽  
Min-young Park ◽  
Munsang Kim

2014 ◽  
Vol 35 (6) ◽  
pp. 633-640 ◽  
Author(s):  
Michael F. Dorman ◽  
Louise Loiselle ◽  
Josh Stohl ◽  
William A. Yost ◽  
Anthony Spahr ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 532
Author(s):  
Henglin Pu ◽  
Chao Cai ◽  
Menglan Hu ◽  
Tianping Deng ◽  
Rong Zheng ◽  
...  

Multiple blind sound source localization is a key technology for a myriad of applications such as robotic navigation and indoor localization. However, existing solutions can only locate a few sound sources simultaneously, due to the limitation imposed by the number of microphones in an array. To this end, this paper proposes a novel multiple blind sound source localization algorithm using Source seParation and BeamForming (SPBF). Our algorithm overcomes the limitations of existing solutions and can locate more blind sources than the number of microphones in the array. Specifically, we propose a novel microphone layout that enables salient multiple-source separation while still preserving the sources' arrival time information. We then perform source localization via beamforming on each demixed source. This design minimizes mutual interference between sound sources, thereby enabling finer angle of arrival (AoA) estimation. To further enhance localization performance, we design a new spectral weighting function that improves the signal-to-noise ratio, allowing a relatively narrow beam and thus finer AoA estimation. Simulation experiments under typical indoor conditions demonstrate a maximum error of only 4° even with up to 14 sources.
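After separation, the per-source localization step comes down to a beamformed angle scan. Below is a minimal two-microphone delay-and-sum sketch of that last step (my own illustration with assumed parameter values, not the SPBF implementation or its proposed microphone layout): steering delays are hypothesized across a 1° grid, and the angle that maximizes output power is taken as the AoA.

```python
import numpy as np

C = 343.0      # speed of sound (m/s)
FS = 48_000    # sample rate (Hz)
D = 0.2        # microphone spacing (m); example value

def simulate_pair(src_deg, f=1000.0, n=4800):
    """Far-field tone at src_deg: mic 2 hears mic 1 delayed by D*sin(theta)/C."""
    t = np.arange(n) / FS
    tau = D * np.sin(np.radians(src_deg)) / C
    return np.sin(2 * np.pi * f * t), np.sin(2 * np.pi * f * (t - tau))

def delay_and_sum_aoa(m1, m2, grid=range(-90, 91)):
    """Steer the pair across candidate angles; output power peaks at the true AoA."""
    t = np.arange(len(m1)) / FS
    powers = []
    for deg in grid:
        tau = D * np.sin(np.radians(deg)) / C
        m2_aligned = np.interp(t + tau, t, m2)     # undo the hypothesized delay
        beam = m1 + m2_aligned
        powers.append(np.mean(beam[50:-50] ** 2))  # trim edge-interpolation samples
    return list(grid)[int(np.argmax(powers))]

m1, m2 = simulate_pair(30.0)
print(delay_and_sum_aoa(m1, m2))  # → 30
```

When the hypothesized steering delay matches the true propagation delay, the two channels add coherently and the beam power is maximal; the scan therefore recovers the simulated 30° bearing to within the 1° grid.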

