Spatial Speech-in-Noise Performance in Bimodal and Single-Sided Deaf Cochlear Implant Users

2019 ◽  
Vol 23 ◽  
pp. 233121651985831 ◽  
Author(s):  
Ben Williges ◽  
Thomas Wesarg ◽  
Lorenz Jung ◽  
Leontien I. Geven ◽  
Andreas Radeloff ◽  
...  

This study compared spatial speech-in-noise performance in two cochlear implant (CI) patient groups: bimodal listeners, who use a hearing aid contralaterally to support their impaired acoustic hearing, and listeners with contralateral normal hearing, i.e., who were single-sided deaf before implantation. Using a laboratory setting that controls for head movements and simulates spatial acoustic scenes, speech reception thresholds were measured for frontal speech in stationary noise presented from the front, the left, or the right. Spatial release from masking (SRM) was then extracted from speech reception thresholds for monaural and binaural listening. SRM was significantly lower in bimodal CI listeners than in single-sided deaf CI listeners. Within each listener group, the SRM extracted from monaural listening did not differ from the SRM extracted from binaural listening. In contrast, a normal-hearing control group showed a significant improvement in SRM when using two ears rather than one. Neither CI group showed a binaural summation effect; that is, their performance was not improved by using two devices instead of the best monaural device in each spatial scenario. The results confirm a "listening with the better ear" strategy in the two CI patient groups, where patients benefited from using two ears/devices instead of one by selectively attending to the better one. Which one is the better ear, however, depends on the spatial scenario and on the individual configuration of hearing loss.
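The SRM measure described above is a simple difference of speech reception thresholds (SRTs). A minimal sketch, with illustrative SRT values that are not data from the study:

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Spatial release from masking (SRM): the improvement, in dB, in
    speech reception threshold when the noise source moves away from
    the frontal speech source. Lower SRT = better performance, so
    SRM = SRT(noise at front) - SRT(noise at the side)."""
    return srt_colocated_db - srt_separated_db

# Illustrative values only: noise co-located with speech vs. noise at one side
srm = spatial_release_from_masking(srt_colocated_db=-2.0, srt_separated_db=-8.0)
print(srm)  # 6.0 dB of spatial release
```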

2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Abstract
Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics.
Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance.
Results: In general, there was good agreement among the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise.
Conclusions: These data indicate a neurophysiological correlate of SiN performance that can be captured relatively easily in the clinic, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. The results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
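The analysis described above is an ordinary multiple linear regression with an ERP amplitude and demographic/hearing covariates as predictors. A minimal sketch with entirely simulated data; the variable names and coefficient values are hypothetical, not the study's:

```python
import numpy as np

# Simulate a cohort the size reported in the abstract (n = 114),
# with hypothetical predictors: an N1-P2 amplitude at Cz, age, and
# a low-frequency hearing threshold.
rng = np.random.default_rng(0)
n = 114
n1p2_amplitude = rng.normal(3.0, 1.0, n)      # ERP amplitude (arbitrary units)
age = rng.uniform(40, 80, n)                  # years
low_freq_threshold = rng.uniform(20, 90, n)   # dB HL

# Simulated outcome: a word-in-noise score with a positive ERP contribution
score = (50 + 5 * n1p2_amplitude - 0.1 * age
         - 0.05 * low_freq_threshold + rng.normal(0, 2, n))

# Design matrix with an intercept column; solve ordinary least squares
X = np.column_stack([np.ones(n), n1p2_amplitude, age, low_freq_threshold])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(coef)  # intercept and slopes; the recovered ERP slope is near 5
```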


2021 ◽  
Vol 64 (2) ◽  
pp. 348-358
Author(s):  
Jing Shen

Purpose Dynamic pitch, defined as the variation in fundamental frequency, is an acoustic cue that aids speech perception in noise. This study examined the effects of strengthened and weakened dynamic pitch cues on older listeners' speech perception in noise, and how these effects were modulated by individual factors including spectral perception ability. Method The experiment measured speech reception thresholds in noise in both younger listeners with normal hearing and older listeners whose hearing status ranged from near-normal to mild-to-moderate sensorineural hearing loss. The pitch contours of the target speech were manipulated to create four levels of dynamic pitch strength: weakened, original, mildly strengthened, and strengthened. Listeners' spectral perception ability was measured using tests of spectral ripple and frequency modulation discrimination. Results Both younger and older listeners performed worse with manipulated dynamic pitch cues than with original dynamic pitch. The effects of dynamic pitch on older listeners' speech recognition were associated with their age but not with their perception of spectral information; the relatively younger of the older listeners were more negatively affected by the dynamic pitch manipulations. Conclusions The findings suggest that, compared with original dynamic pitch, the current pitch manipulation strategy is detrimental to older listeners' perception of speech in noise. While the influence of age on the effects of dynamic pitch is likely due to age-related declines in pitch perception, the spectral measures used in this study were not strong predictors of dynamic pitch effects. Taken together, these results indicate that next steps in this line of work should focus on how to manipulate acoustic cues in speech to improve speech perception in noise for older listeners.


2019 ◽  
Author(s):  
Mark D. Fletcher ◽  
Amatullah Hadeedi ◽  
Tobias Goehring ◽  
Sean R Mills

Cochlear implant (CI) users receive only limited sound information through their implant, which means that they struggle to understand speech in noisy environments. Recent work has suggested that combining the electrical signal from the CI with a haptic signal that provides crucial missing sound information ("electro-haptic stimulation"; EHS) could improve speech-in-noise performance. The aim of the current study was to test whether EHS could enhance speech-in-noise performance in CI users using: (1) a tactile signal derived using an algorithm that could be applied in real time, (2) a stimulation site appropriate for a real-world application, and (3) a tactile signal that could readily be produced by a compact, portable device. We measured speech intelligibility in multi-talker noise with and without vibro-tactile stimulation of the wrist in CI users, before and after a short training regime. No effect of EHS was found before training, but after training EHS was found to improve the number of words correctly identified by an average of 8.3 percentage points, with some users improving by more than 20 percentage points. Our approach could offer an inexpensive and non-invasive means of improving speech-in-noise performance in CI users.


2021 ◽  
Vol 25 (1) ◽  
pp. 22-26
Author(s):  
Raksha Amemane ◽  
Archana Gundmi ◽  
Kishan Madikeri Mohan

Background and Objectives: Music listening has a concomitant effect on the structural and functional organization of the brain, aiding relaxation, mind training, and neural strengthening. Accordingly, the present study aimed to determine the effect of Carnatic music listening training (MLT) on speech-in-noise performance in adults.
Subjects and Methods: A total of 28 participants (40-70 years) were recruited and, in a randomized controlled trial, divided into an intervention group and a control group. The intervention group underwent short-term MLT. The Quick Speech-in-Noise test in Kannada was used as the outcome measure.
Results: Results were analysed using mixed analysis of variance (ANOVA) and repeated-measures ANOVA. There was a significant difference between the intervention and control groups post MLT. The second comparison revealed no statistically significant difference between post-training and follow-up scores in either group.
Conclusions: Short-term MLT improved speech-in-noise performance and can hence be used as a viable tool in formal auditory training for better prognosis.


2021 ◽  
Vol 32 (08) ◽  
pp. 478-486
Author(s):  
Lisa G. Potts ◽  
Soo Jang ◽  
Cory L. Hillis

Abstract Background For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer than recognition in quiet. Directional processing improves performance in noise and can be activated automatically based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially with the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction. Purpose This study used CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment and determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming. Study Sample Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included. Data Collection and Analysis Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array, with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and omnidirectional) with participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed. Results Automatic directionality (SCAN) yielded the best speech-in-noise performance, although it was not significantly better than Beam. Omnidirectional performance was significantly poorer than that of the three other directional options. Participants varied in which of the four directional options gave their best individual performance, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option. Conclusion The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. For recipients not motivated to try different programs, however, automatic directionality is an appropriate everyday processing option.


2019 ◽  
Vol 23 ◽  
pp. 233121651983149 ◽  
Author(s):  
Wendy B. Potts ◽  
Lakshmish Ramanna ◽  
Trevor Perry ◽  
Christopher J. Long

This study examined different methods of preserving interaural level difference (ILD) cues for bilateral cochlear implant (BiCI) recipients. One possible source of ILD distortion is automatic gain control (AGC). Localization accuracy of BiCI recipients was examined using default versus increased AGC thresholds and linked versus independent AGCs. In addition, speech reception in noise was assessed using linked versus independent AGCs and with Autosensitivity™ Control enabled and disabled. Subjective information about maps with linked and independent AGCs was also collected via a diary and questionnaire during a take-home experience. Localization accuracy improved in both the increased-AGC-threshold and linked-AGC conditions. Increasing the AGC threshold yielded a 4° improvement in root mean square error averaged across all speaker locations. With linked AGCs, BiCI participants showed an 8° improvement across all speaker locations and a 19° improvement at the speaker location most affected by the AGC. Speech reception threshold in noise improved by an average of 2.5 dB with linked versus independent AGCs. In addition, the effect of linked AGCs on speech in noise was compared with that of Autosensitivity™ Control. The Speech, Spatial, and Qualities of Hearing Scale-12 comparative survey showed an improvement when using maps with linked AGCs. These findings support the hypothesis that ILD cues may be preserved by increasing the AGC threshold or linking AGCs.
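The localization metric used above, root mean square error across speaker locations, can be sketched as follows; the azimuth values here are hypothetical, not the study's data:

```python
import math

def rms_localization_error(responses_deg, targets_deg):
    """Root-mean-square localization error in degrees, averaged across
    speaker locations. Improvements like the 4-19 degree changes
    reported above are differences between two such RMS values."""
    errs = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(errs) / len(errs))

# Hypothetical example: perceived vs. actual loudspeaker azimuths
targets = [-60, -30, 0, 30, 60]
responses = [-45, -25, 5, 40, 50]
print(round(rms_localization_error(responses, targets), 1))  # 9.7 degrees
```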


2013 ◽  
Vol 24 (09) ◽  
pp. 879-890 ◽  
Author(s):  
David Morris ◽  
Lennart Magnusson ◽  
Andrew Faulkner ◽  
Radoslava Jönsson ◽  
Holger Juul

Background: The accurate perception of prosody assists a listener in deriving meaning from natural speech. Few studies have addressed the ability of cochlear implant (CI) listeners to perceive the brief duration prosodic cues involved in contrastive vowel length, word stress, and compound word and phrase identification. Purpose: To compare performance in the perception of brief duration prosodic contrasts by CI participants and a control group of normal hearing participants. This study investigated the ability to perceive these cues in quiet and noise conditions, and to identify auditory perceptual factors that might predict prosodic perception in the CI group. Prosodic perception was studied both in noise and quiet because noise is a pervasive feature of everyday environments. Research Design: A quasi-experimental correlation design was employed. Study Sample: Twenty-one CI recipients participated along with a control group of 10 normal hearing participants. All CI participants were unilaterally implanted adults who had considerable experience with oral language prior to implantation. Data Collection and Analysis: Speech identification testing measured the participants' ability to identify word stress, vowel length, and compound words or phrases all of which were presented with minimal-pair response choices. Tests were performed in quiet and in speech-spectrum shaped noise at a 10 dB signal-to-noise ratio. Also, discrimination thresholds for four acoustic properties of a synthetic vowel were measured as possible predictors of prosodic perception. Testing was carried out during one session, and participants used their clinically assigned speech processors. Results: The CI group could not identify brief prosodic cues as well as the control group, and their performance decreased significantly in the noise condition. Regression analysis showed that the discrimination of intensity predicted performance on the prosodic tasks. 
Performance also declined among the older participants, so age emerged as a predictor as well. Conclusions: This study provides a portrayal of CI recipients' ability to perceive brief prosodic cues, which is of interest both in preparing rehabilitation materials used in training and in developing realistic expectations for potential CI candidates.


2018 ◽  
Vol 27 (1) ◽  
pp. 95-103
Author(s):  
Adriana Goyette ◽  
Jeff Crukley ◽  
Jason Galster

Purpose Directional microphone systems are typically used to improve hearing aid users' understanding of speech in noise. However, directional microphones also increase internal hearing aid noise. The purpose of this study was to investigate how varying directional microphone bandwidth affected listening preference and speech-in-noise performance. Method Ten participants with normal hearing and 10 participants with hearing impairment compared internal noise levels between hearing aid memories with 4 different microphone modes: omnidirectional, full directional, high-frequency directionality with directional processing above 900 Hz, and high-frequency directionality with directional processing above 2000 Hz. Speech-in-noise performance was measured with each memory for the participants with hearing impairment. Results Participants with normal hearing preferred memories with less directional bandwidth. Participants with hearing impairment also tended to prefer the memories with less directional bandwidth. However, the majority of participants with hearing impairment did not indicate a preference between the omnidirectional and directional-above-2000-Hz memories. Average speech-in-noise performance improved with increasing directional bandwidth. Conclusions Most participants preferred memories with less directional bandwidth in quiet. Participants with hearing impairment indicated no difference in preference between the directional-above-2000-Hz and omnidirectional memories. Speech recognition in noise improved with increasing directional bandwidth.


1977 ◽  
Vol 42 (1) ◽  
pp. 60-64 ◽  
Author(s):  
Randall C. Beattie ◽  
Brad J. Edgerton ◽  
Dion V. Svihovec

Articulation functions were generated on a normal-hearing population with the Auditec of St. Louis cassette recordings of the NU-6 and CID W-22 speech discrimination tests. The two tests behaved similarly, yielding slopes of about 4.4%/dB, and each gave a speech discrimination score of approximately 95% at 32 dB SL. Speech reception thresholds were obtained with monitored live voice and showed good test-retest consistency. Speech thresholds were about 9 dB better than the ANSI (1969) specifications.
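The slope and anchor point reported above imply a simple linear approximation of the articulation function near its steep segment. A minimal sketch under those assumptions, clamped to 0-100% (a linearized illustration, not the published function):

```python
def articulation_score(level_db_sl, slope_pct_per_db=4.4,
                       ref_level_db_sl=32, ref_score_pct=95):
    """Linear approximation of the articulation function described
    above: roughly 4.4 %/dB, reaching about 95% at 32 dB SL.
    Scores are clamped to the 0-100% range."""
    score = ref_score_pct + slope_pct_per_db * (level_db_sl - ref_level_db_sl)
    return max(0.0, min(100.0, score))

print(articulation_score(32))  # 95.0 at the reference level
print(articulation_score(22))  # 51.0, ten dB lower on the linear segment
```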

