Evaluation of Automatic Directional Processing with Cochlear Implant Recipients

2021 ◽  
Vol 32 (08) ◽  
pp. 478-486
Author(s):  
Lisa G. Potts ◽  
Soo Jang ◽  
Cory L. Hillis

Background: For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer than recognition in quiet. Directional processing improves performance in noise and can be activated automatically based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially while utilizing the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction.

Purpose: This study used CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment and determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming.

Study Sample: Twenty-six unilateral and seven bilateral CI recipients, with a mean age of 66 years and approximately 4 years of CI experience, were included.

Data Collection and Analysis: Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array, with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and omnidirectional), with participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed.

Results: Automatic directionality (SCAN) produced the best speech-in-noise performance, although it was not significantly better than Beam. Omnidirectional performance was significantly poorer than the three other directional options. Participants varied in which of the four directional options yielded their best performance, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option.

Conclusion: The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. For recipients who are not motivated to trial different programs, however, automatic directionality is an appropriate everyday processing option.

2012 ◽  
Vol 23 (05) ◽  
pp. 302-312 ◽  
Author(s):  
Jacquelyn Baudhuin ◽  
Jamie Cadieux ◽  
Jill B. Firszt ◽  
Ruth M. Reeder ◽  
Jerrica L. Maxson

Background: Cochlear implants provide access to soft intensity sounds and therefore improved audibility for children with severe-to-profound hearing loss. Speech processor programming parameters, such as threshold (T-level), input dynamic range (IDR), and microphone sensitivity, contribute to the recipient's program and influence audibility. When soundfield thresholds obtained through the speech processor are elevated, programming parameters can be modified to improve soft-sound detection. Adult recipients show improved detection of low-level sounds when T-levels are raised and better speech understanding in quiet when wider IDRs are used. Little is known about the effects of parameter settings on detection and speech recognition in children using today's cochlear implant technology.

Purpose: The overall study aim was to assess optimal T-level, IDR, and sensitivity settings in pediatric recipients of the Advanced Bionics cochlear implant.

Research Design: Two experiments were conducted. Experiment 1 examined the effects of two T-level settings on soundfield thresholds and detection of the Ling 6 sounds: one program set T-levels at 10% of most comfortable levels (M-levels), and another set them 10 current units (CUs) below the level judged as “soft.” Experiment 2 examined the effects of IDR and sensitivity settings on speech recognition in quiet and in noise.

Study Sample: Participants were 11 children, 7–17 yr of age (mean 11.3), implanted with the Advanced Bionics High Resolution 90K or CII cochlear implant system, who had speech recognition scores of 20% or greater on a monosyllabic word test.

Data Collection and Analysis: The two T-level programs were compared for detection of the Ling sounds and frequency-modulated (FM) tones. Four IDR/sensitivity programs (50/0, 50/10, 70/0, 70/10) were compared using Ling and FM tone detection thresholds, CNC (consonant-nucleus-consonant) words at 50 dB SPL, and Hearing in Noise Test for Children (HINT-C) sentences at 65 dB SPL in four-talker babble (+8 dB signal-to-noise ratio). Outcomes were analyzed using paired t-tests and a mixed-model repeated-measures analysis of variance (ANOVA).

Results: T-levels set 10 CUs below “soft” resulted in significantly lower detection thresholds for all six Ling sounds and for FM tones at 250, 1000, 3000, 4000, and 6000 Hz. Among the IDR/sensitivity programs, a 50 dB IDR with a sensitivity setting of 0 showed significantly poorer thresholds for low-frequency FM tones and voiced Ling sounds. Group mean scores for CNC words in quiet and HINT-C sentences in noise showed no significant differences across IDR/sensitivity settings. Individual data, however, showed significant differences between IDR/sensitivity programs in noise, and the optimal program differed across participants.

Conclusions: In pediatric recipients of the Advanced Bionics cochlear implant device, manually setting T-levels with ascending loudness judgments should be considered when possible, or when low-level sounds are inaudible. Study findings confirm the need to determine program settings on an individual basis, as well as the importance of speech recognition verification measures in both quiet and noise. Clinical guidelines are suggested for selecting programming parameters in both young and older children.
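The two T-level rules compared in Experiment 1 can be stated as simple formulas. The example values below are hypothetical current-unit levels chosen for illustration, not data from the study:

```python
def t_level_from_m(m_level_cu, fraction=0.10):
    """Program 1: T-level set at a fixed fraction (here 10%) of the
    most comfortable level (M-level), in current units (CUs)."""
    return fraction * m_level_cu

def t_level_below_soft(soft_cu, offset_cu=10):
    """Program 2: T-level set a fixed offset (here 10 CUs) below the
    level the child judged as 'soft'."""
    return soft_cu - offset_cu

# Hypothetical example: M-level of 200 CUs, 'soft' judged at 150 CUs.
print(t_level_from_m(200))       # 20.0 CUs
print(t_level_below_soft(150))   # 140 CUs
```

For these illustrative values, the ascending-loudness rule places the T-level far higher than the percentage-of-M rule, which is consistent with the abstract's finding of lower (better) detection thresholds with the "10 CUs below soft" program.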


2020 ◽  
Vol 24 ◽  
pp. 233121652094897
Author(s):  
Dimitar Spirrov ◽  
Eugen Kludt ◽  
Eline Verschueren ◽  
Andreas Büchner ◽  
Tom Francart

Automatic gain control (AGC) compresses the wide dynamic range of everyday sounds into the narrow dynamic range of hearing-impaired listeners. Setting the AGC parameters (time constants and knee points) is an important part of fitting hearing devices. These parameters influence not only the overall loudness elicited by the devices but can also affect the recognition of speech in noise. We investigated whether matching the knee points and time constants of the AGC between the cochlear implant and the hearing aid of bimodal listeners would improve speech recognition in noise. We recruited 18 bimodal listeners and provided all of them with the same cochlear-implant processor and hearing aid. We compared the matched AGCs with the devices' default settings, in which the AGCs were mismatched; as a baseline, we also included a condition with the mismatched AGCs of the participants' own devices. We tested speech recognition in quiet and in noise presented from different directions. The time constants affected outcomes in the monaural testing condition with the cochlear implant alone, and there were no specific binaural performance differences between the two AGC settings. Performance was therefore driven mostly by the cochlear-implant-alone condition.
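The AGC components named in the abstract can be sketched in a few lines: a static compression curve governed by a knee point, and an envelope follower governed by attack/release time constants. The knee point, compression ratio, and time constants below are illustrative assumptions (the abstract does not report the study's values):

```python
import math

def agc_gain_db(env_db, knee_db=55.0, ratio=3.0):
    """Static compression curve: unity gain below the knee point,
    ratio:1 compression above it (gain in dB, negative = attenuation)."""
    if env_db <= knee_db:
        return 0.0
    return -(env_db - knee_db) * (1.0 - 1.0 / ratio)

def smooth_envelope(levels_db, attack_s, release_s, step_s=0.001):
    """One-pole envelope follower with separate attack and release
    time constants, applied to a per-millisecond level track (dB).
    The attack/release choice is what 'matching time constants'
    between two devices would align."""
    alpha_attack = math.exp(-step_s / attack_s)
    alpha_release = math.exp(-step_s / release_s)
    env = levels_db[0]
    track = []
    for level in levels_db:
        # Rising input uses the (fast) attack constant, falling input
        # uses the (slow) release constant.
        alpha = alpha_attack if level > env else alpha_release
        env = alpha * env + (1.0 - alpha) * level
        track.append(env)
    return track
```

If the two devices of a bimodal listener use different knee points or time constants, the same sound receives different momentary gains at the two ears, which is the motivation for the matched-AGC condition tested in the study.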


2016 ◽  
Vol 21 (Suppl. 1) ◽  
pp. 48-54 ◽  
Author(s):  
Feike de Graaff ◽  
Elke Huysmans ◽  
Obaid ur Rehman Qazi ◽  
Filiep J. Vanpoucke ◽  
Paul Merkus ◽  
...  

The number of cochlear implant (CI) users increases annually, adding to the workload of implant centers for ongoing patient management and evaluation. Remote testing of speech recognition could save time for both the implant centers and the patients. This study addresses two methodological challenges we encountered in developing a remote speech recognition tool for adult CI users. First, we examined whether speech-recognition-in-noise performance differed when the steady-state masking noise was presented throughout the test (i.e., continuously) rather than, as in standard clinical evaluation, stopping after each stimulus (i.e., discontinuously). Second, we used a direct coupling, via a personal audio cable, between the audio port of a tablet computer and the accessory input of the sound processor. The setup was calibrated so that stimuli could be presented at a predefined sound level, and differences in frequency response between the audio cable and the microphones were investigated.
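The calibration step described above amounts to computing a dB offset and applying it digitally. The function and level names below are an illustrative sketch, not the study's actual procedure:

```python
def calibration_gain_db(target_level_db, measured_level_db):
    """Digital gain (dB) to apply so that a stimulus played through the
    direct audio input reaches the predefined presentation level,
    given the level measured during calibration."""
    return target_level_db - measured_level_db

def apply_gain(samples, gain_db):
    """Scale audio samples by a dB gain (amplitude factor 10^(dB/20))."""
    factor = 10.0 ** (gain_db / 20.0)
    return [s * factor for s in samples]

# Hypothetical calibration: target 65 dB, measured 71 dB -> attenuate 6 dB.
gain = calibration_gain_db(65.0, 71.0)
quieter = apply_gain([1.0, -1.0], gain)
```

Because the direct cable bypasses the processor microphones, a frequency-response check (the study's final point) is still needed: a flat broadband gain like this one cannot correct for spectral differences between the two input paths.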


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Mark D. Fletcher ◽  
Amatullah Hadeedi ◽  
Tobias Goehring ◽  
Sean R. Mills

Cochlear implant (CI) users receive only limited sound information through their implant, which means that they struggle to understand speech in noisy environments. Recent work has suggested that combining the electrical signal from the CI with a haptic signal that provides crucial missing sound information (“electro-haptic stimulation”; EHS) could improve speech-in-noise performance. The aim of the current study was to test whether EHS could enhance speech-in-noise performance in CI users using: (1) a tactile signal derived using an algorithm that could be applied in real time, (2) a stimulation site appropriate for a real-world application, and (3) a tactile signal that could readily be produced by a compact, portable device. We measured speech intelligibility in multi-talker noise with and without vibro-tactile stimulation of the wrist in CI users, before and after a short training regime. No effect of EHS was found before training, but after training EHS improved the number of words correctly identified by an average of 8.3 percentage points, with some users improving by more than 20 percentage points. Our approach could offer an inexpensive and non-invasive means of improving speech-in-noise performance in CI users.
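The abstract does not specify the tactile-signal algorithm, but a common real-time approach in this literature is to extract a slowly varying amplitude envelope from the audio and use it to drive the actuator. The sketch below is a generic assumption of that kind, with illustrative sample rate and cutoff, not the authors' method:

```python
import math

def tactile_envelope(frame, sample_rate=16000, cutoff_hz=30.0):
    """Extract a slowly varying amplitude envelope from one audio frame
    (a list of floats in [-1, 1]) by full-wave rectification followed
    by a one-pole low-pass filter. The output could drive a
    vibrotactile actuator at the wrist sample by sample, so the
    scheme is causal and real-time capable."""
    alpha = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    env = 0.0
    envelope = []
    for x in frame:
        env = alpha * env + (1.0 - alpha) * abs(x)  # rectify, then smooth
        envelope.append(env)
    return envelope
```

A low cutoff (tens of Hz) keeps only the envelope fluctuations the skin can follow, which is why such signals suit a compact, portable vibrating device.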


Author(s):  
Bruna S. Mussoi

Purpose: Music training has been proposed as a possible tool for auditory training in older adults, as it may improve both auditory and cognitive skills. However, the evidence to support such benefits is mixed. The goal of this study was to determine the differential effects of lifelong musical training and working memory on speech recognition in noise in older adults.

Method: A total of 31 musicians and nonmusicians aged 65–78 years took part in this cross-sectional study. Participants had a normal pure-tone average, with most having high-frequency hearing loss. Working memory (memory capacity) was assessed with the backward Digit Span test, and speech recognition in noise was assessed with three clinical tests (Quick Speech in Noise, Hearing in Noise Test, and Revised Speech Perception in Noise).

Results: Findings from this sample of older adults indicate that neither music training nor working memory was associated with differences on the speech-recognition-in-noise measures used in this study. Similarly, duration of music training was not associated with speech-in-noise recognition.

Conclusions: Results from this study do not support the hypothesis that lifelong music training benefits speech recognition in noise. Similarly, an effect of working memory (memory capacity) was not apparent. While these findings may be related to the relatively small sample size, results across previous studies that investigated these effects have also been mixed. Prospective randomized music training studies may be able to better control for variability in outcomes associated with pre-existing and music-training factors, as well as to examine the differential impact of music training and working memory on speech-in-noise recognition in older adults.


2019 ◽  
Vol 23 ◽  
pp. 233121651985831 ◽  
Author(s):  
Ben Williges ◽  
Thomas Wesarg ◽  
Lorenz Jung ◽  
Leontien I. Geven ◽  
Andreas Radeloff ◽  
...  

This study compared spatial speech-in-noise performance in two cochlear implant (CI) patient groups: bimodal listeners, who use a hearing aid contralaterally to support their impaired acoustic hearing, and listeners with contralateral normal hearing, i.e., who were single-sided deaf before implantation. Using a laboratory setting that controls for head movements and that simulates spatial acoustic scenes, speech reception thresholds were measured for frontal speech-in-stationary noise from the front, the left, or the right side. Spatial release from masking (SRM) was then extracted from speech reception thresholds for monaural and binaural listening. SRM was found to be significantly lower in bimodal CI than in CI single-sided deaf listeners. Within each listener group, the SRM extracted from monaural listening did not differ from the SRM extracted from binaural listening. In contrast, a normal-hearing control group showed a significant improvement in SRM when using two ears in comparison to one. Neither CI group showed a binaural summation effect; that is, their performance was not improved by using two devices instead of the best monaural device in each spatial scenario. The results confirm a “listening with the better ear” strategy in the two CI patient groups, where patients benefited from using two ears/devices instead of one by selectively attending to the better one. Which one is the better ear, however, depends on the spatial scenario and on the individual configuration of hearing loss.
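The two derived measures in this abstract are simple differences between speech reception thresholds (SRTs). The sketch below states them explicitly; the example SRT values are hypothetical, not data from the study:

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Spatial release from masking (SRM): the improvement in the
    speech reception threshold (dB SNR) when the noise is moved away
    from the frontal speech source. Lower SRTs are better, so a
    positive SRM means separation helped."""
    return srt_colocated_db - srt_separated_db

def binaural_summation(best_monaural_srt_db, binaural_srt_db):
    """Binaural summation effect: the benefit of using two ears/devices
    over the better single device in the same spatial scenario
    (positive = summation benefit). The abstract reports no such
    effect in either CI group."""
    return best_monaural_srt_db - binaural_srt_db

# Hypothetical listener: SRT of -2 dB SNR with noise at the front,
# -8 dB SNR with noise moved to the side -> SRM of 6 dB.
srm = spatial_release_from_masking(-2.0, -8.0)
```

Under the "listening with the better ear" strategy described above, the binaural SRT tracks the better monaural SRT in each scenario, so this summation measure stays near zero even though having two ears is still useful for selecting the better one.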


2004 ◽  
Vol 115 (4) ◽  
pp. 1729-1735 ◽  
Author(s):  
Christopher W. Turner ◽  
Bruce J. Gantz ◽  
Corina Vidal ◽  
Amy Behrens ◽  
Belinda A. Henry
