Cortical Dynamics of Speech-in-Noise Processing with Noise Reduction in Hearing Aids

2020 ◽  
Author(s):  
Subong Kim


2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those available resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fitted with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong. Study Sample: 18 hearing-impaired older adults. Results: Noise reduction had no effect on the ability to multitask or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may not improve performance on all tasks.


2020 ◽  
Vol 31 (01) ◽  
pp. 017-029
Author(s):  
Paul Reinhart ◽  
Pavel Zahorik ◽  
Pamela Souza

Digital noise reduction (DNR) processing is used in hearing aids to enhance perception in noise by classifying and suppressing the noise acoustics. However, the efficacy of DNR processing is not known under reverberant conditions, where the speech-in-noise acoustics are further degraded by reverberation. The purpose of this study was to investigate acoustic and perceptual effects of DNR processing across a range of reverberant conditions for individuals with hearing impairment. The study used an experimental design to investigate the effects of varying reverberation on speech in noise processed with DNR. Twenty-six listeners with mild-to-moderate sensorineural hearing impairment participated. Speech stimuli were combined with unmodulated broadband noise at several signal-to-noise ratios (SNRs). A range of reverberant conditions with realistic parameters was simulated, as well as an anechoic control condition without reverberation. Reverberant speech-in-noise signals were processed using a spectral subtraction DNR simulation. Signals were acoustically analyzed using a phase inversion technique to quantify the improvement in SNR resulting from DNR processing. Sentence intelligibility and subjective ratings of listening effort, speech naturalness, and background noise comfort were examined with and without DNR processing across the conditions. Improvement in SNR was greatest in the anechoic control condition and decreased as the ratio of direct to reverberant energy decreased. There was no significant effect of DNR processing on speech intelligibility in the anechoic control condition, but there was a significant decrease in speech intelligibility with DNR processing in all of the reverberant conditions. Subjectively, listeners reported greater listening effort and lower speech naturalness with DNR processing in some of the reverberant conditions. Listeners reported higher background noise comfort with DNR processing only in the anechoic control condition. Results suggest that reverberation degrades spectral subtraction DNR processing, reducing its ability to suppress noise without distorting the speech acoustics. Overall, DNR processing may be most beneficial in environments with little reverberation, and its use in highly reverberant environments may actually produce adverse perceptual effects. Further research is warranted using commercial hearing aids in realistic reverberant environments.
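As an illustration of the phase-inversion analysis mentioned above, the sketch below separates processed speech and noise from two DNR outputs, one obtained with the noise waveform phase-inverted, and computes the output SNR. This is a minimal sketch of the general technique in Python; the function name and NumPy usage are assumptions, not the authors' implementation.

```python
import numpy as np

def output_snr_phase_inversion(out_plus, out_minus):
    """Estimate output SNR (dB) from two processed recordings:
    out_plus  = output for (speech + noise)
    out_minus = output for (speech - noise), i.e., the noise phase-inverted.
    Summing the two recovers the processed speech; differencing recovers the noise."""
    out_plus = np.asarray(out_plus, dtype=float)
    out_minus = np.asarray(out_minus, dtype=float)
    n = min(len(out_plus), len(out_minus))        # tolerate small length mismatches
    speech_est = 0.5 * (out_plus[:n] + out_minus[:n])
    noise_est = 0.5 * (out_plus[:n] - out_minus[:n])
    eps = np.finfo(float).eps
    return 10.0 * np.log10((np.sum(speech_est ** 2) + eps) /
                           (np.sum(noise_est ** 2) + eps))

# SNR improvement attributable to DNR = SNR of the DNR-processed pair
# minus SNR of the corresponding unprocessed pair.
```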


2016 ◽  
Vol 27 (03) ◽  
pp. 237-251 ◽  
Author(s):  
Susan Scollie ◽  
Charla Levy ◽  
Nazanin Pourmand ◽  
Parvaneh Abbasalipour ◽  
Marlene Bagatto ◽  
...  

Background: Although guidelines for fitting hearing aids for children are well developed and have a strong basis in evidence, specific protocols for fitting and verifying some technologies are not always available. One such technology is noise management in children's hearing aids. Children are frequently in high-level and/or noisy environments, and many options for noise management exist in modern hearing aids. Verification protocols are needed to define specific test signals and levels for use in clinical practice. Purpose: This work aims to (1) describe the variation across different brands of noise reduction processors in hearing aids and the verification of these processors and (2) determine whether these differences are perceived by 13 children who have hearing loss. Finally, we aimed to develop a verification protocol for use in pediatric clinical practice. Study Sample: A set of hearing aids was tested using both clinically available test systems and a reference system, so that the impacts of noise reduction signal processing in hearing aids could be characterized for speech in a variety of background noises. A second set of hearing aids was tested across a range of audiograms and across two clinical verification systems to characterize the variance in clinical verification measurements. Finally, a set of hearing aid recordings that varied by type of noise reduction was rated for sound quality by children with hearing loss. Results: Significant variation across makes and models of hearing aids was observed in both the speed of noise reduction activation and the magnitude of noise reduction. Reference measures indicate that noise-only testing may overestimate noise reduction magnitude compared to speech-in-noise testing. Variation across clinical test signals was also observed, indicating that some test signals may be more successful than others for characterizing hearing aid noise reduction. Children provided different sound quality ratings across hearing aids, and for one hearing aid they rated the sound quality higher with the noise reduction system activated. Conclusions: For clinical verification systems, greater standardization and the use of speech-in-noise test signals may improve the quality and consistency of noise reduction verification across clinics. A clinical protocol for verification of noise management in children's hearing aids is proposed.
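To make the two reference measures above concrete (noise reduction magnitude and activation speed), a rough sketch follows that compares short-term output levels of paired recordings made with the feature off and on. The window length, the 90%-of-steady-state criterion, and the use of NumPy are illustrative assumptions, not part of the protocol described in the study.

```python
import numpy as np

def short_term_level_db(x, fs, win_s=0.125):
    """RMS level (dB) in consecutive non-overlapping windows of win_s seconds."""
    x = np.asarray(x, dtype=float)
    n = int(round(win_s * fs))
    frames = x[: (len(x) // n) * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + np.finfo(float).eps
    return 20.0 * np.log10(rms)

def nr_depth_and_speed(out_nr_off, out_nr_on, fs, win_s=0.125, settle_frac=0.9):
    """Estimate noise reduction magnitude (dB) and activation time (s) from
    paired recordings of the same steady noise with the feature off and on."""
    off_db = short_term_level_db(out_nr_off, fs, win_s)
    on_db = short_term_level_db(out_nr_on, fs, win_s)
    n = min(len(off_db), len(on_db))
    attenuation = off_db[:n] - on_db[:n]            # dB removed by NR over time
    depth = float(np.median(attenuation[n // 2:]))  # steady-state magnitude
    reached = np.nonzero(attenuation >= settle_frac * depth)[0]
    activation_time = float(reached[0]) * win_s if reached.size else float("nan")
    return depth, activation_time
```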


2019 ◽  
Vol 18 (2) ◽  
pp. 98-104
Author(s):  
Hiba Ahmed El-Assal ◽  
Amani Mohamed El-Gharib ◽  
Enaas Ahmad Kolkaila ◽  
Trandil Hassan Elmahallawy


Author(s):  
Isiaka Ajewale Alimi

Digital hearing aids address the issues of noise and speech intelligibility associated with analogue devices. One of the main functions of the digital signal processor (DSP) in a digital hearing aid is noise reduction, which can be achieved with speech enhancement algorithms that in turn improve system performance and flexibility. However, studies have shown that the quality of experience (QoE) with some current hearing aids falls short of expectations in noisy environments because of interfering sound, background noise, and reverberation. It has also been suggested that the noise reduction features of the DSP can be further improved. Recently, we proposed an adaptive spectral subtraction algorithm to enhance the performance of communication systems and to address the musical noise generated by the conventional spectral subtraction algorithm; its effectiveness has been confirmed by different objective and subjective evaluations. In this study, the adaptive spectral subtraction algorithm is implemented using a noise-estimation algorithm designed for highly non-stationary noisy environments, in place of the voice activity detection (VAD) employed in our previous work, because of the noise estimator's effectiveness. In addition, a signal-to-residual spectrum ratio (SR) is implemented to control amplification distortion and thereby improve speech intelligibility. The results show that the proposed scheme gives comparatively better performance and can easily be employed in digital hearing aid systems to improve speech quality and intelligibility.
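The adaptive algorithm itself is not specified in this abstract; for orientation, below is a minimal sketch of conventional magnitude spectral subtraction with an over-subtraction factor and a spectral floor (the usual means of limiting musical noise), using a crude noise estimate from leading frames. The parameter values and the SciPy STFT implementation are assumptions for illustration and do not represent the proposed adaptive scheme or its noise estimator.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_frames=10, alpha=2.0, beta=0.02):
    """Basic magnitude spectral subtraction.
    noise_frames: leading STFT frames assumed to be noise-only (crude estimate)
    alpha: over-subtraction factor; beta: spectral floor that limits musical noise."""
    f, t, X = stft(noisy, fs=fs, nperseg=512, noverlap=384)
    mag, phase = np.abs(X), np.angle(X)
    noise_mag = np.mean(mag[:, :noise_frames], axis=1, keepdims=True)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)   # subtract, then floor
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512, noverlap=384)
    return enhanced
```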


2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single-microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) in an auditory-only dual task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate whether the results from NH listeners extend to hearing-impaired (HI) listeners, the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times on this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNRs; –5, 0, +5 dB) and in quiet, and were either left unprocessed or processed with ideal or nonideal NR. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (–5 dB SNR), in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (R(142) = 0.54) with subjective listening effort and showed sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort in HI listeners.


2016 ◽  
Vol 27 (09) ◽  
pp. 732-749 ◽  
Author(s):  
Gabriel Aldaz ◽  
Sunil Puria ◽  
Larry J. Leifer

Background: Previous research has shown that hearing aid wearers can successfully self-train their instruments’ gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the “untrained system,” that is, the manufacturer’s algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The “trained system” first learned each individual’s preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). Purpose: To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings, and to determine whether user study participants showed a preference for trained over untrained system settings. Research Design: An experimental within-participants study. Participants used a prototype hearing system—comprising two hearing aids, an Android smartphone, and a body-worn gateway device—for ~6 weeks. Study Sample: Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. Intervention: Participants were fitted and instructed to perform daily comparisons of settings (“listening evaluations”) through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone—including environmental sound classification, sound level, and location—to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system (“trained settings”) to those suggested by the hearing aids’ untrained system (“untrained settings”). Data Collection and Analysis: We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Results: Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. Conclusions: The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone.
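As a toy illustration of the kind of inference described above (predicting a preferred setting from logged context), the sketch below uses a per-context majority vote with a global fallback. The context labels and setting names are hypothetical, and this is an assumption about how such an engine could work, not HALIC's actual model.

```python
from collections import Counter, defaultdict

# Each logged evaluation: (context, chosen_setting). A context might be an
# environmental sound class such as "speech_in_noise"; a setting is one of the
# four microphone-mode / noise-reduction combinations. Labels are hypothetical.

def train(preferences):
    by_context, overall = defaultdict(Counter), Counter()
    for context, setting in preferences:
        by_context[context][setting] += 1
        overall[setting] += 1
    return by_context, overall

def predict(model, context):
    by_context, overall = model
    counts = by_context.get(context)
    # Fall back to the user's overall favourite when the context is unseen.
    return (counts or overall).most_common(1)[0][0]

log = [("speech_in_noise", "directional+NR_on"),
       ("speech_in_noise", "directional+NR_on"),
       ("quiet", "omni+NR_off")]
print(predict(train(log), "speech_in_noise"))   # -> directional+NR_on
```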


2020 ◽  
Vol 24 (4) ◽  
pp. 180-190
Author(s):  
Hyo Jeong Kim ◽  
Jae Hee Lee ◽  
Hyun Joon Shim

Background and Objectives: Although many studies have evaluated the effect of the digital noise reduction (DNR) algorithm of hearing aids (HAs) on speech recognition, there are few studies on the effect of DNR on music perception. Therefore, we aimed to evaluate the effect of DNR on music, in addition to speech perception, using objective and subjective measurements. Subjects and Methods: Sixteen HA users participated in this study (mean age 58.00±10.44 years; 3 males and 13 females). The objective assessment of speech and music perception was based on the Korean version of the Clinical Assessment of Music Perception test and word and sentence recognition scores. For the subjective assessment, quality ratings of speech and music as well as self-reported HA benefits were evaluated. Results: DNR conferred no improvement on the objective assessments of speech and music perception. Pitch discrimination at 262 Hz was better in the DNR-off condition than in the unaided condition (p = 0.024); however, the unaided and DNR-on conditions did not differ. In the Korean music background questionnaire, responses regarding ease of communication were better in the DNR-on condition than in the DNR-off condition (p = 0.029). Conclusions: Speech and music perception and sound quality did not improve with the activation of DNR. However, DNR positively influenced the listener’s subjective listening comfort. The DNR-off condition in HAs may be beneficial for pitch discrimination at some frequencies.

