Effective Noise Reduction Techniques for Hearing Aids

2021 ◽  
Author(s):  
Chippy Jose ◽  
Binu L S


Author(s):  
Isiaka Ajewale Alimi

Digital hearing aids address the issues of noise and speech intelligibility associated with analogue types. One of the main functions of the digital signal processor (DSP) in a digital hearing aid is noise reduction, which can be achieved with speech enhancement algorithms that in turn improve system performance and flexibility. However, studies have shown that the quality of experience (QoE) with some current hearing aids falls short of expectations in noisy environments because of interfering sounds, background noise, and reverberation. It has also been suggested that the noise reduction features of the DSP can be further improved. Recently, we proposed an adaptive spectral subtraction algorithm to enhance the performance of communication systems and to address the musical noise generated by the conventional spectral subtraction algorithm. The effectiveness of the algorithm has been confirmed by different objective and subjective evaluations. In this study, the adaptive spectral subtraction algorithm is implemented using a noise-estimation algorithm designed for highly non-stationary noisy environments, chosen for its effectiveness, instead of the voice activity detection (VAD) employed in our previous work. In addition, the signal-to-residual spectrum ratio (SR) is used to control amplification distortion and thereby improve speech intelligibility. The results show that the proposed scheme gives comparatively better performance and can readily be employed in digital hearing aid systems to improve speech quality and intelligibility.
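For readers unfamiliar with the underlying technique, a minimal Python sketch of basic magnitude spectral subtraction is shown below. It uses a crude sliding-minimum noise tracker as a stand-in for the noise-estimation algorithm mentioned in the abstract, a fixed over-subtraction factor rather than the authors' adaptive rule, and omits the SR-based distortion control; the function name and all parameter values are illustrative assumptions, not the published implementation.

```python
# Minimal spectral subtraction sketch (illustrative only).
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, alpha=4.0, beta=0.02, frame=512, win=50):
    """Enhance a noisy signal by subtracting an estimated noise magnitude."""
    f, t, X = stft(x, fs, nperseg=frame)
    mag, phase = np.abs(X), np.angle(X)

    # Crude minimum-statistics style noise tracker over a sliding window of
    # frames (stand-in for a true non-stationary noise estimator, no VAD).
    noise = np.empty_like(mag)
    for k in range(mag.shape[1]):
        lo = max(0, k - win)
        noise[:, k] = mag[:, lo:k + 1].min(axis=1)

    # Over-subtraction with a spectral floor to limit musical noise.
    clean_mag = np.maximum(mag - alpha * noise, beta * mag)

    # Resynthesize using the noisy phase.
    _, y = istft(clean_mag * np.exp(1j * phase), fs, nperseg=frame)
    return y
```

The spectral floor (beta) is the standard textbook remedy for the isolated spectral peaks that listeners perceive as musical noise; the paper's adaptive scheme refines this idea further.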


2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) to an auditory-only dual-task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate whether the results from NH listeners extend to the hearing-impaired (HI), the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times to this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNR; –5, 0, +5 dB) and in quiet. Stimuli were processed with ideal or nonideal NR, or left unprocessed. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (–5 dB SNR), in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (R(142) = 0.54) with subjective listening effort and showed sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort of HI listeners.


Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 319
Author(s):  
Chan-Rok Park ◽  
Seong-Hyeon Kang ◽  
Young-Jin Lee

Recently, the total variation (TV) algorithm has been used for noise reduction in degraded nuclear medicine images. To acquire the attenuation-correction maps for positron emission tomography (PET) in the PET/magnetic resonance (MR) system, the MR Dixon pulse sequence is used with the controlled aliasing in parallel imaging results in higher acceleration (CAIPI; MR-ACDixon-CAIPI) and generalized autocalibrating partially parallel acquisition (GRAPPA; MR-ACDixon-GRAPPA) algorithms. Therefore, this study aimed to evaluate the performance of the TV noise-reduction algorithm for PET/MR images of a Jaszczak phantom injected with the ¹⁸F radioisotope, acquired on a PET/MR system (mMR, Siemens, Germany), against conventional noise-reduction techniques such as the Wiener and median filters. The contrast-to-noise ratio (CNR) and coefficient of variation (COV) were used for quantitative analysis. Based on the results, PET images processed with the TV algorithm improved CNR by approximately 7.6% and reduced COV by approximately 20.0% compared with the conventional noise-reduction techniques. In particular, image quality for the MR-ACDixon-CAIPI PET images was better than for the MR-ACDixon-GRAPPA PET images. In conclusion, the TV noise-reduction algorithm is efficient for improving PET image quality in PET/MR systems.
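As an illustration of the comparison described above, the sketch below applies TV, median, and Wiener denoising to a synthetic noisy phantom slice and computes CNR and COV from regions of interest. The ROI coordinates, filter parameters, synthetic data, and the use of scikit-image's Chambolle TV solver are assumptions for demonstration, not the study's actual pipeline.

```python
# Compare TV denoising with median and Wiener filtering on a 2-D slice,
# then report CNR and COV (illustrative sketch, not the study's pipeline).
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener
from skimage.restoration import denoise_tv_chambolle

def cnr(img, roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    sig, bkg = img[roi_signal], img[roi_background]
    return abs(sig.mean() - bkg.mean()) / bkg.std()

def cov(img, roi):
    """Coefficient of variation (noise relative to mean) inside one ROI."""
    region = img[roi]
    return region.std() / region.mean()

# Synthetic "hot insert" phantom slice with additive Gaussian noise.
rng = np.random.default_rng(0)
phantom = np.zeros((128, 128))
phantom[40:90, 40:90] = 1.0
noisy = phantom + rng.normal(0, 0.2, phantom.shape)

filtered = {
    "median": median_filter(noisy, size=3),
    "wiener": wiener(noisy, mysize=3),
    "tv": denoise_tv_chambolle(noisy, weight=0.1),
}

sig_roi = (slice(50, 80), slice(50, 80))   # inside the hot insert
bkg_roi = (slice(5, 30), slice(5, 30))     # background corner
for name, img in filtered.items():
    print(name, "CNR:", round(cnr(img, sig_roi, bkg_roi), 2),
          "COV:", round(cov(img, sig_roi), 3))
```

Higher CNR and lower COV after filtering indicate better noise suppression with preserved contrast, which is the direction of improvement the study reports for the TV algorithm.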


Radiographics ◽  
2014 ◽  
Vol 34 (4) ◽  
pp. 849-862 ◽  
Author(s):  
Eric C. Ehman ◽  
Lifeng Yu ◽  
Armando Manduca ◽  
Amy K. Hara ◽  
Maria M. Shiung ◽  
...  

2016 ◽  
Vol 27 (09) ◽  
pp. 732-749 ◽  
Author(s):  
Gabriel Aldaz ◽  
Sunil Puria ◽  
Larry J. Leifer

Background: Previous research has shown that hearing aid wearers can successfully self-train their instruments' gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the "untrained system," that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The "trained system" first learned each individual's preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time).

Purpose: To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings, and to determine whether user study participants showed a preference for trained over untrained system settings.

Research Design: An experimental within-participants study. Participants used a prototype hearing system, comprising two hearing aids, an Android smartphone, and a body-worn gateway device, for ~6 weeks.

Study Sample: Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones.

Intervention: Participants were fitted and instructed to perform daily comparisons of settings ("listening evaluations") through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone, including environmental sound classification, sound level, and location, to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system ("trained settings") to those suggested by the hearing aids' untrained system ("untrained settings").

Data Collection and Analysis: We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information.

Results: Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC.

Conclusions: The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone.
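To make the "trained model plus inference engine" idea concrete, the toy Python sketch below logs a user's preferred (microphone mode, noise reduction) setting per sound-environment class and predicts the majority preference for the current class. The actual HALIC engine also weighed location and time; the class names, setting labels, and data structures here are hypothetical.

```python
# Toy per-environment preference model (illustrative, not the HALIC engine).
from collections import Counter, defaultdict

class PreferenceModel:
    def __init__(self):
        # environment class -> counts of settings chosen in listening evaluations
        self.votes = defaultdict(Counter)

    def record(self, environment, chosen_setting):
        """Store one listening-evaluation outcome for a sound environment."""
        self.votes[environment][chosen_setting] += 1

    def predict(self, environment, default=("omni", "nr_on")):
        """Return the most frequently preferred setting for this environment."""
        counts = self.votes.get(environment)
        return counts.most_common(1)[0][0] if counts else default

# Usage with made-up data from a training phase.
model = PreferenceModel()
for env, setting in [("speech_in_noise", ("directional", "nr_on")),
                     ("speech_in_noise", ("directional", "nr_on")),
                     ("quiet", ("omni", "nr_off")),
                     ("music", ("omni", "nr_off"))]:
    model.record(env, setting)

print(model.predict("speech_in_noise"))   # ('directional', 'nr_on')
print(model.predict("car"))               # falls back to the default setting
```

A per-participant model of this kind also makes the study's key observation easy to see: pooled counts can look similar across users even when each individual's per-environment preferences are strong and idiosyncratic.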

