Multitasking with typical use of hearing aid noise reduction in older listeners

2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those freed resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fit with commercially available hearing aids programmed under three noise reduction settings: off, mild, and strong. Study sample: 18 hearing-impaired older adults. Results: There were no effects of noise reduction on the ability to multitask, or on the ability to recognize speech in noise. Conclusions: Adjustment of noise reduction settings in the clinic may not invariably improve performance on such tasks.

2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) to an auditory-only dual-task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate if the results from NH listeners extend to the hearing-impaired (HI), the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times to this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNR; −5, 0, +5 dB) and in quiet. Stimuli were processed with ideal or nonideal NR, or left unprocessed. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (−5 dB SNR). This is in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (R² = 0.54) with subjective listening effort and showed sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort of HI listeners.


2020 ◽  
Vol 24 (4) ◽  
pp. 180-190
Author(s):  
Hyo Jeong Kim ◽  
Jae Hee Lee ◽  
Hyun Joon Shim

Background and Objectives: Although many studies have evaluated the effect of the digital noise reduction (DNR) algorithm of hearing aids (HAs) on speech recognition, there are few studies on the effect of DNR on music perception. Therefore, we aimed to evaluate the effect of DNR on music perception, in addition to speech perception, using objective and subjective measurements. Subjects and Methods: Sixteen HA users participated in this study (58.00 ± 10.44 years; 3 males and 13 females). The objective assessment of speech and music perception was based on the Korean version of the Clinical Assessment of Music Perception test and on word and sentence recognition scores. For the subjective assessment, quality ratings of speech and music as well as self-reported HA benefits were evaluated. Results: DNR produced no improvement on the objective assessments of speech and music perception. Pitch discrimination at 262 Hz was better in the DNR-off condition than in the unaided condition (p = 0.024); however, the unaided and DNR-on conditions did not differ. On the Korean music background questionnaire, responses regarding ease of communication were better in the DNR-on condition than in the DNR-off condition (p = 0.029). Conclusions: Speech and music perception and sound quality did not improve with activation of DNR. However, DNR positively influenced the listener's subjective listening comfort. The DNR-off condition in HAs may be beneficial for pitch discrimination at some frequencies.


2018 ◽  
Vol 29 (08) ◽  
pp. 706-721 ◽  
Author(s):  
Michael Valente ◽  
Kristi Oeding ◽  
Alison Brockmeyer ◽  
Steven Smith ◽  
Dorina Kallogjeri

Background: The American Speech-Language-Hearing Association (ASHA) and the American Academy of Audiology (AAA) have created Best Practice Guidelines for fitting hearing aids to adult patients. These guidelines recommend using real-ear measures (REM) to verify that the measured output/gain of the hearing aid(s) matches a validated prescriptive target. Unfortunately, approximately 70–80% of audiologists do not routinely use REM when fitting hearing aids, relying instead on a manufacturer default "first-fit" setting. This is problematic because numerous studies report significant differences in REM between a manufacturer first-fit and the same hearing aids using a programmed-fit. These studies reported decreased prescribed gain/output in the higher frequencies, which are important for recognizing speech, for the first-fit compared with the programmed-fit. Currently, there is little peer-reviewed research on whether differences between hearing aids fitted using a manufacturer first-fit versus a programmed-fit result in significant differences in speech recognition in quiet or noise, or in subjective outcomes. Purpose: To examine whether significant differences were present in monosyllabic word and phoneme recognition (consonant-nucleus-consonant; CNC) in quiet, sentence recognition in noise (Hearing in Noise Test; HINT), and subjective outcomes using the Abbreviated Profile of Hearing Aid Benefit (APHAB) and the Speech, Spatial and Qualities of Hearing (SSQ) questionnaires between hearing aids fit using one manufacturer's first-fit and the same hearing aids with a programmed-fit using REM to the National Acoustic Laboratories Nonlinear version 2 (NAL-NL2) prescriptive target. Research Design: A double-blind randomized crossover design was used.
Throughout the study, one investigator performed all REM, whereas a second investigator measured speech recognition in quiet and noise and scored the subjective outcome measures. Study Sample: Twenty-four adults with bilateral normal sloping to moderately severe sensorineural hearing loss and no prior experience with amplification. Data Collection and Analysis: The hearing aids were fit using the proprietary manufacturer default first-fit and a programmed-fit to NAL-NL2 using real-ear insertion gain measures. The order of the two fittings was randomly assigned and counterbalanced. Participants acclimatized to each setting for four weeks and returned for assessment of performance via the revised CNC word lists, HINT, APHAB, and SSQ for the respective fitting. Results: (1) A significant median advantage for the programmed-fit compared with the first-fit of 15% (p < 0.001; 95% CI: 9.7–24.3%) for words and 7.7% (p < 0.001; 95% CI: 5.9–10.9%) for phonemes at 50 dB sound pressure level (SPL), and of 4% (p < 0.01; 95% CI: 1.7–6.3%) for words at 65 dB SPL; (2) no significant differences for the HINT reception threshold for sentences (RTS); (3) a significant median advantage of 4.2% (p < 0.04; 95% CI: −0.6–13.2%) for the programmed-fit compared with the first-fit on the Background Noise subscale problem score of the APHAB; (4) no significant differences on the SSQ. Conclusions: Improved recognition of words and phonemes for soft speech, and of words for average-level speech in quiet, was reported for the programmed-fit. Seventy-nine percent of the participants preferred the programmed-fit over the first-fit. Hearing aids, therefore, should be verified and programmed using REM to a prescriptive target rather than left unverified with a first-fit.


2013 ◽  
Vol 24 (10) ◽  
pp. 980-991 ◽  
Author(s):  
Kristi Oeding ◽  
Michael Valente

Background: In the past, bilateral contralateral routing of signals (BICROS) amplification incorporated omnidirectional microphones on the transmitter and receiver sides and some models utilized noise reduction (NR) on the receiver side. Little research has examined the performance of BICROS amplification in background noise. However, previous studies examining contralateral routing of signals (CROS) amplification have reported that the presence of background noise on the transmitter side negatively affected speech recognition. Recently, NR was introduced as a feature on the receiver and transmitter sides of BICROS amplification, which has the potential to decrease the impact of noise on the wanted speech signal by decreasing unwanted noise directed to the transmitter side. Purpose: The primary goal of this study was to examine differences in the reception threshold for sentences (RTS in dB) using the Hearing in Noise Test (HINT) in a diffuse listening environment between unaided and three aided BICROS conditions (no NR, mild NR, and maximum NR) in the Tandem 16 BICROS. A secondary goal was to examine real-world subjective impressions of the Tandem 16 BICROS compared to unaided. Research Design: A randomized block repeated measures single blind design was used to assess differences between no NR, mild NR, and maximum NR listening conditions. Study Sample: Twenty-one adult participants with asymmetric sensorineural hearing loss (ASNHL) and experience with BICROS amplification were recruited from Washington University in St. Louis School of Medicine. Data Collection and Analysis: Participants were fit with the National Acoustic Laboratories’ Nonlinear version 1 prescriptive target (NAL-NL1) with the Tandem 16 BICROS at the initial visit and then verified using real-ear insertion gain (REIG) measures. Participants acclimatized to the Tandem 16 BICROS for 4 wk before returning for final testing. 
Participants were tested utilizing HINT sentences examining differences in RTS between unaided and three aided listening conditions. Subjective benefit was determined via the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire between the Tandem 16 BICROS and unaided. A repeated measures analysis of variance (ANOVA) was utilized to analyze the results of the HINT and APHAB. Results: Results revealed no significant differences in the RTS between unaided, no NR, mild NR, and maximum NR. Subjective impressions using the APHAB revealed statistically and clinically significant benefit with the Tandem 16 BICROS compared to unaided for the Ease of Communication (EC), Background Noise (BN), and Reverberation (RV) subscales. Conclusions: The RTS was not significantly different between unaided, no NR, mild NR, and maximum NR. None of the three aided listening conditions were significantly different from unaided performance as has been reported for previous studies examining CROS hearing aids. Further, based on comments from participants and previous research studies with conventional hearing aids, manufacturers of BICROS amplification should consider incorporating directional microphones and independent volume controls on the receiver and transmitter sides to potentially provide further improvement in signal-to-noise ratio (SNR) for patients with ASNHL.


1981 ◽  
Vol 24 (2) ◽  
pp. 207-216 ◽  
Author(s):  
Brian E. Walden ◽  
Sue A. Erdman ◽  
Allen A. Montgomery ◽  
Daniel M. Schwartz ◽  
Robert A. Prosek

The purpose of this research was to determine some of the effects of consonant recognition training on the speech recognition performance of hearing-impaired adults. Two groups of ten subjects each received seven hours of either auditory or visual consonant recognition training, in addition to a standard two-week, group-oriented, inpatient aural rehabilitation program. A third group of fifteen subjects received the standard two-week program but no supplementary individual consonant recognition training. An audiovisual sentence recognition test, as well as tests of auditory and visual consonant recognition, was administered both before and following training. Subjects in all three groups significantly increased their audiovisual sentence recognition performance, but subjects receiving the individual consonant recognition training improved significantly more than subjects receiving only the standard two-week program. A significant increase in consonant recognition performance was observed in the two groups receiving the auditory or visual consonant recognition training. The data are discussed from varying statistical and clinical perspectives.


2015 ◽  
Vol 24 (3) ◽  
pp. 440-450 ◽  
Author(s):  
Jace Wolfe ◽  
Mila Morais Duke ◽  
Erin Schafer ◽  
Christine Jones ◽  
Hans E. Mülder ◽  
...  

Purpose: One purpose of this study was to evaluate the improvement in speech recognition obtained with use of two different remote microphone technologies. Another purpose was to determine whether a battery of audiometric measures could predict benefit from use of these technologies. Method: Sentence recognition was evaluated while 17 adults used each of two different hearing aids. Performance was evaluated with and without two different remote microphone systems. A variety of audiologic measures were administered to determine whether prefitting assessment may predict benefit from remote microphone technology. Results: Use of both remote microphone systems resulted in improved speech recognition in quiet and in noise. There were no differences in performance between the two remote microphone technologies in quiet and at low competing noise levels, but the digital adaptive remote microphone system provided better speech recognition in the presence of moderate- to high-level noise. The Listening in Spatialized Noise–Sentence Test Prescribed Gain Amplifier (Cameron & Dillon, 2010) served as a good predictor of benefit from remote microphone technology. Conclusions: Each remote microphone system improved sentence recognition in noise, but greater improvement was obtained with the digital adaptive system. The Listening in Spatialized Noise–Sentence Test Prescribed Gain Amplifier may serve as a good indicator of benefit from remote microphone technology.


2021 ◽  
Author(s):  
Fatos Myftari

This thesis is concerned with noise reduction in hearing aids. Hearing-impaired listeners have great difficulty understanding speech in a noisy background. This problem has motivated the development and use of noise reduction algorithms to improve speech intelligibility in hearing aids. In this thesis, two noise reduction algorithms for single-channel hearing instruments are presented and evaluated using objective and subjective tests. The first algorithm, conventional spectral subtraction, is simulated using MATLAB 6.5 (R13). The second algorithm, spectral subtraction in the wavelet domain, is implemented offline and compared with conventional spectral subtraction. A subjective evaluation demonstrates that the second algorithm offers additional advantages in speech intelligibility in poor listening conditions relative to conventional spectral subtraction. The subjective testing was performed with normal-hearing listeners at Ryerson University. The objective evaluation shows that spectral subtraction in the wavelet domain improves the signal-to-noise ratio compared to conventional spectral subtraction.
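Conventional spectral subtraction, the baseline algorithm this thesis evaluates, can be sketched roughly as follows. This is a generic, textbook-style illustration, not the thesis implementation: the frame length, hop size, spectral floor, and the assumption of a separate noise-only recording are all illustrative choices.

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame_len=256, hop=128, floor=0.01):
    """Basic magnitude spectral subtraction with a spectral floor.

    noisy: noisy speech signal; noise_est: a noise-only recording used to
    estimate the average noise magnitude spectrum."""
    window = np.hanning(frame_len)

    # Average noise magnitude spectrum from the noise-only estimate
    n_noise_frames = max(1, (len(noise_est) - frame_len) // hop)
    noise_mag = np.mean([
        np.abs(np.fft.rfft(window * noise_est[i * hop:i * hop + frame_len]))
        for i in range(n_noise_frames)
    ], axis=0)

    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    n_frames = (len(noisy) - frame_len) // hop
    for i in range(n_frames):
        seg = window * noisy[i * hop:i * hop + frame_len]
        spec = np.fft.rfft(seg)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise magnitude; clamp to a fraction of the noisy
        # magnitude (the "spectral floor") to limit musical-noise artifacts
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len)
        # Overlap-add resynthesis with window-energy normalization
        out[i * hop:i * hop + frame_len] += window * clean
        norm[i * hop:i * hop + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-8)
```

The spectral floor is the classic compromise: a lower floor removes more noise but produces more audible "musical noise", which is one reason the thesis explores a wavelet-domain variant.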


2019 ◽  
Vol 30 (02) ◽  
pp. 131-144 ◽  
Author(s):  
Erin M. Picou ◽  
Todd A. Ricketts

Background: People with hearing loss experience difficulty understanding speech in noisy environments. Beamforming microphone arrays in hearing aids can improve the signal-to-noise ratio (SNR) and thus also speech recognition and subjective ratings. Unilateral beamformer arrays, also known as directional microphones, accomplish this improvement using two microphones in one hearing aid. Bilateral beamformer arrays, which combine information across four microphones in a bilateral fitting, further improve the SNR. Early bilateral beamformers were static, with fixed attenuation patterns. Recently, adaptive bilateral beamformers have been introduced in commercial hearing aids. Purpose: To evaluate the potential benefits of adaptive unilateral and bilateral beamformers for improving sentence recognition and subjective ratings in a laboratory setting. A secondary purpose was to identify participant factors that explain some of the variability in beamformer benefit. Research Design: Participants were fitted with study hearing aids equipped with commercially available adaptive unilateral and bilateral beamformers. Participants completed sentence recognition testing in background noise using three hearing aid settings (omnidirectional, unilateral beamformer, bilateral beamformer) and two noise source configurations (surround, side). After each condition, participants made subjective ratings of their perceived work, desire to control the situation, willingness to give up, and tiredness. Study Sample: Eighteen adults (50–80 yr, M = 66.2, σ = 8.6) with symmetrical mild sloping to severe hearing loss participated. Data Collection and Analysis: Sentence recognition scores and subjective ratings were analyzed separately using generalized linear models with two within-subject factors (hearing aid microphone and noise configuration).
Two benefit scores were calculated: (1) unilateral beamformer benefit (relative to performance with the omnidirectional setting) and (2) additional bilateral beamformer benefit (relative to performance with the unilateral beamformer). Hierarchical multiple linear regression was used to determine whether beamformer benefit was associated with participant factors (age, degree of hearing loss, unaided speech-in-noise ability, spatial release from masking, and performance in the omnidirectional setting). Results: Sentence recognition and subjective ratings of work, control, and tiredness were better with both types of beamformers relative to the omnidirectional conditions. In addition, the bilateral beamformer offered small additional improvements relative to the unilateral beamformer in terms of sentence recognition and subjective ratings of tiredness. Speech recognition performance and subjective ratings were generally independent of noise configuration. Performance in the omnidirectional setting and pure-tone average were independently related to unilateral beamformer benefit; those with the lowest performance or the largest degree of hearing loss benefited the most. No factors were significantly related to additional bilateral beamformer benefit. Conclusions: Adaptive bilateral beamformers offer additional advantages over adaptive unilateral beamformers in hearing aids. The small additional advantages with the adaptive bilateral beamformer are comparable to those reported in the literature for static beamformers. Although the additional benefits are small, they positively affected subjective ratings of tiredness. These data suggest that adaptive bilateral beamformers have the potential to improve listening in difficult situations for hearing aid users, and that patients who struggle the most without beamforming microphones may also benefit the most from the technology.
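The unilateral (directional) microphones discussed above can be illustrated with the simplest possible case: a first-order differential pair, in which the rear-port signal is internally delayed and subtracted so that sound arriving from behind cancels. This is a generic idealization, not the commercial adaptive beamformers used in the study; the port spacing `d` and test frequency are illustrative assumptions.

```python
import numpy as np

def diff_pair_gain(theta_deg, d=0.012, f=1000.0, c=343.0):
    """Far-field magnitude response of a two-microphone differential pair.

    The rear-port signal is delayed internally by d/c before subtraction,
    which places a null at 180 degrees (sound arriving from behind).
    theta_deg: arrival angle (0 = front); d: port spacing in meters;
    f: frequency in Hz; c: speed of sound in m/s."""
    theta = np.deg2rad(theta_deg)
    omega = 2 * np.pi * f
    tau = d / c                      # internal electrical delay
    ext = (d / c) * np.cos(theta)    # external acoustic path difference
    # Front-port signal minus delayed rear-port signal, in the frequency domain
    return np.abs(1 - np.exp(-1j * omega * (tau + ext)))
```

Sweeping `theta_deg` from 0 to 360 traces out the familiar cardioid-like polar pattern; adaptive beamformers, by contrast, steer the null toward whichever direction currently contains the most noise.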


2020 ◽  
Vol 31 (01) ◽  
pp. 017-029
Author(s):  
Paul Reinhart ◽  
Pavel Zahorik ◽  
Pamela Souza

Background: Digital noise reduction (DNR) processing is used in hearing aids to enhance perception in noise by classifying and suppressing the noise acoustics. However, the efficacy of DNR processing is not known under reverberant conditions, where the speech-in-noise acoustics are further degraded by reverberation. Purpose: To investigate the acoustic and perceptual effects of DNR processing across a range of reverberant conditions for individuals with hearing impairment. Research Design: An experimental design was used to investigate the effects of varying reverberation on speech-in-noise processed with DNR. Study Sample: Twenty-six listeners with mild-to-moderate sensorineural hearing impairment participated. Data Collection and Analysis: Speech stimuli were combined with unmodulated broadband noise at several signal-to-noise ratios (SNRs). A range of reverberant conditions with realistic parameters was simulated, as well as an anechoic control condition without reverberation. Reverberant speech-in-noise signals were processed using a spectral subtraction DNR simulation. Signals were acoustically analyzed using a phase-inversion technique to quantify the improvement in SNR resulting from DNR processing. Sentence intelligibility and subjective ratings of listening effort, speech naturalness, and background noise comfort were examined with and without DNR processing across the conditions. Results: Improvement in SNR was greatest in the anechoic control condition and decreased as the ratio of direct to reverberant energy decreased. There was no significant effect of DNR processing on speech intelligibility in the anechoic control condition, but there was a significant decrease in speech intelligibility with DNR processing in all of the reverberant conditions. Subjectively, listeners reported greater listening effort and lower speech naturalness with DNR processing in some of the reverberant conditions.
Listeners reported higher background noise comfort with DNR processing only in the anechoic control condition. Conclusions: Results suggest that reverberation degrades spectral subtraction DNR processing, reducing its ability to suppress noise without distorting the speech acoustics. Overall, DNR processing may be most beneficial in environments with little reverberation, and its use in highly reverberant environments may actually produce adverse perceptual effects. Further research is warranted using commercial hearing aids in realistic reverberant environments.
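The phase-inversion technique used above to quantify SNR improvement works by running the processing twice, once with the noise polarity inverted; summing and differencing the two outputs then separates the speech and noise components even after nonlinear processing. A minimal sketch, assuming the processing is deterministic and identical across the two runs (the function name and signature are illustrative):

```python
import numpy as np

def output_snr_phase_inversion(process, speech, noise):
    """Estimate the SNR at the output of a (possibly nonlinear) processor.

    Runs the processor on speech+noise and on speech-noise. Because the
    speech component is identical in both runs and the noise component is
    polarity-inverted, the half-sum recovers the processed speech and the
    half-difference recovers the processed noise."""
    y1 = process(speech + noise)
    y2 = process(speech - noise)
    speech_out = 0.5 * (y1 + y2)
    noise_out = 0.5 * (y1 - y2)
    return 10 * np.log10(np.mean(speech_out ** 2) / np.mean(noise_out ** 2))
```

Comparing the output SNR with and without the noise reduction stage in `process` gives the SNR improvement attributable to the processing, which is how the acoustic benefit of DNR is typically quantified in studies like this one.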


2017 ◽  
Vol 28 (09) ◽  
pp. 799-809 ◽  
Author(s):  
Meredith Spratford ◽  
Hannah Hodson McLean ◽  
Ryan McCreery

Background: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language. Purpose: To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimulus context (presented in isolation versus embedded medially within a sentence with low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH; 8-kHz low-pass filtered for CHH). Research Design: A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted from semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH. Study Sample: Thirty-five children, aged 5–12 yr, were recruited to participate: 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English. Data Collection and Analysis: Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady-state, speech-shaped noise. Real-ear probe microphone measures of the HAs were obtained for CHH.
To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence-embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH. Results: When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition than in the sentence-embedded condition. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition than in the isolated condition. CHH whose HAs provided greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences. Conclusions: High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children's use of high-frequency audibility in a manner that approximates how they learn language.

