Sentence Recognition in Noise and Perceived Benefit of Noise Reduction on the Receiver and Transmitter Sides of a BICROS Hearing Aid

2013 · Vol 24 (10) · pp. 980-991
Author(s): Kristi Oeding, Michael Valente

Background: In the past, bilateral contralateral routing of signals (BICROS) amplification incorporated omnidirectional microphones on the transmitter and receiver sides, and some models utilized noise reduction (NR) on the receiver side. Little research has examined the performance of BICROS amplification in background noise. However, previous studies examining contralateral routing of signals (CROS) amplification have reported that the presence of background noise on the transmitter side negatively affected speech recognition. Recently, NR was introduced as a feature on the receiver and transmitter sides of BICROS amplification, which has the potential to decrease the impact of noise on the wanted speech signal by attenuating unwanted noise reaching the transmitter side. Purpose: The primary goal of this study was to examine differences in the reception threshold for sentences (RTS in dB) using the Hearing in Noise Test (HINT) in a diffuse listening environment between unaided and three aided BICROS conditions (no NR, mild NR, and maximum NR) in the Tandem 16 BICROS. A secondary goal was to examine real-world subjective impressions of the Tandem 16 BICROS compared to unaided. Research Design: A randomized block, repeated measures, single-blind design was used to assess differences between the no NR, mild NR, and maximum NR listening conditions. Study Sample: Twenty-one adult participants with asymmetric sensorineural hearing loss (ASNHL) and experience with BICROS amplification were recruited from Washington University in St. Louis School of Medicine. Data Collection and Analysis: Participants were fit with the Tandem 16 BICROS to the National Acoustic Laboratories' Nonlinear version 1 (NAL-NL1) prescriptive target at the initial visit, and the fittings were verified using real-ear insertion gain (REIG) measures. Participants acclimatized to the Tandem 16 BICROS for 4 wk before returning for final testing. Participants were tested with HINT sentences to examine differences in RTS between unaided and the three aided listening conditions. Subjective benefit of the Tandem 16 BICROS relative to unaided listening was determined via the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire. A repeated measures analysis of variance (ANOVA) was utilized to analyze the results of the HINT and APHAB. Results: Results revealed no significant differences in the RTS between unaided, no NR, mild NR, and maximum NR. Subjective impressions using the APHAB revealed statistically and clinically significant benefit with the Tandem 16 BICROS compared to unaided for the Ease of Communication (EC), Background Noise (BN), and Reverberation (RV) subscales. Conclusions: The RTS was not significantly different between unaided, no NR, mild NR, and maximum NR. None of the three aided listening conditions differed significantly from unaided performance, as has been reported in previous studies examining CROS hearing aids. Further, based on comments from participants and previous research with conventional hearing aids, manufacturers of BICROS amplification should consider incorporating directional microphones and independent volume controls on the receiver and transmitter sides to potentially provide further improvement in signal-to-noise ratio (SNR) for patients with ASNHL.
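
For orientation, the RTS reported with the HINT is the SNR at which a listener repeats 50% of sentences correctly, estimated with an adaptive procedure. The sketch below is a simplified one-up/one-down staircase in Python, not the exact HINT leveling and scoring rules; the simulated listener (`sentence_correct`) is a hypothetical stand-in.

```python
"""Simplified adaptive staircase for estimating a reception threshold for
sentences (RTS) in dB SNR. Illustrative only; the real HINT uses its own
step sizes and scoring rules."""

import random


def sentence_correct(snr_db: float, true_rts_db: float = -2.0) -> bool:
    # Hypothetical listener: probability of repeating the whole sentence
    # correctly rises with SNR around an assumed "true" RTS (logistic
    # psychometric function with an arbitrary slope).
    p = 1.0 / (1.0 + 10 ** (-(snr_db - true_rts_db) * 0.4))
    return random.random() < p


def estimate_rts(n_sentences: int = 20, start_snr_db: float = 0.0,
                 step_db: float = 2.0) -> float:
    """One-up/one-down staircase: lower the SNR after a correct response,
    raise it after an incorrect one, then average the visited levels."""
    snr = start_snr_db
    levels = []
    for _ in range(n_sentences):
        levels.append(snr)
        if sentence_correct(snr):
            snr -= step_db   # make the task harder
        else:
            snr += step_db   # make the task easier
    # Averaging the later presentation levels approximates the 50% point.
    return sum(levels[4:]) / len(levels[4:])


if __name__ == "__main__":
    print(f"Estimated RTS: {estimate_rts():.1f} dB SNR")
```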

2013 · Vol 24 (08) · pp. 649-659
Author(s): Kristy Jones Lowery, Patrick N. Plyler

Background: Directional microphones (D-Mics) and digital noise reduction (DNR) algorithms are used in hearing aids to reduce the negative effects of background noise on performance. Directional microphones attenuate sounds arriving from anywhere other than the front of the listener, while DNR attenuates sounds with the physical characteristics of noise. Although both noise reduction technologies are currently available in hearing aids, it is unclear if the use of these technologies in isolation or together affects acceptance of noise and/or preference for the end user when used in various types of background noise. Purpose: The purpose of the research was to determine the effects of D-Mic, DNR, or the combination of D-Mic and DNR on acceptance of noise and preference when listening in various types of background noise. Research Design: An experimental study utilizing a repeated measures design. Study Sample: Thirty adult listeners with mild sloping to moderately severe sensorineural hearing loss participated (mean age 67 yr). Data Collection and Analysis: Acceptable noise levels (ANLs) were obtained using no noise reduction technologies, D-Mic only, DNR only, and the combination of the two technologies (Combo) for three different background noises (single-talker speech, speech-shaped noise, and multitalker babble) for each listener. In addition, preference rankings of the noise reduction technologies were obtained within each background noise (1 = best, 3 = worst). Results: ANL values were significantly better for each noise reduction technology than at baseline, and benefit increased significantly from DNR to D-Mic to Combo. Listeners with higher (worse) baseline ANLs received more benefit from noise reduction technologies than listeners with lower (better) baseline ANLs. Neither ANL values nor ANL benefit values were significantly affected by background noise type; however, ANL benefit with D-Mic and Combo was similar when speech-like noise was present, while ANL benefit was greatest for Combo when speech-spectrum noise was present. Listeners preferred the hearing aid settings that resulted in the best ANL value. Conclusion: Noise reduction technologies improved ANL for each noise type, and the amount of improvement was related to the baseline ANL value. Improving an ANL with noise reduction technologies is noticeable to listeners, at least when examined in this laboratory setting, and listeners preferred the noise reduction technologies that improved their ability to accept noise.
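
For reference, the ANL underlying these comparisons is the most comfortable listening level (MCL) for speech minus the highest background noise level (BNL) the listener will accept, and ANL benefit is the baseline ANL minus the aided ANL. A minimal sketch with made-up example values:

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL = most comfortable level for speech minus the highest background
    noise level the listener accepts; lower ANLs mean more noise is accepted."""
    return mcl_db - bnl_db


def anl_benefit(baseline_anl_db: float, aided_anl_db: float) -> float:
    """Positive values mean the technology let the listener accept more
    noise than at baseline."""
    return baseline_anl_db - aided_anl_db


# Hypothetical listener: baseline vs. directional mic plus DNR (Combo).
baseline = acceptable_noise_level(mcl_db=65.0, bnl_db=55.0)   # 10 dB
combo = acceptable_noise_level(mcl_db=65.0, bnl_db=61.0)      # 4 dB
print(f"ANL benefit with Combo: {anl_benefit(baseline, combo):.1f} dB")
```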


2012 · Vol 23 (08) · pp. 606-615
Author(s): HaiHong Liu, Hua Zhang, Ruth A. Bentler, Demin Han, Luo Zhang

Background: Transient noise can be disruptive for people wearing hearing aids. Ideally, transient noise should be detected and controlled by the signal processor without disrupting speech and other intended input signals. A technology for detecting and controlling transient noises in hearing aids was evaluated in this study. Purpose: The purpose of this study was to evaluate the effectiveness of a transient noise reduction strategy on various transient noises and to determine whether the strategy has a negative impact on the sound quality of intended speech inputs. Research Design: This was a quasi-experimental study. The study involved 24 hearing aid users. Each participant was asked to rate the parameters of speech clarity, transient noise loudness, and overall impression for speech stimuli under the algorithm-on and algorithm-off conditions. During the evaluation, three types of stimuli were used: transient noises, speech, and background noises. The transient noises included “knife on a ceramic board,” “mug on a tabletop,” “office door slamming,” “car door slamming,” and “pen tapping on countertop.” The speech sentences used for the test were presented by a male speaker in Mandarin. The background noises included “party noise” and “traffic noise.” All of these sounds were combined into five listening situations: (1) speech only, (2) transient noise only, (3) speech and transient noise, (4) background noise and transient noise, and (5) speech, background noise, and transient noise. Results: There was no significant difference in the ratings of speech clarity between the algorithm-on and algorithm-off conditions (t-test, p = 0.103). Further analysis revealed that speech clarity was significantly better at 70 dB SPL than at 55 dB SPL (p < 0.001). For transient noise loudness: under the algorithm-off condition, the percentages of subjects rating the transient noise to be somewhat soft, appropriate, somewhat loud, and too loud were 0.2, 47.1, 29.6, and 23.1%, respectively. The corresponding percentages under the algorithm-on condition were 3.0, 72.6, 22.9, and 1.4%, respectively. A significant difference in the ratings of transient noise loudness was found between the algorithm-on and algorithm-off conditions (t-test, p < 0.001). For overall impression of speech stimuli: under the algorithm-off condition, the percentages of subjects rating the algorithm as not helpful at all, somewhat helpful, helpful, and very helpful for speech stimuli were 36.5, 20.8, 33.9, and 8.9%, respectively. Under the algorithm-on condition, the corresponding percentages were 35.0, 19.3, 30.7, and 15.0%, respectively. Statistical analysis revealed a significant difference in the ratings of overall impression of speech stimuli: ratings under the algorithm-on condition indicated significantly more help for speech understanding than ratings under the algorithm-off condition (t-test, p < 0.001). Conclusions: The transient noise reduction strategy appropriately controlled the loudness of most of the transient noises and did not affect sound quality, which could be beneficial to hearing aid wearers.
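
The abstract does not describe how the evaluated strategy detects transients, so the following is only a generic, hedged illustration of the idea (flag short frames whose level jumps well above a running ambient estimate and cap their gain); the frame size and threshold are arbitrary assumptions, not the tested algorithm.

```python
import numpy as np


def limit_transients(signal: np.ndarray, fs: int,
                     frame_ms: float = 2.0,
                     jump_db: float = 12.0) -> np.ndarray:
    """Generic transient limiter: attenuate any short frame whose RMS level
    exceeds a slowly updated ambient estimate by more than `jump_db`.
    Illustrative only; not the algorithm evaluated in the study."""
    frame_len = max(1, int(fs * frame_ms / 1000))
    out = signal.astype(float).copy()
    ambient_db = 20 * np.log10(np.sqrt(np.mean(out ** 2)) + 1e-12)
    for start in range(0, len(out), frame_len):
        frame = out[start:start + frame_len]
        frame_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
        if frame_db > ambient_db + jump_db:
            # Scale the frame down so it sits only `jump_db` above ambient.
            out[start:start + frame_len] *= 10 ** ((ambient_db + jump_db - frame_db) / 20)
        else:
            # Update the ambient estimate slowly from non-transient frames.
            ambient_db = 0.95 * ambient_db + 0.05 * frame_db
    return out


# Example: 1 s of low-level noise with a loud 2 ms click at 0.5 s.
fs = 16_000
x = 0.01 * np.random.default_rng(0).standard_normal(fs)
x[fs // 2:fs // 2 + 32] += 0.9
y = limit_transients(x, fs)
print(f"click peak before: {np.abs(x).max():.2f}, after: {np.abs(y).max():.2f}")
```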


2012 · Vol 23 (03) · pp. 171-181
Author(s): Rachel A. McArdle, Mead Killion, Monica A. Mennite, Theresa H. Chisolm

Background: The decision to fit one or two hearing aids in individuals with binaural hearing loss has been debated for years. Although some 78% of U.S. hearing aid fittings are binaural (Kochkin, 2010), Walden and Walden (2005) presented data showing that 82% (23 of 28 patients) of their sample obtained significantly better speech recognition in noise scores when wearing one hearing aid as opposed to two. Purpose: To conduct two new experiments to fuel the monaural/binaural debate. The first experiment was a replication of Walden and Walden (2005), whereas the second experiment examined the use of binaural cues to improve speech recognition in noise. Research Design: A repeated measures experimental design. Study Sample: Twenty veterans (aged 59–85 yr) with mild to moderately severe binaurally symmetrical hearing loss who wore binaural hearing aids were recruited from the Audiology Department at the Bay Pines VA Healthcare System. Data Collection and Analysis: Experiment 1 followed the procedures of the Walden and Walden study, where signal-to-noise ratio (SNR) loss was measured using the Quick Speech-in-Noise (QuickSIN) test on participants who were aided with their current hearing aids. Signal and noise were presented in the sound booth at 0° azimuth under five test conditions: (1) right ear aided, (2) left ear aided, (3) both ears aided, (4) right ear aided, left ear plugged, and (5) unaided. The opposite ear in conditions (1) and (2) was left open. In Experiment 2, binaural recordings made with a Knowles Electronics Manikin for Acoustic Research (KEMAR) in Lou Malnati's pizza restaurant during a busy period provided typical real-world noise, while prerecorded target sentences were presented through a small loudspeaker located in front of the KEMAR. Subjects listened to the resulting binaural recordings through insert earphones under the following four conditions: (1) binaural, (2) diotic, (3) monaural left, and (4) monaural right. Results: Results of repeated measures ANOVAs demonstrated that the best speech recognition in noise performance was obtained by most participants with both ears aided in Experiment 1 and in the binaural condition in Experiment 2. Conclusions: In both experiments, only 20% of our subjects did better in noise with a single ear, roughly similar to the earlier Jerger et al. (1993) finding that 8–10% of elderly hearing aid users preferred one hearing aid.
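
For reference, QuickSIN SNR loss is conventionally computed from the number of key words repeated correctly over a six-sentence list (five key words per sentence, presented from +25 down to 0 dB SNR in 5-dB steps), using the scoring rule SNR loss = 25.5 - total key words correct. A small sketch under that assumption:

```python
def quicksin_snr_loss(words_correct: int) -> float:
    """Commonly cited QuickSIN scoring rule: one list has 6 sentences with
    5 key words each (30 total); SNR loss = 25.5 - total key words correct."""
    if not 0 <= words_correct <= 30:
        raise ValueError("QuickSIN lists score 0-30 key words")
    return 25.5 - words_correct


# Example: a listener repeating 20/30 key words has a 5.5 dB SNR loss,
# i.e., they need a 5.5 dB better SNR than normal-hearing listeners.
print(quicksin_snr_loss(20))
```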


2017 · Vol 28 (09) · pp. 799-809
Author(s): Meredith Spratford, Hannah Hodson McLean, Ryan McCreery

Abstract Background: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language. Purpose: To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimulus context (presented in isolation versus embedded medially within a sentence that has low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH and 8-kHz low-pass filtered for CHH). Research Design: A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH. Study Sample: Thirty-five children, aged 5–12 yr, were recruited to participate in the study; 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English. Data Collection and Analysis: Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady-state, speech-shaped noise. Real-ear probe microphone measures of HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence-embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH. Results: When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition compared with the sentence-embedded condition. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition compared with the isolated condition. CHH whose HAs had greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences. Conclusions: High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children’s use of high-frequency audibility in a manner that approximates how they learn language.
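
The 4- and 8-kHz bandwidth conditions imply low-pass filtering of the stimuli. The study's filter design is not specified in this abstract, so the sketch below is only illustrative; the Butterworth order, zero-phase filtering, and the synthetic placeholder stimulus are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt


def lowpass(signal: np.ndarray, fs: int, cutoff_hz: float, order: int = 8) -> np.ndarray:
    """Zero-phase Butterworth low-pass filter (the filter design is an assumption)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)


# Placeholder "stimulus": 1 s of noise standing in for a recorded word+morpheme token.
fs = 44_100
stimulus = np.random.default_rng(0).standard_normal(fs)

lp4k = lowpass(stimulus, fs, 4_000.0)   # restricted-bandwidth condition
lp8k = lowpass(stimulus, fs, 8_000.0)   # wider-bandwidth condition
```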


2012 · Vol 23 (01) · pp. 064-073
Author(s): Francis Kuk, Denise Keenan

Background: Directional microphones have been shown to improve a listener's ability to communicate in noise by improving the signal-to-noise ratio. However, their efficacy may be questioned in situations where the listener needs to understand speech originating from the back. Purpose: The goal of the study was to examine the performance of a directional microphone mode that has an automatic reverse cardioid polar pattern. Research Design: A single-blinded, factorial repeated-measures design was used to study the effect of microphone modes (reverse cardioid, omnidirectional, and front hypercardioid) and stimulus azimuths (front and back) on three outcome variables (aided thresholds, nonsense syllable identification in quiet, and sentence recognition in noise). Study Sample: Twenty adults with a mild-to-severe bilaterally symmetrical (±5 dB) sensorineural hearing loss participated. Intervention: Audibility in quiet was evaluated by obtaining aided sound field thresholds and speech identification at an input level of 50 dB SPL presented at 0 and 180° azimuths. In addition, speech understanding in noise was assessed with Hearing In Noise Test (HINT) sentences presented at both azimuths (0 and 180°) in a diffuse noise. Data Collection and Analysis: Repeated-measures analyses of variance (ANOVAs) were conducted to examine the effects of microphone mode (omnidirectional, front hypercardioid, reverse cardioid) and stimulus azimuth (0°, 180°) on aided thresholds, nonsense syllable identification, and HINT performance. Results: Results with the reverse cardioid directional microphone in both quiet conditions were similar to those with the omnidirectional microphone. Results with the reverse cardioid microphone in noise were significantly better than those with the omnidirectional and front hypercardioid microphones when speech was presented from the back (p < 0.001). Conclusions: These results support the possible benefits of a reverse cardioid directional microphone when used in specific listening situations.
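
For reference, first-order microphone patterns can be written as R(theta) = a + (1 - a)cos(theta): a cardioid uses a = 0.5 (null at 180°), a hypercardioid roughly a = 0.25, and a reverse cardioid is a cardioid aimed backwards (null at 0°). The sketch below evaluates these textbook patterns only; it is not the manufacturer's implementation.

```python
import math


def first_order_pattern(theta_deg: float, a: float) -> float:
    """Sensitivity of a first-order pattern R(theta) = a + (1 - a) * cos(theta)."""
    return a + (1.0 - a) * math.cos(math.radians(theta_deg))


def reverse_cardioid(theta_deg: float) -> float:
    """Cardioid aimed at 180 degrees: full sensitivity behind, null in front."""
    return first_order_pattern(180.0 - theta_deg, a=0.5)


for theta in (0.0, 90.0, 180.0):
    print(f"{theta:5.0f} deg  omni={first_order_pattern(theta, 1.0):.2f}  "
          f"hypercardioid={first_order_pattern(theta, 0.25):+.2f}  "
          f"reverse cardioid={reverse_cardioid(theta):.2f}")
```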


2011 · Vol 22 (05) · pp. 265-273
Author(s): Francis Kuk, Heidi Peeters, Chi Lau, Petri Korhonen

Background: The maximum power output (MPO) of a hearing aid has typically been discussed in the context of avoiding loudness discomfort. However, an MPO that is too low, as when it is lowered to avoid discomfort for listeners with a severe loudness tolerance problem or when the hearing loss exceeds the fitting range of the hearing aids, could negatively affect sound quality and speech intelligibility in noise. Purpose: The current study was designed to demonstrate the degradation in speech intelligibility in noise on the HINT (Hearing in Noise Test) when the MPO of the wearers' hearing aids was lowered by 10 dB from the default. The interactions with noise reduction (NR) algorithms (classic [NR-classic] and Speech Enhancer [NR-SE]) were also examined. Research Design: A single-blinded, factorial repeated-measures design was used to study the effect of noise input level (68 dBC, 75 dBC), MPO setting (default and default-10), and NR algorithm (off, classic, SE) on HINT performance. Study Sample: Eleven adults with a severe sensorineural hearing loss participated. Intervention: Participants were fit binaurally with the Widex m4-19 behind-the-ear hearing aids at the default frequency response and MPO settings. The hearing aids were then adjusted to the six MPO (default, default-10) by NR (off, classic, SE) conditions. Testing was completed within one 2-hour session. Data Collection and Analysis: The RTS (reception threshold for speech) for 50% correct on the HINT was measured in each of the six hearing aid conditions at two input levels (68 and 75 dBC) with speech and noise stimuli presented from the front. Repeated-measures ANOVAs were conducted using SPSS software to examine significant differences. Results: A repeated-measures ANOVA showed that noise level was not significant, while NR algorithm and MPO were significant. The interaction between noise level and NR algorithm was also significant. Post hoc analysis with Bonferroni adjustment for the effect of NR algorithm showed that performance with NR-off was significantly poorer than performance with NR-classic and NR-SE (p < 0.05). However, NR-classic and NR-SE were not significantly different from each other (p > 0.05). Conclusions: An MPO that was 10 dB lower than the default could negatively affect the signal-to-noise ratio (SNR) of the listening environment. However, NR could compensate for the degradation in SNR.
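
The statistical analysis here is a repeated-measures ANOVA over the MPO (default, default-10) by NR (off, classic, SE) conditions. As a rough sketch of how such an analysis could be set up in Python with statsmodels' AnovaRM (the study itself used SPSS; the data values below are hypothetical):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one RTS (dB SNR) per subject per condition.
rng = np.random.default_rng(0)
base_rts = {("default", "off"): -1.0, ("default", "classic"): -2.5, ("default", "se"): -2.7,
            ("minus10", "off"): 1.0, ("minus10", "classic"): -0.5, ("minus10", "se"): -0.8}
rows = []
for subject in [f"s{i}" for i in range(1, 12)]:        # 11 made-up subjects
    for (mpo, nr), rts in base_rts.items():
        rows.append({"subject": subject, "mpo": mpo, "nr": nr,
                     "rts": rts + rng.normal(scale=0.8)})
data = pd.DataFrame(rows)

# Two within-subject factors: MPO setting (default vs. default-10) and
# NR algorithm (off, classic, SE).
print(AnovaRM(data, depvar="rts", subject="subject", within=["mpo", "nr"]).fit())
```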


2005 · Vol 16 (09) · pp. 662-676
Author(s): Brian E. Walden, Rauna K. Surr, Kenneth W. Grant, W. Van Summers, Mary T. Cord, et al.

This study examined speech intelligibility and preferences for omnidirectional and directional microphone hearing aid processing across a range of signal-to-noise ratios (SNRs). A primary motivation for the study was to determine whether SNR might be used to represent distance between talker and listener in automatic directionality algorithms based on scene analysis. Participants were current hearing aid users who either had experience with omnidirectional microphone hearing aids only or with manually switchable omnidirectional/directional hearing aids. Using IEEE/Harvard sentences from a front loudspeaker and speech-shaped noise from three loudspeakers located behind and to the sides of the listener, the directional advantage (DA) was obtained at 11 SNRs ranging from -15 dB to +15 dB in 3 dB steps. Preferences for the two microphone modes at each of the 11 SNRs were also obtained using concatenated IEEE sentences presented in the speech-shaped noise. Results revealed that a DA was observed across a broad range of SNRs, although directional processing provided the greatest benefit within a narrower range of SNRs. Mean data suggested that microphone preferences were determined largely by the DA, such that the greater the benefit to speech intelligibility provided by the directional microphones, the more likely the listeners were to prefer that processing mode. However, inspection of the individual data revealed that highly predictive relationships did not exist for most individual participants. Few preferences for omnidirectional processing were observed. Overall, the results did not support the use of SNR to estimate the effects of distance between talker and listener in automatic directionality algorithms.
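
The directional advantage (DA) at each SNR is simply the directional-mode score minus the omnidirectional-mode score at that SNR. A minimal sketch over the 11 test SNRs, with hypothetical percent-correct values:

```python
snrs_db = list(range(-15, 16, 3))                      # 11 SNRs, -15..+15 dB in 3-dB steps

# Hypothetical percent-correct scores at each SNR.
omni_scores = [2, 5, 12, 25, 42, 60, 75, 86, 93, 97, 99]
directional_scores = [4, 10, 24, 45, 66, 82, 91, 96, 98, 99, 100]

directional_advantage = [d - o for d, o in zip(directional_scores, omni_scores)]
for snr, da in zip(snrs_db, directional_advantage):
    print(f"SNR {snr:+3d} dB: DA = {da:+3d} percentage points")

# In this made-up example the benefit peaks at intermediate SNRs and shrinks
# at the extremes, where both microphone modes are near floor or ceiling.
```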


2016 · Vol 27 (01) · pp. 029-041
Author(s): Jamie L. Desjardins

Background: Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. Purpose: The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Research Design: Participants were fitted with commercially available behind-the-ear study hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct, with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Study Sample: Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Results: Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self-reported ratings of listening effort showed no significant relationship. Conclusions: Directional microphone processing effectively reduced the cognitive load of listening to speech in background noise. This is significant because it is likely that listeners with hearing impairment will frequently encounter noisy speech in their everyday communications.
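
In a dual-task paradigm like this one, listening effort is commonly indexed by how much secondary-task (here, rotor-tracking) performance drops when it is performed concurrently with the listening task, for example as a proportional dual-task cost. A small sketch of that computation with hypothetical tracking scores (the study's exact effort metric is not detailed in this abstract):

```python
def dual_task_cost(single_task_score: float, dual_task_score: float) -> float:
    """Proportional decrement on the secondary task when performed concurrently
    with the listening task; larger values suggest more cognitive resources
    were diverted to listening (greater effort)."""
    return (single_task_score - dual_task_score) / single_task_score


# Hypothetical rotor-tracking accuracy (proportion of time on target).
baseline = 0.90            # tracking alone
with_omni = 0.60           # tracking while repeating sentences, omnidirectional mic
with_directional = 0.72    # same task with the directional microphone active

print(f"Cost, omnidirectional: {dual_task_cost(baseline, with_omni):.2f}")
print(f"Cost, directional:     {dual_task_cost(baseline, with_directional):.2f}")
```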


Author(s): Francis Kuk, Christopher Slugocki, Petri Korhonen

Abstract Background The effect of context on speech processing has been studied using different speech materials and response criteria. The Repeat-Recall Test (RRT) evaluates listener performance using high context (HC) and low context (LC) sentences; this may offer another platform for studying context use (CU). Objective This article aims to evaluate if the RRT may be used to study how different signal-to-noise ratios (SNRs), hearing aid technologies (directional microphone and noise reduction), and listener working memory capacities (WMCs) interact to affect CU on the different measures of the RRT. Design Double-blind, within-subject repeated measures design. Study Sample Nineteen listeners with a mild-to-moderately severe hearing loss. Data Collection The RRT was administered with participants wearing the study hearing aids under two microphone (omnidirectional vs. directional) by two noise reduction (on vs. off) conditions. Speech was presented from 0 degree at 75 dB sound pressure level and a continuous speech-shaped noise from 180 degrees at SNRs of 0, 5, 10, and 15 dB. The order of SNR and hearing aid conditions was counterbalanced across listeners. Each test condition was completed twice in two 2-hour sessions separated by 1 month. Results CU was calculated as the difference between HC and LC sentence scores for each outcome measure (i.e., repeat, recall, listening effort, and tolerable time). For all outcome measures, repeated measures analyses of variance revealed that CU was significantly affected by the SNR of the test conditions. For repeat, recall, and listening effort measures, these effects were qualified by significant two-way interactions between SNR and microphone mode. In addition, the WMC group significantly affected CU during recall and rating of listening effort, the latter of which was qualified by an interaction between the WMC group and SNR. Listener WMC affected CU on estimates of tolerable time as qualified by significant two-way interactions between SNR and microphone mode. Conclusion The study supports use of the RRT as a tool for measuring how listeners use sentence context to aid in speech processing. The degree to which context influenced scores on each outcome measure of the RRT was found to depend on complex interactions between the SNR of the listening environment, hearing aid features, and the WMC of the listeners.
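
Context use (CU) as defined here is the high-context score minus the low-context score for a given RRT outcome measure. A minimal sketch with hypothetical scores:

```python
def context_use(high_context_score: float, low_context_score: float) -> float:
    """CU = performance with supportive sentence context minus performance
    without it; larger values mean the listener gained more from context."""
    return high_context_score - low_context_score


# Hypothetical percent-correct scores at one SNR/hearing aid condition.
scores = {"repeat": (92.0, 78.0), "recall": (55.0, 40.0)}
for measure, (hc, lc) in scores.items():
    print(f"{measure}: CU = {context_use(hc, lc):.1f} percentage points")
```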


2020 · Vol 31 (04) · pp. 262-270
Author(s): Francis Kuk, Christopher Slugocki, Petri Korhonen

Abstract Background Many studies on the efficacy of directional microphones (DIRMs) and noise-reduction (NR) algorithms were not conducted under realistic signal-to-noise ratio (SNR) conditions. A Repeat-Recall Test (RRT) was developed previously to partially address this issue. Purpose This study evaluated whether the RRT could provide a more comprehensive understanding of the efficacy of a DIRM and NR algorithm under realistic SNRs. Possible interaction with listener working memory capacity (WMC) was assessed. Research Design This study uses a double-blind, within-subject repeated measures design. Study Sample Nineteen listeners with a moderate degree of hearing loss participated. Data Collection and Analysis The RRT was administered with participants wearing the study hearing aids (HAs) under two microphones (omnidirectional versus directional) by two NR (on versus off) conditions. Speech was presented from 0° at 75 dB SPL and a continuous noise from 180° at SNRs of 0, 5, 10, and 15 dB. The order of SNR and HA conditions was counterbalanced across listeners. Each test condition was completed twice in two 2-hour sessions separated by one month. Results The recall scores of listeners were used to group listeners into good and poor WMC groups. Analysis using linear mixed-effects models revealed significant effects of context, SNR, and microphone for all four measures (repeat, recall, listening effort, and tolerable time). NR was only significant on the listening effort scale in the DIRM mode at an SNR of 5 dB. Listeners with good WMC performed better on all measures of the RRT and benefitted more from context. Although DIRM benefitted listeners with good and poor WMC, the benefits differed by context and SNR. Conclusions The RRT confirmed the efficacy of DIRM and NR on several outcome measures under realistic SNRs. It also highlighted interactions between WMC and sentence context on feature efficacy.
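
The linear mixed-effects analysis could be sketched as fixed effects of context, SNR, and microphone with a random intercept per listener. The Python/statsmodels example below is only an assumed illustration; the study's actual software, model terms, and data are not given in this abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one repeat score per listener per
# context (high/low) x SNR (0/5/10/15 dB) x microphone (omni/dir) cell.
rng = np.random.default_rng(1)
rows = []
for listener in range(1, 20):
    for context in ("high", "low"):
        for snr in (0, 5, 10, 15):
            for mic in ("omni", "dir"):
                score = (40 + 2.5 * snr + (15 if context == "high" else 0)
                         + (8 if mic == "dir" else 0) + rng.normal(scale=6))
                rows.append({"listener": listener, "context": context,
                             "snr": snr, "mic": mic,
                             "repeat": min(score, 100.0)})
data = pd.DataFrame(rows)

# Fixed effects of context, SNR, and microphone (with interactions);
# random intercept per listener.
model = smf.mixedlm("repeat ~ C(context) * C(snr) * C(mic)",
                    data, groups=data["listener"])
print(model.fit().summary())
```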

