A Proposed Electroacoustic Test Protocol for Personal FM Receivers Coupled to Cochlear Implant Sound Processors

2013 ◽  
Vol 24 (10) ◽  
pp. 941-954 ◽  
Author(s):  
Erin C. Schafer ◽  
Elizabeth Musgrave ◽  
Sadaf Momin ◽  
Carl Sandrock ◽  
Denise Romine

Background: Current fitting guidelines from the American Academy of Audiology (Academy) support the use of objective electroacoustic measures and behavioral testing when fitting frequency modulation (FM) systems to hearing aids. However, only behavioral testing is recommended when fitting FM systems to individuals with cochlear implants (CIs) because a protocol for conducting electroacoustic measures has yet to be developed for this population. Purpose: The purpose of this study was to propose and examine the validity of a newly developed, objective, electroacoustic test protocol for fitting electrically and electromagnetically coupled FM systems to CI sound processors. Research Design: Electroacoustic measures were conducted and replicated in the laboratory with three contemporary CI sound processors and several FM system combinations. A repeated measures design was used with four participants to examine the validity of the proposed electroacoustic test protocol. Study Sample: Three contemporary CI sound processors were tested electroacoustically in the laboratory while coupled to combinations of five FM receivers and four FM transmitters. Two adolescents using Cochlear Nucleus 5 sound processors and two adult participants using MED-EL OPUS 2 sound processors completed behavioral and subjective measures. Data Collection and Analysis: Using current hearing aid practice guidelines from the Academy, electroacoustic measurements were conducted in the laboratory with the CIs and FM systems to determine transparency, where equivalent inputs to the CI and FM microphones result in equivalent outputs. Using a hearing aid analyzer, acoustic output from the CI sound processor was measured via monitor earphones and specialized equipment from CI manufacturers with 65 dB SPL speech inputs (1) to the sound processor and (2) to the FM transmitter microphones. The FM gain or volume was adjusted to attempt to achieve transparency for outputs from the two input devices. The four participants completed some or all of the following measures: speech recognition in noise without and with two FM systems in a classroom, loudness ratings measured in a quiet classroom condition without and with two FM systems, and questionnaires. Results: Transparency was achieved for most CI and FM combinations, but most systems required adjustments to FM gain or volume relative to the manufacturer default setting. Despite adjustments to the systems, transparency was not attainable for some FM receiver and transmitter combinations. Behavioral testing in four participants provided preliminary support for the proposed electroacoustic test protocol. Conclusions: Valid and reliable electroacoustic test measures may be feasible with CIs coupled to FM systems using specialized equipment from the CI manufacturer. Advances in the equipment available for electroacoustic testing with these devices, as well as additional research, will lend further support to this objective approach to fitting FM systems to CIs.
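
The transparency criterion at the core of this protocol (equal 65 dB SPL speech inputs to the CI microphone and to the FM transmitter microphone should yield approximately equal outputs) can be summarized in a short sketch. The per-frequency output values and the ±2 dB average-difference tolerance below are illustrative assumptions, not figures taken from the protocol.

```python
# Minimal sketch of a transparency check for an FM receiver coupled to a CI
# sound processor. The values and the tolerance are illustrative assumptions,
# not figures from the published protocol.

TOLERANCE_DB = 2.0  # assumed allowable average difference, in dB

def average_difference(processor_output_db, fm_output_db):
    """Average (FM minus processor-mic) output difference across frequencies.

    Both inputs map frequency (Hz) to measured output (dB SPL) for the same
    65 dB SPL speech input delivered to each microphone.
    """
    freqs = sorted(set(processor_output_db) & set(fm_output_db))
    diffs = [fm_output_db[f] - processor_output_db[f] for f in freqs]
    return sum(diffs) / len(diffs)

def is_transparent(processor_output_db, fm_output_db, tolerance=TOLERANCE_DB):
    """Transparency: equal inputs yield (approximately) equal outputs."""
    return abs(average_difference(processor_output_db, fm_output_db)) <= tolerance

if __name__ == "__main__":
    # Hypothetical measurements (dB SPL) at three frequencies.
    ci_mic = {750: 82.0, 1000: 84.5, 2000: 80.0}
    fm_mic = {750: 83.5, 1000: 85.0, 2000: 82.5}
    diff = average_difference(ci_mic, fm_mic)
    print(f"Average FM-minus-mic difference: {diff:+.1f} dB, "
          f"transparent: {is_transparent(ci_mic, fm_mic)}")
    # If the difference is positive and outside tolerance, reduce FM gain;
    # if negative, increase FM gain, then re-measure.
```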

2015 ◽  
Vol 26 (05) ◽  
pp. 478-493 ◽  
Author(s):  
Francis Kuk ◽  
Eric Seper ◽  
Chi Lau ◽  
Bryan Crose ◽  
Petri Korhonen

Background: Bilateral contralateral routing of signals (BiCROS) hearing aids function to restore audibility of sounds originating from the side of the unaidable ear. However, when speech is presented to the side of the aidable ear and noise to the side of the unaidable ear, a BiCROS arrangement may reduce intelligibility of the speech signal. This negative effect may be circumvented if an on/off switch is available on the contralateral routing of signals (CROS) transmitter. Purpose: This study evaluated whether the proper use of the on/off switch on a CROS transmitter could enhance speech recognition in noise and sound localization abilities. The participants' subjective reactions to the use of the BiCROS, including the use of the on/off switch in real life, were also evaluated. Research Design: A within-subjects, repeated-measures design was used to assess differences in speech recognition (in quiet and in noise) and localization abilities under four hearing aid conditions (unaided, unilaterally aided, fixed BiCROS setting, and adjusted BiCROS setting) with speech and noise stimuli presented from different azimuths. Participants were trained on the use of the on/off switch on the BiCROS transmitter before testing in the adjusted BiCROS settings. Subjective ratings were obtained with the Speech, Spatial, and Sound Quality (SSQ) questionnaire and a custom questionnaire. Study Sample: Nine adult BiCROS candidates participated in this study. Data Collection and Analysis: Participants wore the Widex Dream-m-CB hearing aid on the aidable ear for 1 week. They then wore the BiCROS for the remainder of the study. Speech recognition and localization testing were completed in four hearing aid conditions (unaided, unilaterally aided, fixed BiCROS, and adjusted BiCROS). Speech recognition was evaluated during the first three visits, whereas localization was evaluated over the course of the study. Participants completed the SSQ questionnaire before each visit. The CROS questionnaire was completed at the final visit. A repeated-measures analysis of variance with Bonferroni post hoc analysis was used to evaluate the significance of the results on speech recognition, localization, and the SSQ. Results: The results revealed that, relative to the fixed BiCROS condition, the adjusted BiCROS condition improved speech recognition scores by 20 rau (rationalized arcsine units) when speech was presented to the aidable ear and improved localization by 37% when sounds were presented from the side of the unaidable ear. Statistically significant benefit on the SSQ was also noted with the adjusted BiCROS condition compared with the unilateral fitting. Conclusions: These findings support the value of an on/off switch on a CROS transmitter because it allows convenient, selective transmission of sounds. They also highlight the importance of instruction and practice in using the BiCROS hearing aid successfully.
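
The improvement above is reported in rationalized arcsine units (rau), a variance-stabilizing transform of percent-correct scores commonly attributed to Studebaker (1985). A minimal sketch of that transform follows; the example scores are hypothetical.

```python
import math

def rationalized_arcsine_unit(correct, total):
    """Rationalized arcsine transform (Studebaker, 1985) of a proportion score.

    correct: number of items correct; total: number of items presented.
    Returns the score in rationalized arcsine units (rau).
    """
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))  # radians
    return (146.0 / math.pi) * theta - 23.0

if __name__ == "__main__":
    # Hypothetical scores: 30/50 vs. 40/50 key words correct.
    before = rationalized_arcsine_unit(30, 50)
    after = rationalized_arcsine_unit(40, 50)
    print(f"{before:.1f} rau -> {after:.1f} rau "
          f"(difference {after - before:.1f} rau)")
```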


2015 ◽  
Vol 26 (08) ◽  
pp. 724-731 ◽  
Author(s):  
Krishna S. Rodemerk ◽  
Jason A. Galster

Background: Many studies have reported the speech recognition benefits of a personal remote microphone system when used by adult listeners with hearing loss. Advances in wireless technology have made many wireless audio transmission protocols available, some of which interface with commercially available hearing aids. As a result, commercial remote microphone systems use a variety of different protocols for wireless audio transmission. It is not known how these systems compare with regard to adult speech recognition in noise. Purpose: The primary goal of this investigation was to determine the speech recognition benefits of four different commercially available remote microphone systems, each with a different wireless audio transmission protocol. Research Design: A repeated-measures design was used in this study. Study Sample: Sixteen adults, ages 52 to 81 yr, with mild to severe sensorineural hearing loss participated in this study. Intervention: Participants were fit with three different sets of bilateral hearing aids and four commercially available remote microphone systems (FM, 900 MHz, 2.4 GHz, and Bluetooth® paired with near-field magnetic induction). Data Collection and Analysis: Speech recognition was measured with an adaptive version of the Hearing in Noise Test (HINT). The participants were seated both 6 and 12′ away from the talker loudspeaker. Participants repeated HINT sentences with and without hearing aids and with the four commercially available remote microphone systems in both seated positions, with and without contributions from the hearing aid or environmental microphone (24 total conditions). The HINT SNR-50, or the signal-to-noise ratio required for correct repetition of 50% of the sentences, was recorded for all conditions. A one-way repeated-measures analysis of variance was used to determine the statistical significance of microphone condition. Results: The results of this study revealed that use of the remote microphone systems significantly improved speech recognition in noise relative to the unaided and hearing aid-only conditions across all four wireless transmission protocols at both 6 and 12′ from the talker. Conclusions: Participants showed a significant improvement in speech recognition in noise when the four remote microphone systems with different wireless transmission methods were compared with hearing aids alone.
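
The HINT SNR-50 reported above is obtained adaptively: the sentence level is lowered after a correct repetition and raised after an error until performance converges near 50% correct. The sketch below is a simplified one-up/one-down track under assumed step sizes and a simulated listener, not the exact published HINT procedure.

```python
# Simplified sketch of an adaptive SNR-50 track in the spirit of the HINT:
# the sentence level moves down after a correct repetition and up after an
# error, converging near the SNR at which 50% of sentences are repeated
# correctly. Step size, trial count, and the simulated listener are
# illustrative assumptions, not the published HINT procedure.
import math
import random

def adaptive_snr50(score_sentence, n_sentences=20, start_snr_db=0.0, step_db=2.0):
    """score_sentence(index, snr_db) -> True if the sentence was repeated correctly."""
    snr = start_snr_db
    presented = []
    for i in range(n_sentences):
        presented.append(snr)
        snr += -step_db if score_sentence(i, snr) else step_db
    presented.append(snr)   # level that would have been presented next
    tail = presented[4:]    # discard the first few trials before averaging
    return sum(tail) / len(tail)

if __name__ == "__main__":
    random.seed(1)
    true_srt_db = -4.0  # hypothetical listener whose true SNR-50 is -4 dB

    def simulated_listener(_i, snr_db):
        # Logistic psychometric function centered on the true SNR-50 (illustrative).
        p_correct = 1.0 / (1.0 + math.exp(-(snr_db - true_srt_db)))
        return random.random() < p_correct

    print(f"Estimated SNR-50: {adaptive_snr50(simulated_listener):+.1f} dB")
```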


2012 ◽  
Vol 23 (03) ◽  
pp. 171-181 ◽  
Author(s):  
Rachel A. McArdle ◽  
Mead Killion ◽  
Monica A. Mennite ◽  
Theresa H. Chisolm

Background: The decision to fit one or two hearing aids in individuals with binaural hearing loss has been debated for years. Although some 78% of U.S. hearing aid fittings are binaural (Kochkin, 2010), Walden and Walden (2005) presented data showing that 82% (23 of 28 patients) of their sample obtained significantly better speech recognition in noise scores when wearing one hearing aid as opposed to two. Purpose: To conduct two new experiments to fuel the monaural/binaural debate. The first experiment was a replication of Walden and Walden (2005), whereas the second experiment examined the use of binaural cues to improve speech recognition in noise. Research Design: A repeated-measures experimental design. Study Sample: Twenty veterans (aged 59–85 yr) with mild to moderately severe, binaurally symmetrical hearing loss who wore binaural hearing aids were recruited from the Audiology Department at the Bay Pines VA Healthcare System. Data Collection and Analysis: Experiment 1 followed the procedures of the Walden and Walden study, in which signal-to-noise ratio (SNR) loss was measured using the Quick Speech-in-Noise (QuickSIN) test with participants aided by their current hearing aids. Signal and noise were presented in the sound booth at 0° azimuth under five test conditions: (1) right ear aided, (2) left ear aided, (3) both ears aided, (4) right ear aided, left ear plugged, and (5) unaided. The opposite ear in (1) and (2) was left open. In Experiment 2, binaural recordings made with a Knowles Electronics Manikin for Acoustic Research (KEMAR) in Lou Malnati's pizza restaurant during a busy period provided typical real-world noise, while prerecorded target sentences were presented through a small loudspeaker located in front of the KEMAR. Subjects listened to the resulting binaural recordings through insert earphones under the following four conditions: (1) binaural, (2) diotic, (3) monaural left, and (4) monaural right. Results: Results of repeated-measures ANOVAs demonstrated that the best speech recognition in noise performance was obtained by most participants with both ears aided in Experiment 1 and in the binaural condition in Experiment 2. Conclusions: In both experiments, only 20% of our subjects did better in noise with a single ear, roughly consistent with the earlier Jerger et al (1993) finding that 8–10% of elderly hearing aid users preferred one hearing aid.
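
For readers unfamiliar with the QuickSIN metric used in Experiment 1, SNR loss is typically scored from the number of key words repeated correctly across a six-sentence list (five key words per sentence, with SNRs stepping from +25 to 0 dB). The sketch below shows the commonly cited scoring rule with a hypothetical score.

```python
def quicksin_snr_loss(total_key_words_correct):
    """SNR loss (dB) from one QuickSIN list (6 sentences x 5 key words).

    Uses the commonly cited scoring rule SNR loss = 25.5 - total correct,
    which already accounts for the ~2 dB SNR-50 of normal-hearing listeners.
    """
    if not 0 <= total_key_words_correct <= 30:
        raise ValueError("A QuickSIN list has at most 30 key words.")
    return 25.5 - total_key_words_correct

# Hypothetical example: 19 of 30 key words correct -> 6.5 dB SNR loss.
print(quicksin_snr_loss(19))
```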


2017 ◽  
Vol 28 (01) ◽  
pp. 046-057 ◽  
Author(s):  
Petri Korhonen ◽  
Francis Kuk ◽  
Eric Seper ◽  
Martin Mørkebjerg ◽  
Majken Roikjer

Background: Wind noise is a common problem reported by hearing aid wearers. The MarkeTrak VIII survey reported that 42% of hearing aid wearers are not satisfied with the performance of their hearing aids in situations where wind is present. Purpose: The current study investigated the effect of a new wind noise attenuation (WNA) algorithm on subjective annoyance and speech recognition in the presence of wind. Research Design: A single-blinded, repeated-measures design was used. Study Sample: Fifteen experienced hearing aid wearers with bilaterally symmetrical (≤10 dB) mild-to-moderate sensorineural hearing loss participated in the study. Data Collection and Analysis: Subjective ratings of wind noise annoyance were measured for wind presented alone from 0° and 290° at wind speeds of 4, 5, 6, 7, and 10 m/sec. Phoneme identification performance was measured using the Widex Office of Clinical Amplification Nonsense Syllable Test presented at 60, 65, 70, and 75 dB SPL from 270° in the presence of wind originating from 0° at a speed of 5 m/sec. Results: The subjective annoyance from wind noise was reduced for wind originating from 0° at wind speeds from 4 to 7 m/sec. The largest improvement in phoneme identification with the WNA algorithm was 48.2% when speech was presented from 270° at 65 dB SPL and the wind originated from 0° azimuth at 5 m/sec. Conclusions: The WNA algorithm used in this study reduced subjective annoyance for wind speeds ranging from 4 to 7 m/sec. The algorithm was effective in improving speech identification in the presence of wind originating from 0° at 5 m/sec. These results suggest that the WNA algorithm could expand the range of real-life situations in which a hearing-impaired person can use the hearing aid optimally.


2016 ◽  
Vol 25 (3) ◽  
pp. 161-166 ◽  
Author(s):  
Naomi B. H. Croghan ◽  
Anne M. Swanberg ◽  
Melinda C. Anderson ◽  
Kathryn H. Arehart

Purpose The objective of this study was to describe chosen listening levels (CLLs) for recorded music for listeners with hearing loss in aided and unaided conditions. Method The study used a within-subject, repeated-measures design with 13 adult hearing-aid users. The music included rock and classical samples with different amounts of audio-industry compression limiting. CLL measurements were taken at ear level (i.e., at input to the hearing aid) and at the tympanic membrane. Results For aided listening, average CLLs were 69.3 dBA at the input to the hearing aid and 80.3 dBA at the tympanic membrane. For unaided listening, average CLLs were 76.9 dBA at the entrance to the ear canal and 77.1 dBA at the tympanic membrane. Although wide intersubject variability was observed, CLLs were not associated with audiometric thresholds. CLLs for rock music were higher than for classical music at the tympanic membrane, but no differences were observed between genres for ear-level CLLs. The amount of audio-industry compression had no significant effect on CLLs. Conclusion By describing the levels of recorded music chosen by hearing-aid users, this study provides a basis for ecologically valid testing conditions in clinical and laboratory settings.


2017 ◽  
Vol 28 (07) ◽  
pp. 625-635
Author(s):  
Erika L. Nair ◽  
Rhonda Sousa ◽  
Shannon Wannagot

Background: Guidelines established by the American Academy of Audiology (AAA) currently recommend behavioral testing when fitting frequency modulated (FM) systems to individuals with cochlear implants (CIs). A protocol for completing electroacoustic measures has not yet been validated for personal FM systems or digital modulation (DM) systems coupled to CI sound processors. In response, some professionals have used or altered the AAA electroacoustic verification steps for fitting FM systems to hearing aids when fitting FM systems to CI sound processors. More recently, steps were outlined in a proposed protocol. Purpose: The purpose of this research is to review and compare the electroacoustic test measures outlined in a 2013 article by Schafer and colleagues in the Journal of the American Academy of Audiology, titled “A Proposed Electroacoustic Test Protocol for Personal FM Receivers Coupled to Cochlear Implant Sound Processors,” with the AAA electroacoustic verification steps for fitting FM systems to hearing aids, when fitting DM systems to CI users. Research Design: Electroacoustic measures were conducted on 71 CI sound processors and Phonak Roger DM systems using the proposed protocol and an adapted AAA protocol. Phonak's recommended default receiver gain setting was used for each CI sound processor manufacturer and adjusted if necessary to achieve transparency. Study Sample: Electroacoustic measures were conducted on Cochlear and Advanced Bionics (AB) sound processors. In this study, 28 Cochlear Nucleus 5/CP810 sound processors, 26 Cochlear Nucleus 6/CP910 sound processors, and 17 AB Naida CI Q70 sound processors were coupled in various combinations to Phonak Roger DM dedicated receivers (25 Phonak Roger 14 receivers, the Cochlear dedicated receiver, and 9 Phonak Roger 17 receivers, the AB dedicated receiver) and 20 Phonak Roger Inspiro transmitters. Data Collection and Analysis: Employing both the AAA and the Schafer et al protocols, electroacoustic measurements were conducted with the Audioscan Verifit in a clinical setting on the 71 CI sound processors and Phonak Roger DM systems to determine transparency and verify FM advantage, comparing equal speech inputs (65 dB SPL) in an effort to achieve equal outputs. If transparency was not achieved at Phonak's recommended default receiver gain, adjustments were made to the receiver gain. The integrity of the signal was monitored with the appropriate manufacturer's monitor earphones. Results: At Phonak's recommended default receiver gain, 50 of the 71 CI sound processors achieved transparency using the AAA hearing aid protocol, and 59 of the 71 achieved transparency using the proposed protocol. After the receiver gain was adjusted, 3 of the remaining 21 CI sound processors still did not meet transparency using the AAA protocol, and 2 of the remaining 12 still did not meet transparency using the Schafer et al proposed protocol. Conclusions: Both protocols were shown to be effective for taking reliable electroacoustic measurements and demonstrating transparency. Both protocols appear clinically feasible and address the needs of populations that are unable to reliably report on the integrity of their personal DM systems.
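
Both protocols share the same adjust-and-re-measure logic: if the outputs for equal 65 dB SPL speech inputs are not equivalent at the default receiver gain, the receiver gain is changed and the measurement repeated until transparency is reached or the gain range is exhausted. The sketch below captures that loop; the measurement function, one-step gain increments, gain limits, and tolerance are illustrative assumptions rather than values from either protocol.

```python
# Illustrative adjust-and-re-measure loop for setting receiver gain until the
# CI-plus-receiver output matches the CI-alone output for equal 65 dB SPL
# speech inputs. measure_outputs(), the gain step, the tolerance, and the gain
# limits are assumptions for this sketch, not values from either protocol.

def set_receiver_gain_for_transparency(measure_outputs, start_gain=0,
                                       min_gain=-8, max_gain=8,
                                       tolerance_db=2.0):
    """measure_outputs(gain) -> (ci_alone_db, ci_plus_receiver_db)."""
    gain = start_gain
    tried = set()
    while min_gain <= gain <= max_gain and gain not in tried:
        tried.add(gain)
        ci_alone, ci_plus_rx = measure_outputs(gain)
        diff = ci_plus_rx - ci_alone
        if abs(diff) <= tolerance_db:
            return gain, True                  # transparency achieved
        gain += -1 if diff > 0 else 1          # too loud: reduce gain; too soft: raise it
    return gain, False                         # transparency not attainable in range

if __name__ == "__main__":
    # Hypothetical bench behavior: each receiver-gain step changes output ~1.5 dB.
    def fake_bench(gain):
        return 85.0, 89.0 + 1.5 * gain
    print(set_receiver_gain_for_transparency(fake_bench))
```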


Author(s):  
Sharon Miller ◽  
Jace Wolfe ◽  
Mila Duke ◽  
Erin Schafer ◽  
Smita Agrawal ◽  
...  

Background: Cochlear implant (CI) recipients frequently experience difficulty understanding speech over the telephone and rely on hearing assistive technology (HAT) to improve performance. Bilateral inter-processor audio streaming using near-field magnetic induction is an advanced technology incorporated within a hearing aid or CI processor that can deliver telephone audio signals captured at one sound processor to the sound processor at the opposite ear. To date, limited data exist examining the efficacy of this technology for improving telephone speech understanding in CI users. Purpose: The primary objective of this study was to examine telephone speech recognition outcomes in bilateral CI recipients in a bilateral inter-processor audio streaming condition (DuoPhone) compared with a monaural condition (i.e., telephone listening with one sound processor) in quiet and in background noise. Outcomes in the monaural and bilateral conditions using either telecoil or T-Mic2 technology were also assessed. The secondary aim was to examine how deactivating the microphone input of the contralateral processor in the bilateral wireless streaming conditions, and thereby modifying the signal-to-noise ratio, affected speech recognition in noise. Research Design: A repeated-measures design was used to evaluate speech recognition performance in quiet and in competing noise, with the telephone signal delivered acoustically to the ipsilateral sound processor microphone or via the telecoil, in monaural and bilateral wireless streaming listening conditions. Study Sample: Nine bilateral CI users with Advanced Bionics HiRes 90K and/or CII devices were included in the study. Data Collection and Analysis: The effects of phone input (monaural [DuoPhone off] vs. bilateral [DuoPhone on]) and processor input (T-Mic2 vs. telecoil) on word recognition in quiet and in noise were assessed using separate repeated-measures analyses of variance. The effect of deactivating the contralateral device microphone on speech recognition outcomes in the T-Mic2 DuoPhone conditions was assessed using paired Student's t-tests. Results: Telephone speech recognition was significantly better in the bilateral inter-processor streaming conditions than in the monaural conditions in both quiet and noise. Speech recognition outcomes were similar for the T-Mic2 and telecoil inputs in quiet and in noise, in both the monaural and bilateral conditions. For the acoustic DuoPhone conditions using the T-Mic2, speech recognition in noise was significantly better when the microphone of the contralateral processor was disabled. Conclusion: Inter-processor audio streaming allows for bilateral listening on the telephone and produces better speech recognition in quiet and in noise than monaural listening for adult CI recipients.


Author(s):  
Francis Kuk ◽  
Christopher Slugocki ◽  
Petri Korhonen

Background: The effect of context on speech processing has been studied using different speech materials and response criteria. The Repeat-Recall Test (RRT) evaluates listener performance using high-context (HC) and low-context (LC) sentences; this may offer another platform for studying context use (CU). Objective: This article aims to evaluate whether the RRT may be used to study how different signal-to-noise ratios (SNRs), hearing aid technologies (directional microphone and noise reduction), and listener working memory capacities (WMCs) interact to affect CU on the different measures of the RRT. Design: Double-blind, within-subject, repeated-measures design. Study Sample: Nineteen listeners with mild-to-moderately severe hearing loss. Data Collection: The RRT was administered with participants wearing the study hearing aids under two microphone (omnidirectional vs. directional) by two noise reduction (on vs. off) conditions. Speech was presented from 0° at 75 dB sound pressure level, with a continuous speech-shaped noise from 180° at SNRs of 0, 5, 10, and 15 dB. The order of SNR and hearing aid conditions was counterbalanced across listeners. Each test condition was completed twice in two 2-hour sessions separated by 1 month. Results: CU was calculated as the difference between HC and LC sentence scores for each outcome measure (i.e., repeat, recall, listening effort, and tolerable time). For all outcome measures, repeated-measures analyses of variance revealed that CU was significantly affected by the SNR of the test conditions. For the repeat, recall, and listening effort measures, these effects were qualified by significant two-way interactions between SNR and microphone mode. In addition, WMC group significantly affected CU for recall and for ratings of listening effort, the latter of which was qualified by an interaction between WMC group and SNR. Listener WMC also affected CU on estimates of tolerable time, as qualified by significant two-way interactions between SNR and microphone mode. Conclusion: The study supports the use of the RRT as a tool for measuring how listeners use sentence context to aid speech processing. The degree to which context influenced scores on each outcome measure of the RRT depended on complex interactions between the SNR of the listening environment, hearing aid features, and the WMC of the listeners.
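
Context use on each RRT outcome measure is simply the high-context score minus the low-context score within the same test condition. A brief sketch of that calculation, using hypothetical repeat scores, follows.

```python
# Context use (CU) as described above: high-context (HC) minus low-context (LC)
# score on the same outcome measure in the same condition. The scores below are
# hypothetical placeholders, not study data.

scores = {
    # (snr_db, microphone): {"HC": repeat % correct, "LC": repeat % correct}
    (0, "omni"):        {"HC": 55.0, "LC": 38.0},
    (0, "directional"): {"HC": 72.0, "LC": 60.0},
    (5, "omni"):        {"HC": 80.0, "LC": 68.0},
    (5, "directional"): {"HC": 90.0, "LC": 84.0},
}

context_use = {cond: s["HC"] - s["LC"] for cond, s in scores.items()}
for (snr, mic), cu in sorted(context_use.items()):
    print(f"SNR {snr:>2} dB, {mic:<11} mic: CU = {cu:.1f} percentage points")
```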


2012 ◽  
Vol 23 (05) ◽  
pp. 366-378 ◽  
Author(s):  
Daniel B. Putterman ◽  
Michael Valente

Background: A telecoil (t-coil) is essential for hearing aid users when listening on the telephone because using the hearing aid microphone can cause feedback due to the proximity of the telephone handset to the hearing aid microphone. Clinicians may overlook the role of the t-coil because their primary concern is matching the microphone frequency response to a valid prescriptive target. Little has been published to support the idea that the t-coil frequency response should match the microphone frequency response to provide “seamless” and perhaps optimal performance on the telephone. If the clinical goal were to match both frequency responses, it would be useful to know the relative differences, if any, that currently exist between these two transducers. Purpose: The primary purpose of this study was to determine if statistically significant differences were present between the mean output (in dB SPL) of the programmed microphone program and the hearing aid manufacturer's default t-coil program as a function of discrete test frequencies. In addition, pilot data are presented on the feasibility of measuring the microphone and t-coil frequency responses with real-ear measures using a digital speech-weighted noise. Research Design: A repeated-measures design was utilized for a 2-cc coupler measurement condition. Independent variables were the transducer (microphone, t-coil) and 11 discrete test frequencies (15 discrete frequencies in the real-ear pilot condition). Study Sample: The study sample consisted of behind-the-ear (BTE) hearing aids from one manufacturer. Fifty-two hearing aids were measured in the coupler condition, 39 of which were also measured in the real-ear pilot condition. Hearing aids had previously been programmed and verified with real-ear measures to the NAL-NL1 (National Acoustic Laboratories Nonlinear 1) prescriptive target by a licensed audiologist. Data Collection and Analysis: Hearing aid output was measured with a Fonix 7000 hearing aid analyzer (Frye Electronics, Inc.) in an HA-2 2-cc coupler using a pure-tone sweep at an input level of 60 dB SPL with the hearing aid in the microphone program and a 31.6 mA/m magnetic field with the hearing aid in the t-coil program. A digital speech-weighted noise input signal presented at additional input levels was used in the real-ear pilot condition. A mixed-model repeated-measures analysis of variance (ANOVA) and the Tukey Honestly Significant Difference (HSD) post hoc test were utilized to determine if significant differences were present in performance across treatment levels. Results: There was no significant difference between mean overall t-coil and microphone output averaged across the 11 discrete frequencies (F(1,102) = 0, p = 0.98). A mixed-model repeated-measures ANOVA revealed a significant transducer by frequency interaction (F(10,102) = 13.0, p < 0.0001). Significant differences were present at 200 and 400 Hz, where the mean t-coil output was less than the mean microphone output, and at 4000, 5000, and 6300 Hz, where the mean t-coil output was greater than the mean microphone output. Conclusions: The mean t-coil output was significantly lower than the mean microphone output at 400 Hz, a frequency that lies within the typical telephone bandwidth of 300–3300 Hz. This difference may help explain why some patients complain that the t-coil fails to provide sufficient loudness for telephone communication.
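
The coupler comparison described above reduces to subtracting the microphone-program output from the t-coil-program output at each test frequency and noting which differences fall inside the typical 300–3300 Hz telephone band. A minimal sketch with hypothetical output values:

```python
# Per-frequency comparison of t-coil vs. microphone 2-cc coupler output.
# The output values below are hypothetical; a negative difference means the
# t-coil program is weaker than the microphone program at that frequency.

mic_output_db = {200: 95, 400: 102, 800: 108, 1600: 112, 3200: 110, 5000: 104}
tcoil_output_db = {200: 88, 400: 96, 800: 107, 1600: 112, 3200: 112, 5000: 109}

TELEPHONE_BAND_HZ = (300, 3300)  # typical telephone bandwidth cited above

for freq in sorted(mic_output_db):
    diff = tcoil_output_db[freq] - mic_output_db[freq]
    in_band = TELEPHONE_BAND_HZ[0] <= freq <= TELEPHONE_BAND_HZ[1]
    flag = " (within telephone band)" if in_band else ""
    print(f"{freq:>5} Hz: t-coil minus mic = {diff:+d} dB{flag}")
```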


2010 ◽  
Vol 21 (08) ◽  
pp. 546-557 ◽  
Author(s):  
Kristi Oeding ◽  
Michael Valente ◽  
Jessica Kerckhoff

Background: Patients with unilateral sensorineural hearing loss (USNHL) experience great difficulty listening to speech in noisy environments. A directional microphone (DM) could potentially improve speech recognition in this difficult listening environment. It is well known that DMs in behind-the-ear (BTE) and custom hearing aids can provide a greater signal-to-noise ratio (SNR) than an omnidirectional microphone (OM), improving speech recognition in noise for persons with hearing impairment. Studies examining the DM in bone-anchored auditory osseointegrated implants (Baha), however, have been mixed, with little to no benefit reported for the DM compared with an OM. Purpose: The primary purpose of this study was to determine if there are statistically significant differences in the mean reception threshold for sentences (RTS in dB) in noise between the OM and DM in the Baha® Divino™. The RTS for these two microphone modes was measured using two loudspeaker arrays (speech from 0° and noise from 180°, or a diffuse eight-loudspeaker array) and with the better ear open or closed with an earmold impression and a noise-attenuating earmuff. Subjective benefit was assessed using the Abbreviated Profile of Hearing Aid Benefit (APHAB) to compare unaided and aided (Divino OM and DM combined) problem scores. Research Design: A repeated-measures design was utilized, with each subject counterbalanced across the eight treatment combinations of three independent variables: (1) microphone (OM and DM), (2) loudspeaker array (180° and diffuse), and (3) better ear (open and closed). Study Sample: Sixteen subjects with USNHL currently utilizing the Baha were recruited from Washington University's Center for Advanced Medicine and the surrounding area. Data Collection and Analysis: Subjects were tested at the initial visit if they entered the study wearing the Divino, or after at least four weeks of acclimatization to a loaner Divino. The RTS was determined using Hearing in Noise Test (HINT) sentences in the R-Space™ system, and subjective benefit was determined using the APHAB. A three-way repeated-measures analysis of variance (ANOVA) and a paired samples t-test were utilized to analyze results of the HINT and APHAB, respectively. Results: Results revealed statistically significant differences between microphone modes (p < 0.001; directional advantage of 3.2 dB), loudspeaker arrays (p = 0.046; 1.1 dB advantage for the 180° array), and better-ear conditions (p < 0.001; open-ear advantage of 4.9 dB). Results from the APHAB revealed statistically and clinically significant benefit for the Divino relative to unaided listening on the subscales of Ease of Communication (EC) (p = 0.037), Background Noise (BN) (p < 0.001), and Reverberation (RV) (p = 0.005). Conclusions: The Divino's DM provides a statistically significant improvement in speech recognition in noise compared with the OM for subjects with USNHL. Therefore, it is recommended that audiologists consider selecting a Baha with a DM to provide improved speech recognition performance in noisy listening environments.
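
The directional advantage reported above is the difference in reception threshold for sentences between microphone modes: because a lower RTS is better, the advantage is computed as the omnidirectional RTS minus the directional RTS. A one-line sketch with hypothetical RTS values:

```python
# Directional advantage expressed as the RTS difference between microphone
# modes (omnidirectional minus directional); the RTS values are hypothetical.

def directional_advantage_db(rts_omni_db, rts_directional_db):
    """Positive values mean the directional microphone reached 50% sentence
    recognition at a lower (better) signal-to-noise ratio."""
    return rts_omni_db - rts_directional_db

print(directional_advantage_db(rts_omni_db=1.5, rts_directional_db=-1.7))  # 3.2 dB
```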

