Examining the Relationship Between Speech Recognition and a Spectral–Temporal Test With a Mixed Group of Hearing Aid and Cochlear Implant Users

2021, Vol. 64(3), pp. 1073–1080
Author(s): Justin M. Aronoff, Leah Duitsman, Deanna K. Matusik, Senad Hussain, Elise Lippmann

Purpose: Audiology clinics need a nonlinguistic test for assessing speech recognition in patients using hearing aids or cochlear implants. One such test, the Spectral-Temporally Modulated Ripple Test Lite for computeRless Measurement (SLRM), has been developed for clinical use, but it, like the related Spectral-Temporally Modulated Ripple Test, has primarily been assessed with cochlear implant users. The main goal of this study was to examine the relationship between SLRM and the Arizona Biomedical Institute Sentence Test (AzBio) for a mixed group of hearing aid and cochlear implant users.
Method: Adult hearing aid and cochlear implant users were tested with SLRM, AzBio in quiet, and AzBio in multitalker babble at a +8 dB signal-to-noise ratio.
Results: SLRM scores correlated with AzBio recognition scores both in quiet and in noise.
Conclusions: The results indicate a significant relationship between SLRM and AzBio scores when testing a mixed group of cochlear implant and hearing aid users. This suggests that SLRM may be a useful nonlinguistic test for individuals with a variety of hearing devices.

1980, Vol. 89(5, Suppl.), pp. 79–83
Author(s): Richard Lippmann

Following the Harvard master hearing aid study in 1947, there was little research on linear amplification. Recently, however, a number of studies have been designed to determine the relationship between the frequency-gain characteristic of a hearing aid and speech intelligibility for persons with sensorineural hearing loss. These studies have demonstrated that a frequency-gain characteristic that rises at a rate of 6 dB/octave, as suggested by the Harvard study, is not optimal. They have also demonstrated that high-frequency emphasis of 10–40 dB above 500–1000 Hz is beneficial. Most importantly, they have demonstrated that hearing aids as they are presently fit do not provide maximum speech intelligibility. Percent-correct word scores obtained with the best frequency-gain characteristics tested in various studies have been 9 to 19 percentage points higher than scores obtained with the commercial aids owned by subjects, an increase equivalent to improving the signal-to-noise ratio by 10 to 20 dB. This is a significant increase that could allow impaired listeners to communicate in many situations where they presently cannot. These results demonstrate the need for further research on linear amplification aimed at developing practical recommendations for fitting hearing aids.


2020, Vol. 24, pp. 233121652093339
Author(s): Els Walravens, Gitte Keidser, Louise Hickson

Trainable hearing aids let users fine-tune their hearing aid settings in their own listening environments: based on consistent user adjustments and information about the acoustic environment, the trainable aid changes environment-specific settings to match the user's preference. A requirement for effective fine-tuning is consistency of preference for similar settings in similar environments. The aim of this study was to evaluate consistency of preference for settings differing in intensity, gain-frequency slope, and directionality when listening in simulated real-world environments, and to determine whether participants with more consistent preferences could be identified from profile measures. A total of 52 adults (63–88 years) with hearing ranging from normal to a moderate sensorineural hearing loss selected their preferred setting from pairs differing in intensity (3 or 6 dB), gain-frequency slope (±1.3 or ±2.7 dB/octave), or directionality (omnidirectional vs. cardioid) in four simulated real-world environments: traffic noise, a monologue in traffic noise at a 5 dB signal-to-noise ratio, and a dialogue in café noise at 5 and at 0 dB signal-to-noise ratios. Forced-choice comparisons were made 10 times for each combination of setting pair and environment. Participants also completed nine psychoacoustic, cognitive, and personality measures. Consistency of preference, defined as a setting being preferred at least 9 out of 10 times, varied across participants. More participants showed consistent preferences for larger differences between settings and for less difficult environments. The profile measures did not predict consistency of preference. Trainable aid users could benefit from counselling to ensure realistic expectations for particular adjustments and listening situations.


2019, Vol. 28(1), pp. 101–113
Author(s): Jenna M. Browning, Emily Buss, Mary Flaherty, Tim Vallier, Lori J. Leibold

Purpose: The purpose of this study was to evaluate speech-in-noise and speech-in-speech recognition associated with activation of a fully adaptive directional hearing aid algorithm in children with mild to severe bilateral sensory/neural hearing loss.
Method: Fourteen children (5–14 years old) who are hard of hearing participated in this study. Participants wore laboratory hearing aids. Open-set word recognition thresholds were measured adaptively for 2 hearing aid settings: (a) omnidirectional (OMNI) and (b) fully adaptive directionality. Each hearing aid setting was evaluated in 3 listening conditions. Fourteen children with normal hearing served as age-matched controls.
Results: Children who are hard of hearing required a more advantageous signal-to-noise ratio than children with normal hearing to achieve comparable performance in all 3 conditions. For children who are hard of hearing, the average improvement in signal-to-noise ratio with fully adaptive directionality relative to OMNI was 4.0 dB in noise, regardless of target location. Children performed similarly with the fully adaptive directionality and OMNI settings in the presence of the speech maskers.
Conclusions: Compared to OMNI, fully adaptive directionality improved speech recognition in steady noise for children who are hard of hearing, even when they were not facing the target source. The algorithm did not affect speech recognition when the background noise was speech. Although hearing aids with fully adaptive directionality are not proposed as a substitute for remote microphone systems, they appear to offer several advantages over fixed directionality, because they do not depend on children facing the target talker and provide access to multiple talkers within the environment. Additional experiments are required to further evaluate children's performance under a variety of spatial configurations in the presence of both noise and speech maskers.


2019, Vol. 28(3S), pp. 762–774
Author(s): Roberta Anzivino, Guido Conti, Walter Di Nardo, Anna Rita Fetoni, Pasqualina Maria Picciotti, ...

Objective: Recent literature has shown growing interest in the relationship between presbycusis and cognitive decline, but significant evidence about the long-term benefit of rehabilitation on cognitive functions has not yet been reported. The aim of the study was to analyze audiological and neuropsychological performance over time in patients with cochlear implants (CI) or hearing aids (HAs).
Materials and Method: Forty-four bilaterally deaf patients older than 60 years (25 with CI candidacy and 19 with HA candidacy) were enrolled. Patients underwent audiological evaluation, a battery of neuropsychological tests (Mini-Mental State Examination [MMSE], Rey Auditory Verbal Learning Task [RAVLT], Rey–Osterrieth Complex Figure Test, Digit/Corsi Span Forward and Backward, Multiple Features Target Cancellation, Trail-Making Test, Stroop Test, and Phonological and Semantic Word Fluency), and a quality of life assessment (Short Form 36, Glasgow Benefit Inventory, Glasgow Health Status Inventory) at baseline and after long-term follow-up (6–12 months).
Results: Speech recognition scores in quiet and in noise were significantly improved as early as 6 months after auditory rehabilitation. Significant differences between pre- and post-rehabilitation scores were reported for physical and emotional impacts on life, general global health, vitality, and social activities. MMSE and RAVLT scores were significantly improved in both groups after 6 months of follow-up, suggesting a global involvement of the memory domain. Mnesic performance remained unchanged between the first and second follow-up, but a further significant improvement in executive functions (Stroop Test) was detected in patients with CI reevaluated 12 months after implantation. A significant correlation of the RAVLT with speech-in-noise scores at a +10 dB signal-to-noise ratio, and of the MMSE with speech-in-noise scores at a 0 dB signal-to-noise ratio, suggests a pivotal role of executive functions in recognition in noisy environments.
Conclusions: Our preliminary data confirm that hearing deprivation in aged patients represents a truly modifiable risk factor for cognitive decline, one that can be positively addressed by acoustic rehabilitation. The improvement of short- and long-term memory performance and the amelioration of executive and attentive functions suggest that hearing restoration with both HAs and CI may provide a recovery of higher cognitive domains, probably through a reallocation of cortical resources altered by hearing deprivation.


2004, Vol. 15(5), pp. 342–352
Author(s): Therese C. Walden, Brian E. Walden

Persons with impaired hearing who are candidates for amplification are not all equally successful with hearing aids in daily living. The ability to predict success with amplification in everyday life from measures obtainable during an initial candidacy evaluation would result in greater patient satisfaction with hearing aids and more efficient use of clinical resources. This study investigated the relationship between various demographic and audiometric measures and two measures of hearing aid success in 50 hearing aid wearers. Audiometric predictors included measures of audibility and suprathreshold distortion. The unaided and aided signal-to-noise ratio (SNR) loss on the QuickSIN test provided the best predictors of hearing aid success in daily living. However, much of this predictive relationship appeared attributable to the patient's age.


2014, Vol. 25(10), pp. 952–968
Author(s): Stephen Julstrom, Linda Kozma-Spytek

Background: In order to better inform the development and revision of the American National Standards Institute C63.19 and American National Standards Institute/Telecommunications Industry Association-1083 hearing aid compatibility standards, a previous study examined the signal strength and signal (speech)-to-noise (interference) ratio needs of hearing aid users when using wireless and cordless phones in the telecoil coupling mode. This study extends that examination to cochlear implant (CI) users, in both telecoil and microphone modes of use.
Purpose: The purpose of this study was to evaluate the magnetic and acoustic signal levels needed by CI users for comfortable telephone communication, and the users' tolerance, relative to the speech levels, of various interfering wireless communication-related noise types.
Research Design: The design was a descriptive and correlational study. Simulated telephone speech and eight interfering noise types presented as continuous signals were linearly combined and presented together, either acoustically or magnetically, to the participants' CIs. The participants could adjust the loudness of the telephone speech and the interfering noises according to several assigned criteria.
Study Sample: The 21 test participants ranged in age from 23 to 81 years. All used wireless phones with their CIs, and 15 also used cordless phones at home. Twelve participants normally used the telecoil mode for telephone communication, whereas 9 used the implant's microphone; all were tested accordingly.
Data Collection and Analysis: A guided-intake questionnaire yielded general background information for each participant. A custom-built test control box fed by prepared speech-and-noise files enabled the tester or test participant, as appropriate, to switch between the various test signals and to precisely control the speech and noise levels independently. The tester, but not the test participant, could read and record the selected levels. Subsequent analysis revealed the preferred speech levels, the speech (signal)-to-noise ratios, and the effect of possible noise-measurement weighting functions.
Results: The participants' preferred telephone speech levels subjectively matched, or were somewhat lower than, the level they heard from a 65 dB SPL wideband reference. The mean speech (signal)-to-noise ratio required for them to consider their telephone experience "acceptable for normal use" was 20 dB, very similar to the result for the hearing aid users of the previous study. Significant differences in the participants' apparent noise tolerance among the noise types when the noise level was determined using A-weighting were eliminated when a CI-specific noise-measurement weighting was applied.
Conclusions: The results for the CI users, in terms of both preferred levels for wireless and cordless phone communication and signal-to-noise requirements, closely paralleled the corresponding results for hearing aid users from the previous study and showed no significant differences between the microphone and telecoil modes of use. Signal-to-noise requirements were directly related to the participants' noise audibility threshold and were independent of noise type when appropriate noise-measurement weighting was applied. Extending the investigation to noncontinuous interfering noises and to forms of radiofrequency interference other than additive audiofrequency noise could be an area of future study.


Author(s):  
Shubha Tak ◽  
Asha Yathiraj

Introduction: Loudness perception is considered important for the perception of emotions, relative distance, and stress patterns. However, certain digital hearing devices worn by those with hearing impairment may affect loudness perception. This can happen in devices with compression circuits that make loud sounds softer and soft sounds louder, which could hamper children from gaining knowledge about the loudness of acoustic signals.
Objective: To compare the relative loudness judgment of children using listening devices with that of age-matched typically developing children.
Methods: The relative loudness judgment of sounds produced by day-to-day objects was evaluated in 60 children (20 with normal hearing, 20 hearing aid users, and 20 cochlear implant users), using a standard group comparison design. In a two-alternative forced-choice task, the children were required to select the picturized sound source that was louder.
Results: Most participants obtained good scores; poorer scores were mainly obtained by children using cochlear implants. The cochlear implant users obtained significantly lower scores than the normal-hearing participants. However, the scores did not differ significantly between the normal-hearing children and the hearing aid users, or between the two groups with hearing impairment.
Conclusion: Thus, despite loudness being altered by listening devices, children using nonlinear hearing aids or cochlear implants are able to develop relative loudness judgment for acoustic stimuli. However, loudness growth for electrical stimuli still needs to be studied.


2005, Vol. 16(9), pp. 662–676
Author(s): Brian E. Walden, Rauna K. Surr, Kenneth W. Grant, W. Van Summers, Mary T. Cord, ...

This study examined speech intelligibility and preferences for omnidirectional and directional microphone hearing aid processing across a range of signal-to-noise ratios (SNRs). A primary motivation for the study was to determine whether SNR might be used to represent distance between talker and listener in automatic directionality algorithms based on scene analysis. Participants were current hearing aid users who either had experience with omnidirectional microphone hearing aids only or with manually switchable omnidirectional/directional hearing aids. Using IEEE/Harvard sentences from a front loudspeaker and speech-shaped noise from three loudspeakers located behind and to the sides of the listener, the directional advantage (DA) was obtained at 11 SNRs ranging from -15 dB to +15 dB in 3 dB steps. Preferences for the two microphone modes at each of the 11 SNRs were also obtained using concatenated IEEE sentences presented in the speech-shaped noise. Results revealed that a DA was observed across a broad range of SNRs, although directional processing provided the greatest benefit within a narrower range of SNRs. Mean data suggested that microphone preferences were determined largely by the DA, such that the greater the benefit to speech intelligibility provided by the directional microphones, the more likely the listeners were to prefer that processing mode. However, inspection of the individual data revealed that highly predictive relationships did not exist for most individual participants. Few preferences for omnidirectional processing were observed. Overall, the results did not support the use of SNR to estimate the effects of distance between talker and listener in automatic directionality algorithms.


2010, Vol. 20(2), pp. 70–75
Author(s): Lisa S. Davidson

Cochlear implant (CI) candidacy guidelines continue to evolve as a result of advances in both cochlear implant and hearing aid technology. Empirical studies comparing the speech perception abilities of children using cochlear implants or hearing aids are reviewed in the context of current device technology and CI candidacy evaluations.


2014, Vol. 57(4), pp. 1512–1520
Author(s): Michelle Mason, Kostas Kokkinakis

Purpose: The purpose of this study was to evaluate the contribution of a contralateral hearing aid to the perception of consonants, in terms of voicing, manner, and place-of-articulation cues, in reverberation and noise by adult cochlear implantees aided by bimodal fittings.
Method: Eight postlingually deafened adult cochlear implant (CI) listeners with a fully inserted CI in 1 ear and low-frequency hearing in the other ear were tested on consonant perception. They were presented with consonant stimuli processed in the following experimental conditions: 1 quiet condition, 2 different reverberation times (0.3 s and 1.0 s), and the combination of the 2 reverberation times with a single signal-to-noise ratio (5 dB).
Results: Consonant perception improved significantly when listening with a contralateral hearing aid in combination with the CI, as opposed to the CI alone, in 0.3 s and 1.0 s of reverberation. Significantly higher scores were also noted when noise was added to 0.3 s of reverberation.
Conclusions: A considerable benefit was noted from the additional acoustic information in conditions of reverberation and reverberation plus noise. The bimodal benefit observed was more pronounced for voicing and manner of articulation than for place of articulation.
