Some Effects of Training on Speech Recognition by Hearing-Impaired Adults

1981 ◽  
Vol 24 (2) ◽  
pp. 207-216 ◽  
Author(s):  
Brian E. Walden ◽  
Sue A. Erdman ◽  
Allen A. Montgomery ◽  
Daniel M. Schwartz ◽  
Robert A. Prosek

The purpose of this research was to determine some of the effects of consonant recognition training on the speech recognition performance of hearing-impaired adults. Two groups of ten subjects each received seven hours of either auditory or visual consonant recognition training, in addition to a standard two-week, group-oriented, inpatient aural rehabilitation program. A third group of fifteen subjects received the standard two-week program, but no supplementary individual consonant recognition training. An audiovisual sentence recognition test, as well as tests of auditory and visual consonant recognition, was administered both before and following training. Subjects in all three groups improved significantly in audiovisual sentence recognition performance, but subjects receiving the individual consonant recognition training improved significantly more than subjects receiving only the standard two-week program. A significant increase in consonant recognition performance was observed in the two groups receiving the auditory or visual consonant recognition training. The data are discussed from varying statistical and clinical perspectives.

2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those available resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm, combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fitted with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong. Study sample: 18 hearing-impaired older adults. Results: There were no effects of noise reduction on the ability to multitask, or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may not invariably improve performance on tasks such as these.
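Fixed SNRs like those used in the babble condition imply a specific level calibration: the masker is scaled so its RMS level sits the stated number of decibels below the speech. A minimal sketch of that mixing step, in pure Python (function and variable names are illustrative, not from the study):

```python
import math

def rms(x):
    """Root-mean-square level of a signal (list of samples)."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the long-term speech-to-noise ratio equals
    `snr_db`, then add it to the speech sample by sample.
    Assumes both signals are the same length."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    scaled = [gain * n for n in noise]
    mixture = [s + n for s, n in zip(speech, scaled)]
    return mixture, scaled
```

In practice, clinical test materials also control absolute presentation level, which this sketch leaves to the playback chain.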


2021 ◽  
Vol 32 (08) ◽  
pp. 528-536
Author(s):  
Jessica H. Lewis ◽  
Irina Castellanos ◽  
Aaron C. Moberly

Abstract Background Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large amount of variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. Purpose The aim of this study was to manipulate the degree of spectral degradation and modality of speech presented to young adult NH listeners to determine whether deployment of neurocognitive skills would be affected. Research Design Correlational study design. Study Sample Twenty-one NH college students. Data Collection and Analysis Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences); moderate degradation (8-channel noise-vocoded); and high degradation (4-channel noise-vocoded). Thirty sentences were presented in each of two modalities: auditory-only (A-only) and AV. Visual assessments from the National Institutes of Health (NIH) Toolbox Cognitive Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed among speech recognition performance and the neurocognitive skills in the various test conditions.
Results Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills moderately correlated (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found among neurocognitive scores and AV speech recognition scores. Conclusions Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to study these relations in actual clinical populations such as adult CI users.


1974 ◽  
Vol 17 (2) ◽  
pp. 270-278 ◽  
Author(s):  
Brian E. Walden ◽  
Robert A. Prosek ◽  
Don W. Worthington

The redundancy between the auditory and visual recognition of consonants was studied in 100 hearing-impaired subjects who demonstrated a wide range of speech-discrimination abilities. Twenty English consonants, recorded in CV combination with the vowel /a/, were presented to the subjects for auditory, visual, and audiovisual identification. There was relatively little variation among subjects in the visual recognition of consonants. A measure of the expected degree of redundancy between an observer’s auditory and visual confusions among consonants was used in an effort to predict audiovisual consonant recognition ability. This redundancy measure was based on an information analysis of an observer’s auditory confusions among consonants and expressed the degree to which his auditory confusions fell within categories of visually homophenous consonants. The measure was found to have moderate predictive value in estimating an observer’s audiovisual consonant recognition score. These results suggest that the degree of redundancy between an observer’s auditory and visual confusions of speech elements is a determinant in the benefit that visual cues offer to that observer.
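One simple proxy for the redundancy measure described above is the proportion of a listener's auditory confusions that fall within the stimulus consonant's visually homophenous (viseme) class. The sketch below computes that proportion from a table of confusion counts; the original measure rested on a fuller information analysis, so this is only an illustration of the idea:

```python
def within_viseme_redundancy(confusions, viseme_of):
    """Fraction of auditory confusion responses (stimulus != response)
    that fall inside the stimulus's viseme class.

    confusions: dict mapping (stimulus, response) -> count
    viseme_of:  dict mapping consonant -> viseme-class label
    """
    within = total = 0
    for (stim, resp), count in confusions.items():
        if stim == resp:
            continue  # correct responses are not confusions
        total += count
        if viseme_of[stim] == viseme_of[resp]:
            within += count
    return within / total if total else 0.0
```

A listener whose auditory errors mostly stay within viseme classes (value near 1) gets little extra information from vision, which is the sense in which high redundancy predicts smaller audiovisual benefit.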


1991 ◽  
Vol 34 (5) ◽  
pp. 1180-1184 ◽  
Author(s):  
Larry E. Humes ◽  
Kathleen J. Nelson ◽  
David B. Pisoni

The Modified Rhyme Test (MRT), recorded using natural speech and two forms of synthetic speech, DECtalk and Votrax, was used to measure both open-set and closed-set speech-recognition performance. Performance of hearing-impaired elderly listeners was compared to two groups of young normal-hearing adults, one listening in quiet, and the other listening in a background of spectrally shaped noise designed to simulate the peripheral hearing loss of the elderly. Votrax synthetic speech yielded significant decrements in speech recognition compared to either natural or DECtalk synthetic speech for all three subject groups. There were no differences in performance between natural speech and DECtalk speech for the elderly hearing-impaired listeners or the young listeners with simulated hearing loss. The normal-hearing young adults listening in quiet outperformed both of the other groups, but there were no differences in performance between the young listeners with simulated hearing loss and the elderly hearing-impaired listeners. When the closed-set identification of synthetic speech was compared to its open-set recognition, the hearing-impaired elderly gained as much from the reduction in stimulus/response uncertainty as the two younger groups. Finally, among the elderly hearing-impaired listeners, speech-recognition performance was correlated negatively with hearing sensitivity, but scores were correlated positively among the different talker conditions. Those listeners with the greatest hearing loss had the most difficulty understanding speech, and those having the most trouble understanding natural speech also had the greatest difficulty with synthetic speech.
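Spectrally shaping noise to simulate a peripheral hearing loss amounts to imposing a prescribed gain at each frequency, typically tracking the listener group's threshold shifts. A toy construction of such noise as a sum of random-phase sinusoids, one per (frequency, gain) pair; the study's actual masking-noise parameters are not given in the abstract, so every value here is illustrative:

```python
import math
import random

def shaped_noise(band_gains_db, n=1024, sr=16000, seed=0):
    """Pseudo-noise with a prescribed spectral shape, built as a sum
    of random-phase sinusoids.

    band_gains_db: list of (freq_hz, gain_db) pairs; 0 dB maps to a
                   unit-amplitude component.
    """
    rng = random.Random(seed)
    out = [0.0] * n
    for freq, gain_db in band_gains_db:
        amp = 10 ** (gain_db / 20)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * freq * i / sr + phase)
    return out
```

A production implementation would instead filter broadband noise with a filter matched to the target audiogram, but the gain-per-frequency arithmetic is the same.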


2019 ◽  
Vol 30 (02) ◽  
pp. 131-144 ◽  
Author(s):  
Erin M. Picou ◽  
Todd A. Ricketts

Abstract People with hearing loss experience difficulty understanding speech in noisy environments. Beamforming microphone arrays in hearing aids can improve the signal-to-noise ratio (SNR) and thus also speech recognition and subjective ratings. Unilateral beamformer arrays, also known as directional microphones, accomplish this improvement using two microphones in one hearing aid. Bilateral beamformer arrays, which combine information across four microphones in a bilateral fitting, further improve the SNR. Early bilateral beamformers were static with fixed attenuation patterns. Recently, adaptive bilateral beamformers have been introduced in commercial hearing aids. The purpose of this article was to evaluate the potential benefits of adaptive unilateral and bilateral beamformers for improving sentence recognition and subjective ratings in a laboratory setting. A secondary purpose was to identify potential participant factors that explain some of the variability in beamformer benefit. Participants were fitted with study hearing aids equipped with commercially available adaptive unilateral and bilateral beamformers. Participants completed sentence recognition testing in background noise using three hearing aid settings (omnidirectional, unilateral beamformer, bilateral beamformer) and two noise source configurations (surround, side). After each condition, participants made subjective ratings of their perceived work, desire to control the situation, willingness to give up, and tiredness. Eighteen adults (50–80 yr, M = 66.2, σ = 8.6) with symmetrical mild sloping to severe hearing loss participated. Sentence recognition scores and subjective ratings were analyzed separately using generalized linear models with two within-subject factors (hearing aid microphone and noise configuration).
Two benefit scores were calculated: (1) unilateral beamformer benefit (relative to performance with the omnidirectional setting) and (2) additional bilateral beamformer benefit (relative to performance with the unilateral beamformer). Hierarchical multiple linear regression was used to determine whether beamformer benefit was associated with participant factors (age, degree of hearing loss, unaided speech-in-noise ability, spatial release from masking, and performance in the omnidirectional setting). Sentence recognition and subjective ratings of work, control, and tiredness were better with both types of beamformers relative to the omnidirectional conditions. In addition, the bilateral beamformer offered small additional improvements relative to the unilateral beamformer in terms of sentence recognition and subjective ratings of tiredness. Speech recognition performance and subjective ratings were generally independent of noise configuration. Performance in the omnidirectional setting and pure-tone average were independently related to unilateral beamformer benefit. Those with the lowest performance or the largest degree of hearing loss benefited the most. No factors were significantly related to additional bilateral beamformer benefit. Adaptive bilateral beamformers offer additional advantages over adaptive unilateral beamformers in hearing aids. The small additional advantages with the adaptive beamformer are comparable to those reported in the literature with static beamformers. Although the additional benefits are small, they positively affected subjective ratings of tiredness. These data suggest that adaptive bilateral beamformers have the potential to improve listening in difficult situations for hearing aid users. In addition, patients who struggle the most without beamforming microphones may also benefit the most from the technology.


2011 ◽  
Vol 18 (1) ◽  
pp. 23-34 ◽  
Author(s):  
Denise Tucker ◽  
Mary V. Compton ◽  
Lyn Mankoff ◽  
Kelly Rulison

In this article, the authors describe a biopsychosocial approach to group audiologic/aural rehabilitation (AR) for late deafened adults with cochlear implants. A detailed account of Cochlear Implant Connections is provided, including the individual components of this innovative program. A qualitative review of written narratives and videotaped AR sessions illustrates the ongoing needs, expectations, challenges, and experiences of this underserved patient population. These challenges underscore the need for ongoing AR, including instruction, counseling, and support, to promote late deafened adults’ psychosocial adjustment and to maximize their peripheral and central auditory adaptation to cochlear implant use.

