Comparing Speech Recognition for Listeners With Normal and Impaired Hearing: Simulations for Controlling Differences in Speech Levels and Spectral Shape

2020, Vol. 63(12), pp. 4289-4299
Author(s): Daniel Fogerty, Rachel Madorskiy, Jayne B. Ahlstrom, Judy R. Dubno

Purpose: This study investigated methods for simulating factors associated with reduced audibility, increased speech levels, and spectral shaping for aided older adults with hearing loss. Simulations presented to younger normal-hearing adults were used to investigate the effects of sensation level, speech presentation level, and spectral shape in comparison to older adults with hearing loss.
Method: Measures were assessed in quiet, steady-state noise, and speech-modulated noise. Older adults with hearing loss listened to speech that was spectrally shaped according to their hearing thresholds. Younger adults with normal hearing listened to speech that simulated the hearing-impaired group's (a) reduced audibility, (b) increased speech levels, and (c) spectral shaping. Group comparisons were based on speech recognition performance and masking release. Additionally, younger adults completed measures of listening effort and perceived speech quality to assess whether differences across simulations in these outcome measures paralleled those for speech recognition.
Results: Across the simulations employed, testing in the presence of a threshold-matching noise best matched the differences in speech recognition and masking release between younger and older adults. This result remained consistent across the other two outcome measures.
Conclusions: A combination of audibility, speech level, and spectral shape factors is required to simulate differences between listeners with normal and impaired hearing in recognition, listening effort, and perceived speech quality. Spectrally shaped and amplified speech presented in threshold-matching noise best provided this simulated control.
Supplemental Material: https://doi.org/10.23641/asha.13224632
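The two key signal manipulations described here are straightforward to sketch. The Python fragment below is illustrative only, not the authors' processing chain: the sampling rate, audiometric frequencies, thresholds, and half-gain rule are all assumptions. It shows (a) spectrally shaping speech according to a set of hearing thresholds and (b) generating a threshold-matching noise that raises a normal-hearing listener's effective thresholds toward the target audiogram.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

FS = 22050  # sampling rate (Hz); illustrative

# Hypothetical audiogram: thresholds (dB HL) at standard frequencies.
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
thresholds_db = np.array([15.0, 20.0, 30.0, 45.0, 60.0, 70.0])

def shaping_filter(freqs, gains_db, fs, numtaps=513):
    """FIR filter whose magnitude response tracks gains_db at freqs."""
    norm_f = np.concatenate(([0.0], freqs / (fs / 2), [1.0]))
    lin = 10.0 ** (gains_db / 20.0)
    gains = np.concatenate(([lin[0]], lin, [lin[-1]]))
    return firwin2(numtaps, norm_f, gains)  # odd numtaps: nonzero Nyquist gain

# (a) Spectral shaping: frequency-specific gain applied to the speech,
# here a crude half-gain rule standing in for a prescriptive formula.
speech = np.random.randn(FS * 2)  # placeholder for a speech waveform
shaped = lfilter(shaping_filter(freqs_hz, thresholds_db / 2, FS), 1.0, speech)

# (b) Threshold-matching noise: white noise shaped so its spectrum level
# follows the impaired listener's audiogram, masking a normal-hearing
# listener up to those thresholds (absolute calibration omitted here).
noise = np.random.randn(len(shaped))
tm_noise = lfilter(shaping_filter(freqs_hz, thresholds_db, FS), 1.0, noise)

stimulus = shaped + tm_noise
```

Masking release would then be computed as the difference in recognition scores between the speech-modulated and steady-state noise conditions at matched levels.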

2012, Vol. 55(3), pp. 838-847
Author(s): Megan J. McAuliffe, Phillipa J. Wilding, Natalie A. Rickard, Greg A. O'Beirne

Purpose: Older adults exhibit difficulty understanding speech that has been experimentally degraded, and age-related changes to the speech mechanism lead to natural degradations in signal quality. We tested the hypothesis that older adults with hearing loss would exhibit poorer speech recognition when listening to the speech of older adults than to the speech of younger adults, and would report greater listening effort in this task.
Method: Nineteen individuals with age-related hearing loss completed speech recognition and listening effort scaling tasks. Both were conducted in quiet, using high- and low-predictability phrases produced by younger and older speakers.
Results: Speech recognition did not differ significantly between stimuli derived from younger and older speakers. However, perceived effort was significantly higher when listening to the speech of older adults.
Conclusions: For older individuals with hearing loss, natural age-related degradations in signal quality may demand greater listening effort, but they do not interfere with speech recognition, at least in quiet. Follow-up investigation of the effect of speaker age on speech recognition and listening effort under more challenging noise conditions appears warranted.


2004, Vol. 47(5), pp. 965-978
Author(s): Richard A. Roberts, Jennifer J. Lister

Older listeners with normal-hearing sensitivity and impaired-hearing sensitivity often demonstrate poorer-than-normal performance on tasks of speech understanding in noise and reverberation. Deficits in temporal resolution and in the precedence effect may underlie this difficulty. Temporal resolution is often studied by means of a gap-detection paradigm, a task similar to the binaural fusion paradigms used to measure the precedence effect. The purpose of this investigation was to determine whether within-channel (measured with monotic and diotic gap detection) or across-channel (measured with dichotic gap detection) temporal resolution is related to fusion (measured with lag-burst thresholds [LBTs]) under dichotic, anechoic, and reverberant conditions. Gap-detection thresholds (GDTs) and LBTs were measured with noise-burst stimuli for three groups of listeners: young adults with normal-hearing sensitivity (YNH), older adults with normal-hearing sensitivity (ONH), and older adults with impaired-hearing sensitivity (OIH). The GDTs indicated that across-channel temporal resolution is poorer than within-channel temporal resolution and that the effects of age and hearing loss depend on condition. The fusion task yielded higher LBTs in reverberation than in the dichotic and anechoic conditions, regardless of group, and no effect of age or hearing loss in the nonreverberant conditions; however, higher LBTs were observed in the reverberant condition for the ONH listeners. Further, across-channel temporal resolution correlated with fusion in reverberation. Gap detection and fusion may not reflect the same underlying processes, but across-channel gap detection may influence fusion under certain conditions (i.e., in reverberation).
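As a concrete illustration of the stimuli behind a gap-detection paradigm, the Python sketch below builds a single within-channel trial: two noise bursts separated by a silent gap. All durations, ramp parameters, and the staircase rule mentioned in the comments are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np

FS = 44100  # sampling rate (Hz)

def noise_burst(dur_ms, fs=FS, rise_fall_ms=1.0):
    """Gaussian noise burst with raised-cosine onset/offset ramps."""
    n = int(fs * dur_ms / 1000)
    burst = np.random.randn(n)
    n_ramp = int(fs * rise_fall_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n_ramp)))
    burst[:n_ramp] *= ramp
    burst[-n_ramp:] *= ramp[::-1]
    return burst

def gap_trial(gap_ms, burst_ms=150):
    """Leading burst + silent gap + trailing burst. In the within-channel
    case both markers share the same spectrum and the same ear; dichotic
    variants route the leading and trailing markers to opposite ears to
    tap across-channel timing."""
    gap = np.zeros(int(FS * gap_ms / 1000))
    return np.concatenate([noise_burst(burst_ms), gap, noise_burst(burst_ms)])

# An adaptive staircase (e.g., 2-down/1-up) on gap_ms would then
# estimate the GDT, the shortest reliably detectable gap.
stimulus = gap_trial(gap_ms=6.0)
```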


2019, Vol. 28(3S), pp. 756-761
Author(s): Fatima Tangkhpanya, Morgane Le Carrour, Félicia Doucet, Jean-Pierre Gagné

Speech processing is more effortful under difficult listening conditions, and dual-task studies have shown that older adults expend more listening effort than younger adults when performing speech recognition tasks in noise.
Purpose: The primary purpose of this study was to investigate whether a dual-task paradigm could be used to measure differences in listening effort for an audiovisual speech comprehension task. If so, it was predicted that older adults would expend more listening effort than younger adults.
Method: Three groups of participants took part: (a) young normal-hearing adults, (b) young normal-hearing adults listening to speech material low-pass filtered at 3 kHz, and (c) older adults with hearing sensitivity normal for their age or better. A dual-task paradigm was used to measure listening effort. The primary task consisted of comprehending a short documentary presented at 63 dBA in a background of 4-talker speech babble presented at 69 dBA; participants answered a set of 15 questions on the content of the documentary. The secondary task was a tactile detection task presented at random intervals over a 12-min period (approximately 8 stimuli/min). Each task was performed separately and concurrently.
Results: The younger participants who performed the listening task under the low-pass filtered condition displayed significantly more listening effort than the two other groups.
Conclusions: The study confirmed that this dual-task paradigm was sufficiently sensitive to reveal significant differences in listening effort for a speech comprehension task across the three groups. Contrary to our prediction, however, it was the group of young normal-hearing participants listening under the low-pass filtered condition, not the older adults, that displayed significantly more listening effort than the other two groups.
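Two pieces of this design lend themselves to a short sketch: the low-pass degradation applied to the second group's stimuli, and the usual way a dual-task cost is quantified from secondary-task performance. The Python below is a hedged illustration; the filter order and type and the cost formula are common choices, not details reported in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100  # sampling rate (Hz); illustrative

# Low-pass filter the speech at 3 kHz (the degraded condition for the
# second group); the 8th-order Butterworth is an assumption.
sos = butter(8, 3000, btype="lowpass", fs=FS, output="sos")
speech = np.random.randn(FS * 2)     # placeholder for a soundtrack excerpt
degraded = sosfiltfilt(sos, speech)

def dual_task_cost(single_score, dual_score):
    """Proportional decline on the secondary (tactile detection) task
    from single- to dual-task conditions; larger = more listening effort."""
    return (single_score - dual_score) / single_score

# Hypothetical detection accuracies: 0.98 alone, 0.80 while listening.
effort = dual_task_cost(0.98, 0.80)  # ~0.18
```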


2016, Vol. 59(5), pp. 1218-1232
Author(s): Dawna Lewis, Kendra Schmid, Samantha O'Leary, Jody Spalding, Elizabeth Heinrichs-Graham, ...

Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL).
Method: Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percent-correct performance and verbal response time (VRT; onset time and total duration) were assessed.
Results: In general, speech recognition improved with increasing signal-to-noise ratio (SNR) both for children with NH and for children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses, and for correct responses VRT generally increased with decreasing SNR.
Conclusions: Findings indicate poorer sentence recognition as SNR decreases in children with NH and in children with MBHL or UHL. The VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified, and the increase in VRT with decreasing SNR for correct responses likewise points to greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for the children in this study.
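VRT measures of this kind are typically extracted from recordings of the child's spoken response. The sketch below (Python; the frame length, noise-floor estimate, and threshold factor are assumptions for illustration) estimates onset time and total duration from a response waveform that begins at stimulus offset.

```python
import numpy as np

def verbal_response_times(x, fs, frame_ms=10, thresh_factor=4.0):
    """Estimate VRT onset time and total duration (in seconds) from a
    response recording. The noise-floor-times-factor threshold is a
    heuristic, not a reported analysis detail."""
    n = int(fs * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    floor = np.median(rms[:10])            # assumes leading silence
    active = np.flatnonzero(rms > thresh_factor * floor)
    if active.size == 0:
        return None, None                  # no response detected
    onset = active[0] * frame_ms / 1000
    offset = (active[-1] + 1) * frame_ms / 1000
    return onset, offset - onset           # onset time, total duration
```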


2021, Vol. 50(Supplement_1), pp. i7-i11
Author(s): S Rafnsson, A Maharani, G Tampubolon

Introduction: Frequent social contact benefits cognition in later life, although evidence is lacking on the potential importance of the modes older adults choose for interacting with others in their social network.
Method: 11,513 participants in the English Longitudinal Study of Ageing (ELSA) provided baseline information on hearing status and on social contact mode and frequency of use. Multilevel growth curve models compared episodic memory (immediate and delayed recall) at baseline and over the long term between participants who interacted frequently (offline only, or offline and online combined) and those who interacted infrequently with others in their social network.
Results: Frequent offline (β = 0.29, p < 0.05) and combined offline and online (β = 0.76, p < 0.001) social interactions predicted better episodic memory after adjustment for multiple confounding factors. We observed positive long-term influences of combined offline and online interactions on memory in participants without hearing loss (β = 0.48, p = 0.001) but not of strictly offline interactions (β = 0.00, p = 0.970). In those with impaired hearing, long-term memory was positively influenced by both modes of engagement (offline only: β = 0.93, p < 0.001; combined online and offline: β = 1.47, p < 0.001). Sensitivity analyses confirmed the robustness of these findings.
Conclusion: Supplementing conventional social interactions with online communication may help older adults, especially those living with hearing loss, sustain personal relationships and benefit cognitively from them.
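A multilevel growth curve model of this general form can be sketched with statsmodels. The data frame, column names, and covariate set below are hypothetical stand-ins for the ELSA variables, and the formula is one plausible specification rather than the authors' exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format panel: one row per participant-wave, with hypothetical
# columns: pid, memory (recall score), wave (time), contact_mode
# (infrequent / offline_only / offline_online), hearing_loss (0/1),
# plus confounders such as age, sex, and education.
df = pd.read_csv("elsa_long.csv")  # hypothetical file

# Random intercept and slope over waves within participant; the
# mode-by-time interaction carries the long-term (slope) effects
# reported as betas in the abstract.
model = smf.mixedlm(
    "memory ~ wave * contact_mode * hearing_loss + age + sex + education",
    data=df,
    groups=df["pid"],
    re_formula="~wave",
)
result = model.fit()
print(result.summary())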


Author(s): Mark A. Eckert, Susan Teubner-Rhodes, Kenneth I. Vaden, Jayne B. Ahlstrom, Carolyn M. McClaskey, ...

2019, Vol. 59(4), pp. 254-262
Author(s): Maria Huber, Sebastian Roesch, Belinda Pletzer, Julia Lukaschyk, Anke Lesinski-Schiedat, ...

2019, Vol. 30(7), pp. 564-578
Author(s): Oscar M. Cañete, Suzanne C. Purdy, Colin R. S. Brown, Michel Neeff, Peter R. Thorne

A unilateral hearing loss (UHL) can have a significant functional and social impact on children and adults, affecting their quality of life. In adults, UHL is typically associated with difficulty understanding speech in noise and localizing sounds, and it increases self-perceived auditory disability across a range of listening situations. Furthermore, despite evidence for the negative effects of reduced unilateral auditory input on the neural encoding of binaural cues, the perceptual consequences of these changes are still not well understood.
Purpose: To determine the effects of UHL on auditory abilities and on speech-evoked cortical auditory evoked potentials (CAEPs).
Method: CAEPs, sound localization, speech perception in noise, and self-perception of auditory abilities (Speech, Spatial and Qualities of Hearing Scale) were assessed in 13 adults with UHL of varied etiology, duration, and severity, and in a control group of 11 binaural listeners with normal hearing.
Results: Participants with UHL varied greatly in their ability to localize sound, and they reported that speech recognition and listening effort were their greatest problems. Right-ear hearing loss had a greater effect than left-ear hearing loss on N1 amplitude hemispheric asymmetry and on N1 latencies evoked by speech syllables in noise. As duration of hearing loss increased, contralateral dominance (N1 amplitude asymmetry) decreased. N1 amplitudes correlated with speech scores: larger N1 amplitudes were associated with better speech recognition in noise. N1 latencies were delayed (in the better ear), and amplitude hemispheric asymmetry differed across UHL participants as a function of side of deafness, mainly for right-sided deafness.
Conclusions: UHL affects a range of auditory abilities, including speech perception in noise, sound localization, and self-perceived hearing disability. CAEPs elicited by speech sounds are sensitive enough to reveal changes within the auditory cortex due to UHL.
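The N1 asymmetry findings rest on two simple quantities that can be sketched directly: a peak N1 amplitude within a latency window, and a normalized contralateral-versus-ipsilateral asymmetry index. The Python below is illustrative; the latency window and the specific index formula are common conventions, not necessarily the paper's exact metric.

```python
import numpy as np

def n1_amplitude(erp, times, window=(0.08, 0.14)):
    """Most negative value within the N1 window (times in seconds);
    the 80-140 ms window is an assumed N1 latency range."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()

def asymmetry_index(contra_amp, ipsi_amp):
    """Normalized contralateral-vs-ipsilateral N1 difference; values
    near 0 indicate loss of the usual contralateral dominance.
    N1 amplitudes are negative, so magnitudes are used."""
    c, i = abs(contra_amp), abs(ipsi_amp)
    return (c - i) / (c + i)
```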

