F0 Processing and the Separation of Competing Speech Signals by Listeners With Normal Hearing and With Hearing Loss

1998 ◽  
Vol 41 (6) ◽  
pp. 1294-1306 ◽  
Author(s):  
Van Summers ◽  
Marjorie R. Leek

Normal-hearing and hearing-impaired listeners were tested to determine F0 difference limens for synthetic tokens of 5 steady-state vowels. The same stimuli were then used in a concurrent-vowel labeling task with the F0 difference between concurrent vowels ranging between 0 and 4 semitones. Finally, speech recognition was tested for synthetic sentences in the presence of a competing synthetic voice with the same, a higher, or a lower F0. Normal-hearing listeners and hearing-impaired listeners with small F0-discrimination (ΔF0) thresholds showed improvements in vowel labeling when there were differences in F0 between vowels on the concurrent-vowel task. Impaired listeners with high ΔF0 thresholds did not benefit from F0 differences between vowels. At the group level, normal-hearing listeners benefited more than hearing-impaired listeners from F0 differences between competing signals on both the concurrent-vowel and sentence tasks. However, for individual listeners, ΔF0 thresholds and improvements in concurrent-vowel labeling based on F0 differences were only weakly associated with F0-based improvements in performance on the sentence task. For both the concurrent-vowel and sentence tasks, there was evidence that the ability to benefit from F0 differences between competing signals decreases with age.
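The F0 separations above are specified in semitones, i.e., on a log-frequency scale where each semitone is a factor of 2^(1/12). A minimal Python sketch of the conversion (the 100 Hz reference F0 is illustrative, not a value from the study):

```python
import numpy as np

def shift_f0(f0_hz: float, semitones: float) -> float:
    """Shift an F0 by a signed number of semitones.

    One semitone is a factor of 2**(1/12), so the study's maximum
    4-semitone separation corresponds to a ~26% difference in F0.
    """
    return f0_hz * 2.0 ** (semitones / 12.0)

# Competing-voice F0s relative to an illustrative 100 Hz target voice:
for st in (0, 1, 2, 4):
    print(f"{st} semitones above 100 Hz -> {shift_f0(100.0, st):.1f} Hz")
```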

2013 ◽  
Vol 24 (04) ◽  
pp. 274-292 ◽  
Author(s):  
Van Summers ◽  
Matthew J. Makashay ◽  
Sarah M. Theodoroff ◽  
Marjorie R. Leek

Background: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role. Purpose: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise. Research Design: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios. Study Sample: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8). Results: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. Actual speech scores varied by as much as 80% across subjects. Conclusions: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.
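As an aside on the FM detection task, the stimulus is a tone whose frequency is slowly modulated while random amplitude modulation is superimposed so that loudness fluctuations cannot substitute for the temporal-fine-structure cue. A minimal sketch under assumed modulation parameters (rate, depth, and deviation are not taken from the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def fm_tone(fc=1000.0, fm_rate=2.0, fm_dev=5.0, dur=0.5, fs=44100):
    """Tone with slow sinusoidal frequency modulation (the cue to detect).

    fc matches one of the study's test frequencies; fm_rate and fm_dev
    are illustrative values.
    """
    t = np.arange(int(dur * fs)) / fs
    # Phase is the integral of the instantaneous frequency
    # fc + fm_dev * sin(2*pi*fm_rate*t).
    phase = 2 * np.pi * fc * t - (fm_dev / fm_rate) * np.cos(2 * np.pi * fm_rate * t)
    return np.sin(phase)

def random_am(x, fs=44100, depth=0.3, cutoff=10.0):
    """Impose random low-rate amplitude modulation on a stimulus."""
    b, a = butter(2, cutoff / (fs / 2))           # low-pass filter for the envelope
    env = filtfilt(b, a, np.random.randn(len(x)))
    return x * (1.0 + depth * env / np.max(np.abs(env)))

stimulus = random_am(fm_tone())
```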


1990 ◽  
Vol 33 (4) ◽  
pp. 726-735 ◽  
Author(s):  
Larry E. Humes ◽  
Lisa Roberts

The role that sensorineural hearing loss plays in the speech-recognition difficulties of the hearing-impaired elderly is examined. One approach to this issue was to make between-group comparisons of performance for three groups of subjects: (a) young normal-hearing adults; (b) elderly hearing-impaired adults; and (c) young normal-hearing adults with simulated sensorineural hearing loss equivalent to that of the elderly subjects produced by a spectrally shaped masking noise. Another approach to this issue employed correlational analyses to examine the relation between audibility and speech recognition within the group of elderly hearing-impaired subjects. An additional approach was pursued in which an acoustical index incorporating adjustments for threshold elevation was used to examine the role audibility played in the speech-recognition performance of the hearing-impaired elderly. A wide range of listening conditions was sampled in this experiment. The conclusion was that the primary determiner of speech-recognition performance in the elderly hearing-impaired subjects was their threshold elevation.
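The simulated-loss condition rests on spectrally shaped masking noise: noise whose level-vs-frequency contour is chosen so that a normal-hearing listener's masked thresholds approximate the impaired group's audiogram. A sketch of the shaping step only (the contour below is illustrative, and a real simulation would calibrate levels against measured masked thresholds):

```python
import numpy as np

def shaped_noise(contour_db, freqs_hz, dur=1.0, fs=16000):
    """Gaussian noise spectrally shaped to a target dB contour.

    contour_db : relative noise level (dB) at each anchor frequency.
    freqs_hz   : anchor frequencies for the contour (interpolated between).
    """
    n = int(dur * fs)
    spec = np.fft.rfft(np.random.randn(n))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    gain_db = np.interp(f, freqs_hz, contour_db)  # clamps outside the anchors
    spec *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n)

# Illustrative sloping contour, mimicking a high-frequency loss:
noise = shaped_noise([0, 0, 10, 25, 40], [250, 500, 1000, 2000, 4000])
```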


1995 ◽  
Vol 38 (1) ◽  
pp. 222-233 ◽  
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell

The effect of amplitude-modulated (AM) noise on speech recognition in listeners with normal and impaired hearing was investigated in two experiments. In the first experiment, nonsense syllables were presented in high-pass steady-state or AM noise to determine whether the release from masking in AM noise relative to steady-state noise was significantly different between normal-hearing and hearing-impaired subjects when the two groups listened under equivalent masker conditions. The normal-hearing subjects were tested in the experimental noise under two conditions: (a) in a spectrally shaped broadband noise that produced pure tone thresholds equivalent to those of the hearing-impaired subjects, and (b) without the spectrally shaped broadband noise. The release from masking in AM noise was significantly greater for the normal-hearing group than for either the hearing-impaired or masked normal-hearing groups. In the second experiment, normal-hearing and hearing-impaired subjects identified nonsense syllables in isolation and target words in sentences in steady-state or AM noise adjusted to approximate the spectral shape and gain of a hearing aid prescription. The release from masking was significantly less for the subjects with impaired hearing. These data suggest that hearing-impaired listeners obtain less release from masking in AM noise than do normal-hearing listeners even when both the speech and noise are presented at levels that are above threshold over much of the speech frequency range.
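Sinusoidally amplitude-modulated noise of the kind used here can be sketched as steady noise multiplied by a raised sinusoid; the envelope dips are what normal-hearing listeners can "listen into". Rate and depth below are illustrative, not the study's parameters:

```python
import numpy as np

def am_noise(dur=1.0, fs=16000, rate=10.0, depth=1.0):
    """Gaussian noise with a sinusoidal amplitude envelope.

    depth=1 gives deep envelope minima; depth=0 reduces to steady-state
    noise, the comparison condition for computing release from masking.
    """
    t = np.arange(int(dur * fs)) / fs
    env = 1.0 + depth * np.sin(2.0 * np.pi * rate * t)
    return np.random.randn(len(t)) * env

# Release from masking is then the performance gain measured in am_noise
# relative to the same noise with depth=0.
```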


2019 ◽  
Vol 62 (3) ◽  
pp. 758-767 ◽  
Author(s):  
Raymond L. Goldsworthy ◽  
Kali L. Markle

Purpose Speech recognition deteriorates with hearing loss, particularly in fluctuating background noise. This study examined how hearing loss affects speech recognition in different types of noise to clarify how characteristics of the noise interact with the benefits listeners receive when listening in fluctuating compared to steady-state noise. Method Speech reception thresholds were measured for a closed set of spondee words in children (ages 5–17 years) in quiet, speech-spectrum noise, 2-talker babble, and instrumental music. Twenty children with normal hearing and 43 children with hearing loss participated; children with hearing loss were subdivided into cochlear implant (18 children) and hearing aid (25 children) groups. A cohort of adults with normal hearing was included for comparison. Results Hearing loss had a large effect on speech recognition in each condition, but the effect of hearing loss was largest in 2-talker babble and smallest in speech-spectrum noise. Children with normal hearing had better speech recognition in 2-talker babble than in speech-spectrum noise, whereas children with hearing loss had worse recognition in 2-talker babble than in speech-spectrum noise. Almost all subjects had better speech recognition in instrumental music than in speech-spectrum noise, but the difference was smaller for children with hearing loss. Conclusions Speech recognition is more sensitive to the effects of hearing loss when measured in fluctuating than in steady-state noise. Speech recognition measured in fluctuating noise depends on an interaction of hearing loss with characteristics of the background noise; specifically, children with hearing loss derived a substantial benefit from listening in fluctuating noise when it was instrumental music rather than 2-talker babble.
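Closed-set speech reception thresholds of this kind are commonly tracked adaptively. A minimal one-up/one-down staircase, which converges near 50% correct, is sketched below; step size and trial count are illustrative, and the paper's exact procedure may differ:

```python
import numpy as np

def srt_staircase(run_trial, start_snr=10.0, step=2.0, n_trials=30):
    """Estimate an SRT with a one-up/one-down adaptive track.

    run_trial(snr) -> bool runs one spondee trial at the given SNR and
    reports whether the listener responded correctly.  The SRT estimate
    is the mean SNR at the final reversal points.
    """
    snr, direction, reversals = start_snr, 0, []
    for _ in range(n_trials):
        new_dir = -1 if run_trial(snr) else +1   # harder after a hit, easier after a miss
        if direction and new_dir != direction:   # track changed direction: a reversal
            reversals.append(snr)
        direction = new_dir
        snr += new_dir * step
    return float(np.mean(reversals[-6:])) if reversals else snr
```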


2014 ◽  
Vol 25 (04) ◽  
pp. 355-366 ◽  
Author(s):  
Lauren Calandruccio ◽  
Ann R. Bradlow ◽  
Sumitrajit Dhar

Background: Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. Purpose: The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. Research Design: A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was used. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Study Sample: Three listener groups were tested, including monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric mild sloping to moderate sensorineural hearing loss. Data Collection and Analysis: Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups. Results: Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native accented, allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the nonnative English-speaking listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Conclusions: Slight modifications between the target and the masker speech allowed monolingual English speakers with normal hearing to improve their recognition of native-accented English, even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker or determining why monolingual normal-hearing listeners can take advantage of these differences could help improve speech recognition for those with hearing loss in the future.
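The key control here is matching the maskers' long-term average speech spectra. One standard way to do this, sketched below with assumed analysis settings (not the authors' exact procedure), is to estimate both long-term spectra, convert their ratio into a correction filter, and apply it to the masker:

```python
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def match_ltass(x, ref, fs, n_taps=513):
    """Filter x so its long-term average spectrum approximates ref's."""
    f, pxx = welch(x, fs, nperseg=2048)        # long-term spectrum of x
    _, pref = welch(ref, fs, nperseg=2048)     # long-term spectrum of the reference
    gain = np.clip(np.sqrt(pref / np.maximum(pxx, 1e-12)), 0.0, 10.0)
    # Linear-phase FIR whose response follows the desired gain contour
    # (frequencies must be normalized to Nyquist for firwin2).
    taps = firwin2(n_taps, f / (fs / 2.0), gain)
    return lfilter(taps, 1.0, x)
```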


2019 ◽  
Vol 62 (4) ◽  
pp. 1051-1067 ◽  
Author(s):  
Jonathan H. Venezia ◽  
Allison-Graham Martin ◽  
Gregory Hickok ◽  
Virginia M. Richards

Purpose Age-related sensorineural hearing loss can dramatically affect speech recognition performance due to reduced audibility and suprathreshold distortion of spectrotemporal information. Normal aging produces changes within the central auditory system that impose further distortions. The goal of this study was to characterize the effects of aging and hearing loss on perceptual representations of speech. Method We asked whether speech intelligibility is supported by different patterns of spectrotemporal modulations (STMs) in older listeners compared to young normal-hearing listeners. We recruited 3 groups of participants: 20 older hearing-impaired (OHI) listeners, 19 age-matched normal-hearing listeners, and 10 young normal-hearing (YNH) listeners. Listeners performed a speech recognition task in which randomly selected regions of the speech STM spectrum were revealed from trial to trial. The overall amount of STM information was varied using an up–down staircase to hold performance at 50% correct. Ordinal regression was used to estimate weights showing which regions of the STM spectrum were associated with good performance (a “classification image” or CImg). Results The results indicated that (a) large-scale CImg patterns did not differ between the 3 groups; (b) weights in a small region of the CImg decreased systematically as hearing loss increased; (c) CImgs were also nonsystematically distorted in OHI listeners, and the magnitude of this distortion predicted speech recognition performance even after accounting for audibility; and (d) YNH listeners performed better overall than the older groups. Conclusion We conclude that OHI/older normal-hearing listeners rely on the same speech STMs as YNH listeners but encode this information less efficiently. Supplemental Material https://doi.org/10.23641/asha.7859981
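The CImg analysis can be pictured as a regression from trial-by-trial revealed STM regions to response accuracy. Below is a toy sketch with simulated data, using binary logistic regression as a simpler stand-in for the paper's ordinal regression; the grid size and all data are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, grid = 2000, (8, 8)        # coarse 8x8 STM grid, illustrative only

# Each trial reveals a random subset of STM cells (binary mask).
masks = rng.integers(0, 2, size=(n_trials, grid[0] * grid[1]))

# Simulated listener: one STM cell actually carries the intelligibility cue.
true_w = np.zeros(grid[0] * grid[1])
true_w[18] = 2.0
p_correct = 1.0 / (1.0 + np.exp(-(masks @ true_w - 1.0)))
correct = rng.random(n_trials) < p_correct

# The fitted weights, reshaped to the grid, form the classification image:
# positive weights mark STM regions whose audibility predicts good trials.
cimg = LogisticRegression(max_iter=1000).fit(masks, correct).coef_.reshape(grid)
```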


2002 ◽  
Vol 13 (05) ◽  
pp. 236-245 ◽  
Author(s):  
Gary Rance ◽  
Field Rickards

This retrospective study examines the relationship between auditory steady-state evoked potential (ASSEP) thresholds determined in infancy and subsequently obtained behavioral hearing levels in children with normal hearing or varying degrees of sensorineural hearing loss. Overall, the results from 211 subjects showed that the two test techniques were highly correlated, with Pearson r values exceeding .95 at each of the audiometric test frequencies between 500 and 4000 Hz. Analysis of the findings for babies with significant hearing loss (moderate to profound levels) showed similar threshold relationships to those obtained in previous studies involving adults and older children. The results for infants with normal or near-normal hearing did, however, differ from those reported for older subjects, with behavioral thresholds typically 10 to 15 dB better than would have been predicted from their ASSEP levels.
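The reported agreement can be summarized with a Pearson correlation plus a linear mapping from ASSEP to behavioral thresholds; the systematic 10 to 15 dB offset in near-normal-hearing infants would surface in the fit's slope and intercept. A sketch with invented data (not the study's 211 subjects):

```python
import numpy as np
from scipy.stats import pearsonr, linregress

# Illustrative paired thresholds in dB HL at one audiometric frequency.
assep = np.array([40.0, 55.0, 70.0, 85.0, 95.0, 60.0, 75.0])
behavioral = np.array([30.0, 48.0, 65.0, 82.0, 93.0, 52.0, 70.0])

r, p = pearsonr(assep, behavioral)
fit = linregress(assep, behavioral)   # predict behavioral from ASSEP
print(f"r = {r:.2f}; behavioral ≈ {fit.slope:.2f} * ASSEP {fit.intercept:+.1f} dB")
```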


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite for rendering an impression of auditory distance is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was, however, more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
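Equalizing the stimuli in level, as done for the near/far discrimination, removes overall intensity as a distance cue so that only spectral and reverberation-related cues remain. A minimal sketch (the target RMS is arbitrary):

```python
import numpy as np

def equalize_rms(stimuli, target_rms=0.05):
    """Scale each stimulus to a common RMS so overall level cannot
    serve as a distance cue."""
    out = []
    for x in stimuli:
        rms = np.sqrt(np.mean(np.square(x)))
        out.append(x * (target_rms / max(rms, 1e-12)))
    return out
```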


1984 ◽  
Vol 27 (1) ◽  
pp. 12-19 ◽  
Author(s):  
Shlomo Silman ◽  
Carol Ann Silverman ◽  
Theresa Showers ◽  
Stanley A. Gelfand

The effect of age on accuracy of prediction of hearing impairment with the bivariate-plotting procedure was investigated in 72 normal-hearing subjects aged 20–69 years and in 86 sensorineural hearing-impaired subjects aged 20–83 years. The predictive accuracy with the bivariate-plotting procedure improved markedly when the data from subjects over 44 years of age were excluded from the bivariate plot. The predictive accuracy improved further when the construction of the line segments in the traditional bivariate plot was modified.


1994 ◽  
Vol 95 (5) ◽  
pp. 2992-2993 ◽
Author(s):  
Laurie S. Eisenberg ◽  
Donald D. Dirks ◽  
Theodore S. Bell
