Does hearing loss affect the use of information at different frequencies? Results from a simultaneous tonal pattern discrimination task in normal-hearing and hearing-impaired listeners

2017 ◽  
Vol 141 (5) ◽  
pp. 3902-3902
Author(s):  
Elin Roverud ◽  
Virginia Best ◽  
Judy R. Dubno ◽  
Christine Mason ◽  
Gerald Kidd

2020 ◽  
Vol 24 ◽  
pp. 233121652094551
Author(s):  
Elin Roverud ◽  
Judy R. Dubno ◽  
Gerald Kidd

Many listeners with sensorineural hearing loss have uneven hearing sensitivity across frequencies. This study addressed whether such uneven hearing loss biases attention toward particular frequency regions. Normal-hearing (NH) and hearing-impaired (HI) listeners performed a pattern discrimination task at two widely separated center frequencies (CFs): 750 and 3500 Hz. The patterns were sequences of pure tones in which each successive tonal element was randomly selected from one of two possible frequencies surrounding a CF. The stimuli were presented at equal sensation levels to ensure equal audibility. In addition, the frequency separation of the tonal elements within a pattern was adjusted for each listener so that pattern discrimination performance in quiet was equal at the two CFs. After these adjustments, the pattern discrimination task was performed under conditions in which independent patterns were presented at both CFs simultaneously. The listeners were instructed to attend to the low or high CF either before the stimulus (assessing selective attention to frequency with instruction) or after the stimulus (divided attention, assessing inherent frequency biases). NH listeners showed approximately equal performance decrements (re: quiet) at the two CFs. HI listeners showed much larger decrements at the 3500-Hz CF than at the 750-Hz CF in the combined-presentation conditions under both the selective and the divided attention instructions, indicating a low-frequency attentional bias that is apparently not under subject control. Surprisingly, the magnitude of this frequency bias was not related to the degree of asymmetry in thresholds at the two CFs.
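A minimal sketch (Python/NumPy, not the authors' code) of the stimulus construction described above: each pattern is a sequence of tone bursts drawn at random from two frequencies bracketing a CF, and independent low- and high-CF patterns are summed for combined presentation. The element count, burst duration, ramps, and frequency separations below are illustrative assumptions; in the study, the separation was adjusted per listener and levels were equated in sensation level.

```python
import numpy as np

def tonal_pattern(cf, sep, n_elements=8, elem_dur=0.06, fs=44100, rng=None):
    """Sequence of tone bursts, each drawn at random from the two
    frequencies cf - sep/2 and cf + sep/2 (Hz)."""
    rng = rng if rng is not None else np.random.default_rng()
    t = np.arange(int(elem_dur * fs)) / fs
    ramp = np.minimum(1.0, t / 0.005)          # 5-ms linear onset ramp
    env = ramp * ramp[::-1]                    # symmetric onset/offset ramps
    freqs = rng.choice([cf - sep / 2, cf + sep / 2], size=n_elements)
    pattern = np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])
    return pattern, freqs

# Independent patterns at the two CFs, summed for combined presentation:
low, _ = tonal_pattern(750.0, sep=50.0)
high, _ = tonal_pattern(3500.0, sep=250.0)
mixture = low + high
```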


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite for rendering an impression of auditory distance is sound externalization, the perception of synthesized stimuli as located outside the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listener. Two configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby from far sounds that were equalized in level. Their perception of auditory distance was, however, more contracted than that of normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.


1984 ◽  
Vol 27 (1) ◽  
pp. 12-19 ◽  
Author(s):  
Shlomo Silman ◽  
Carol Ann Silverman ◽  
Theresa Showers ◽  
Stanley A. Gelfand

The effect of age on accuracy of prediction of hearing impairment with the bivariate-plotting procedure was investigated in 72 normal-hearing subjects aged 20–69 years and in 86 sensorineural hearing-impaired subjects aged 20–83 years. The predictive accuracy with the bivariate-plotting procedure improved markedly when the data from subjects over 44 years of age were excluded from the bivariate plot. The predictive accuracy improved further when the construction of the line segments in the traditional bivariate plot was modified.


1999 ◽  
Vol 42 (4) ◽  
pp. 773-784 ◽  
Author(s):  
Christopher W. Turner ◽  
Siu-Ling Chi ◽  
Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
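The abstract does not spell out the channel processing; the sketch below is a generic noise-excited envelope vocoder of the kind commonly used to limit speech to 1, 2, 4, or 8 channels of spectral information while preserving each channel's temporal envelope. The band edges, filter order, and Hilbert-envelope extraction are assumptions, not the 1999 study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels, lo=100.0, hi=6000.0):
    """Crude n-channel noise vocoder: split lo..hi Hz into log-spaced bands,
    extract each band's temporal envelope, and reimpose it on bandpass noise.
    Requires fs > 2*hi."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                      # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                             # envelope on noise band
    return out
```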


2005 ◽  
Vol 48 (4) ◽  
pp. 910-921 ◽  
Author(s):  
Laura E. Dreisbach ◽  
Marjorie R. Leek ◽  
Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
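The phase algorithm referred to is the Schroeder-phase recipe, commonly written as theta_n = c * pi * n * (n + 1) / N with c = +1 or -1. Below is a sketch of such a harmonic complex with a single level-boosted spectral peak; the f0, harmonic count, and peak parameters are illustrative, and sign conventions for "positive"/"negative" Schroeder vary across papers.

```python
import numpy as np

def schroeder_complex(f0=100.0, n_harm=40, c=+1, peak_hz=2000.0,
                      peak_db=10.0, bw_harm=2, dur=0.5, fs=44100):
    """Harmonic complex with Schroeder phases (c=+1 or c=-1) and a
    level-boosted group of harmonics forming one spectral peak."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    peak_n = round(peak_hz / f0)                         # harmonic nearest the peak
    for n in range(1, n_harm + 1):
        phase = c * np.pi * n * (n + 1) / n_harm         # Schroeder phase
        boost = 10 ** (peak_db / 20) if abs(n - peak_n) <= bw_harm else 1.0
        x += boost * np.sin(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))                         # normalize peak amplitude
```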


Author(s):  
Elina Nirgianaki ◽  
Maria Bitzanaki

The present study investigates the acoustic characteristics of Greek vowels produced by hearing-impaired children with profound prelingual hearing loss and cochlear implants. The results revealed a significant difference in duration between vowels produced by hearing-impaired children and those produced by normal-hearing ones. Stressed vowels were significantly longer than non-stressed vowels for both groups, while F0, F1, and F2 did not differ significantly between the two groups for any vowel, with the exception of /a/, which had a significantly higher F1 when produced by hearing-impaired children. Acoustic vowel spaces were similar for the two groups but shifted towards higher frequencies in the low-high dimension and somewhat reduced in the front-back dimension for the hearing-impaired group.
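For illustration, the acoustic measures named here (F1, F2) are commonly extracted with an LPC root-finding recipe like the sketch below. This is not the authors' measurement procedure; the LPC order and the 90-Hz floor are rule-of-thumb assumptions, and Praat-based tools are a common alternative.

```python
import numpy as np
import librosa

def vowel_formants(y, fs, order=12):
    """Rough F1/F2 estimate for one vowel segment via LPC root-finding."""
    a = librosa.lpc(y * np.hamming(len(y)), order=order)   # LPC polynomial coeffs
    roots = [r for r in np.roots(a) if np.imag(r) > 0]     # keep upper half-plane poles
    freqs = sorted(np.angle(roots) * fs / (2 * np.pi))     # pole angles -> Hz
    return [f for f in freqs if f > 90][:2]                # F1, F2 (drop near-DC poles)
```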


2021 ◽  
Author(s):  
Marlies Gillis ◽  
Lien Decruy ◽  
Jonas Vanthornhout ◽  
Tom Francart

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers.

Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. For hearing-impaired listeners, this increase was not observed.

Our results support the idea that the neural response latency indicates the efficiency of neural speech processing. Hearing-impaired listeners process speech in quiet less efficiently than normal-hearing listeners. Our results suggest that this reduction in neural speech processing efficiency is a gradual effect which occurs as hearing deteriorates. Moreover, the efficiency of neural speech processing in hearing-impaired listeners is already at its lowest level when listening to speech in quiet, while normal-hearing listeners show a further decrease in efficiency when the noise level increases.

From our results, it is apparent that sound amplification does not solve hearing loss. Even when intelligibility is apparently perfect, hearing-impaired listeners process speech less efficiently.
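"Forward modelling" here means predicting the EEG from the speech envelope. The abstract gives no implementation details, so the sketch below is a generic ridge-regression temporal response function (TRF), not the authors' pipeline; the lag window, regularization strength alpha, and single EEG channel are illustrative assumptions. Prediction accuracy stands in for "neural tracking" and the peak lag for response latency.

```python
import numpy as np

def estimate_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, alpha=1e3):
    """Ridge-regression TRF: predict eeg[t] from speech-envelope samples
    at lags tmin..tmax seconds."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.zeros((len(envelope), len(lags)))
    for j, k in enumerate(lags):                      # build lagged design matrix
        if k >= 0:
            X[k:, j] = envelope[: len(envelope) - k]
        else:
            X[:k, j] = envelope[-k:]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
    latency_s = lags[np.argmax(np.abs(w))] / fs       # lag of largest TRF deflection
    tracking = np.corrcoef(X @ w, eeg)[0, 1]          # in-sample here; the study
    return w, latency_s, tracking                     # would cross-validate
```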


2019 ◽  
Vol 23 ◽  
pp. 233121651985396 ◽  
Author(s):  
Brian C. J. Moore ◽  
Sashi Mariathasan ◽  
Aleksander P. Sęk

Detection of frequency modulation (FM) at a 10-Hz rate may depend on conversion of FM to amplitude modulation (AM) in the cochlea, whereas detection of 2-Hz FM may depend on the use of temporal fine structure (TFS) information. TFS processing may worsen with greater age and hearing loss, while AM processing probably does not. A two-stage experiment was conducted to test these ideas while controlling for the effects of detection efficiency. Stage 1 measured psychometric functions for the detection of AM alone and FM alone imposed on a 1-kHz carrier, using 2- and 10-Hz rates. Stage 2 assessed the discrimination of AM from FM at the same modulation rate when the detectability of the AM alone and FM alone was equated. Discrimination was better for the 2-Hz than for the 10-Hz rate for all young normal-hearing subjects and for some older subjects with normal hearing at 1 kHz. Other older subjects with normal hearing showed no clear difference in AM-FM discrimination between the 2- and 10-Hz rates, as was the case for most older hearing-impaired subjects. The results suggest that the ability to use TFS cues is reduced for some older people and for most hearing-impaired people.
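For reference, the Stage-1 stimuli can be written down directly: AM scales a 1-kHz carrier by 1 + m*sin(2*pi*fm*t), while FM modulates the carrier phase with index beta = delta_f / fm. A sketch with illustrative modulation depths follows; in the experiment, m and delta_f were set per listener to equate detectability.

```python
import numpy as np

def am_tone(fc=1000.0, fm=2.0, m=0.1, dur=1.0, fs=44100):
    """Sinusoidally amplitude-modulated tone; m is the modulation depth."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def fm_tone(fc=1000.0, fm=2.0, df=20.0, dur=1.0, fs=44100):
    """Sinusoidally frequency-modulated tone; df is the peak deviation in Hz."""
    t = np.arange(int(dur * fs)) / fs
    beta = df / fm                     # modulation index = peak deviation / rate
    return np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
```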


1981 ◽  
Vol 24 (1) ◽  
pp. 108-112 ◽  
Author(s):  
P. M. Zurek ◽  
C. Formby

Thresholds for frequency modulation were measured with an adaptive, two-alternative forced-choice method in ten listeners: eight with varying degrees of sensorineural hearing impairment and two with normal hearing sensitivity. Results for test frequencies spaced at octave intervals between 125 and 4000 Hz showed that, relative to normal-hearing listeners, the ability of the hearing-impaired listeners to detect a sinusoidal frequency modulation (1) is diminished above a certain level of hearing loss and (2) is more disrupted for low-frequency tones than for high-frequency tones, given the same degree of hearing loss at the test frequency. The first finding is consistent with that of previous studies which show a general deterioration of frequency-discrimination ability associated with moderate, or worse, hearing loss. It is proposed that the second finding may be explained (1) by differential impairment of the temporal and place mechanisms presumed to encode pitch at the lower and higher frequencies, respectively, and/or (2) for certain configurations of hearing loss, by an asymmetrical pattern of cochlear excitation that may lead to underestimation, from measurements of threshold sensitivity, of hearing impairment for low-frequency tones and consequently to relatively large changes in frequency discrimination for small shifts in hearing threshold.
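The abstract names an adaptive two-alternative forced-choice method but not the tracking rule. The sketch below is a generic 2-down/1-up staircase (Levitt, 1971), one common choice, with an illustrative step size and stopping rule; `respond` stands for one simulated or real 2AFC trial at the given modulation depth.

```python
import numpy as np

def two_down_one_up(respond, start=8.0, step_db=2.0, n_reversals=8, max_trials=400):
    """Generic 2-down/1-up track, converging near 70.7% correct (Levitt, 1971).
    respond(x) runs one 2AFC trial at depth x and returns True if correct."""
    x, n_correct, direction, reversals = start, 0, 0, []
    for _ in range(max_trials):
        if respond(x):
            n_correct += 1
            if n_correct < 2:
                continue
            n_correct, step = 0, -1               # down after 2 correct
        else:
            n_correct, step = 0, +1               # up after 1 error
        if direction and step != direction:
            reversals.append(x)                   # track direction reversed
        direction = step
        x *= 10 ** (step * step_db / 20)          # fixed step on a dB scale
        if len(reversals) >= n_reversals:
            break
    return float(np.mean(reversals[-6:]))         # threshold: mean of late reversals

# e.g., a simulated observer whose accuracy grows with modulation depth:
rng = np.random.default_rng(0)
threshold = two_down_one_up(lambda x: rng.random() < 0.5 + 0.5 / (1 + np.exp(-(x - 3))))
```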


2010 ◽  
Vol 21 (08) ◽  
pp. 493-511
Author(s):  
Amanda J. Ortmann ◽  
Catherine V. Palmer ◽  
Sheila R. Pratt

Background: A possible voicing cue used to differentiate voiced and voiceless cognate pairs is envelope onset asynchrony (EOA). EOA is the time between the onsets of two frequency bands of energy (in this study one band was high-pass filtered at 3000 Hz, the other low-pass filtered at 350 Hz). This study assessed the perceptual impact of manipulating EOA on voicing perception of initial stop consonants, and whether normal-hearing and hearing-impaired listeners were sensitive to changes in EOA as a cue for voicing.

Purpose: The purpose of this study was to examine the effect of spectrally asynchronous auditory delay on the perception of voicing associated with initial stop consonants by normal-hearing and hearing-impaired listeners.

Research Design: Prospective experimental study comparing the perceptual differences of manipulating the EOA cues for two groups of listeners.

Study Sample: Thirty adults between the ages of 21 and 60 yr completed the study: 17 listeners with normal hearing and 13 listeners with mild-moderate sensorineural hearing loss.

Data Collection and Analysis: The participants listened to voiced and voiceless stop consonants within a consonant-vowel syllable structure. The EOA of each syllable was varied along a continuum, and identification and discrimination tasks were used to determine if the EOA manipulation resulted in categorical shifts in voicing perception. In the identification task the participants identified the consonants as belonging to one of two categories (voiced or voiceless cognate). They also completed a same-different discrimination task with the same set of stimuli. Categorical perception was confirmed with a d-prime sensitivity measure by examining how accurately the results from the identification task predicted the discrimination results. The influence of EOA manipulations on the perception of voicing was determined from shifts in the identification functions and discrimination peaks along the EOA continuum. The two participant groups were compared in order to determine the impact of EOA on voicing perception as a function of syllable and hearing status.

Results: Both groups of listeners demonstrated a categorical shift in voicing perception with manipulation of EOA for some of the syllables used in this study. That is, as the temporal onset asynchrony between low- and high-frequency bands of speech was manipulated, the listeners' perception of consonant voicing changed between voiced and voiceless categories. No significant differences were found between listeners with normal hearing and listeners with hearing loss as a result of the EOA manipulation.

Conclusions: The results of this study suggested that both normal-hearing and hearing-impaired listeners likely use spectrally asynchronous delays found in natural speech as a cue for voicing distinctions. While delays in modern hearing aids are less than those used in this study, possible implications are that additional asynchronous delays from digital signal processing or open-fitting amplification schemes might cause listeners with hearing loss to misperceive voicing cues.
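A minimal sketch of the EOA manipulation as described: split the signal at the stated band edges (low pass at 350 Hz, high pass at 3000 Hz) and shift the high band's onset relative to the low band. The filter type and order and the recombination by simple summation are assumptions, not the study's exact processing.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def set_eoa(x, fs, eoa_ms):
    """Impose an envelope onset asynchrony: delay the high band (>3 kHz)
    relative to the low band (<350 Hz) by eoa_ms (negative = high band leads)."""
    lo = sosfilt(butter(4, 350.0, btype="lowpass", fs=fs, output="sos"), x)
    hi = sosfilt(butter(4, 3000.0, btype="highpass", fs=fs, output="sos"), x)
    shift = int(round(eoa_ms * 1e-3 * fs))           # asynchrony in samples
    hi_shifted = np.zeros_like(hi)
    if shift >= 0:
        hi_shifted[shift:] = hi[: len(hi) - shift]   # delay the high band
    else:
        hi_shifted[:shift] = hi[-shift:]             # advance the high band
    return lo + hi_shifted                           # recombine the two bands
```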

