Minimum spectral contrast for vowel identification by normal‐hearing and hearing‐impaired listeners

1987 ◽  
Vol 81 (1) ◽  
pp. 148-154 ◽  
Author(s):  
Marjorie R. Leek ◽  
Michael F. Dorman ◽  
Quentin Summerfield

2005 ◽ 
Vol 48 (4) ◽  
pp. 910-921 ◽  
Author(s):  
Laura E. Dreisbach ◽  
Marjorie R. Leek ◽  
Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
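As a rough illustration of the stimuli described above, the sketch below builds a harmonic complex whose component phases follow one common form of the Schroeder-phase formula, φ_n = ±π n(n−1)/N, and raises a few harmonics around a nominal peak frequency by a fixed spectral contrast. All parameter values (f0, number of harmonics, peak location and width, contrast in dB) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def schroeder_complex(f0=100.0, n_harmonics=50, fs=44100, dur=0.5,
                      peak_hz=2000.0, peak_width=3, contrast_db=10.0,
                      positive=True):
    """Harmonic complex with Schroeder-phase components and a single
    spectral peak raised `contrast_db` dB above the background.

    Phases follow phi_n = +/- pi * n * (n - 1) / N (one common form of
    the Schroeder formula; assumed here). Parameters are illustrative,
    not those of the study.
    """
    t = np.arange(int(fs * dur)) / fs
    sign = 1.0 if positive else -1.0
    x = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        phase = sign * np.pi * n * (n - 1) / n_harmonics
        # Raise components falling within the nominal peak region.
        near_peak = abs(n * f0 - peak_hz) <= peak_width * f0 / 2
        amp = 10 ** (contrast_db / 20) if near_peak else 1.0
        x += amp * np.sin(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))  # normalize to +/- 1
```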


2013 ◽  
Vol 22 (1) ◽  
pp. 94-104 ◽  
Author(s):  
Ashley Woodall ◽  
Chang Liu

Purpose: The aim of this study was to determine whether increasing the overall speech level or the individual spectral contrasts of vowel sounds can improve vowel formant discrimination for listeners with and without normal hearing.
Method: Thresholds of vowel formant discrimination were examined for the F2 frequencies of 3 American English vowels for listeners with and without normal hearing. Spectral contrasts of F2 were enhanced by 3, 6, and 9 dB. Vowel stimuli were presented at 70 and 90 dB SPL.
Results: The thresholds of listeners with hearing impairment were reduced significantly after spectral enhancement was implemented, especially at 90 dB SPL, whereas normal-hearing listeners did not benefit from spectral enhancement.
Conclusion: These results indicate that a combination of spectral enhancement of F2 and a high speech level is most beneficial for improving vowel formant discrimination in listeners with hearing impairment.
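The contrast enhancement manipulation described in the Method can be pictured as a narrow-band gain applied around the second formant. The sketch below is a minimal illustration under assumed parameters (an externally supplied F2 frequency and bandwidth); the abstract does not specify the authors' actual enhancement algorithm.

```python
import numpy as np

def enhance_f2(signal, fs, f2_hz, boost_db=6.0, half_bw_hz=150.0):
    """Raise the spectrum within +/- half_bw_hz of an assumed F2 frequency
    by boost_db dB, then resynthesize. Illustrative sketch only; not the
    enhancement algorithm used in the study."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gain = np.ones_like(freqs)
    gain[np.abs(freqs - f2_hz) <= half_bw_hz] = 10 ** (boost_db / 20)
    return np.fft.irfft(spectrum * gain, n=len(signal))
```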


2020 ◽  
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives: This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared with earlier reported results measured in unaided conversation conditions.
Design: Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs, each pair consisting of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification via a master hearing aid, with gain prescribed from their audiogram according to the NAL-NL2 fitting formula. Pairs of participants took part in naturalistic conversations using a referential communication task. Each pair completed five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. The resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both the acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner.
Results: The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification.
Conclusions: The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort benefits hearing-impaired individuals and their conversation partners alike. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.
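A minimal sketch of how level and spectral modifications of conversational speech might be quantified is given below. The two metrics (overall RMS level and a high-to-low band energy ratio as a crude proxy for spectral tilt) are assumptions chosen for illustration; they are not the analysis pipeline used in the study.

```python
import numpy as np

def speech_modification_metrics(x, fs):
    """Rough per-recording metrics: overall RMS level (dB) and a
    high-to-low band energy ratio as a crude proxy for spectral tilt.
    Illustrative only; not the analysis used in the study."""
    level_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    low = spectrum[(freqs >= 100) & (freqs < 1000)].sum()
    high = spectrum[(freqs >= 1000) & (freqs < 8000)].sum()
    tilt_db = 10 * np.log10((high + 1e-12) / (low + 1e-12))
    return level_db, tilt_db
```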


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite for rendering an auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that had been equalized in level. Their perception of auditory distance was, however, more contracted than that of normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
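As a cartoon of the distance cues at issue, the sketch below attenuates the direct part of a pre-rendered signal with distance while leaving the reverberant part roughly constant, then equalizes overall level so that loudness is not a cue, mirroring the level-equalized stimuli described above. The function and its inputs are hypothetical; this is not the rendering method used in the study.

```python
import numpy as np

def render_distance(direct, reverb, distance_m, ref_m=1.0):
    """Crude distance rendering: attenuate the direct path by 1/r while
    keeping the reverberant energy roughly constant, then RMS-equalize
    the mix so that overall level is not a distance cue. Illustrative
    only; `direct` and `reverb` are assumed pre-rendered signals of
    equal length."""
    gain = ref_m / max(distance_m, ref_m)
    y = gain * direct + reverb
    return y / (np.sqrt(np.mean(y ** 2)) + 1e-12)  # equalize RMS level
```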

