Temporal Loudness Integration and Spectral Loudness Summation in Normal-hearing and Hearing-impaired Listeners

1999 ◽  
Vol 119 (2) ◽  
pp. 154-157 ◽  
Author(s):  
Stéphane Garnier, Christophe Micheyl
1985 ◽  
Vol 28 (3) ◽  
pp. 445-448 ◽  
Author(s):  
Joseph W. Hall ◽  
Antony D.G. Harvey

Diotic loudness summation at 500 and 2000 Hz was measured in 10 normal-hearing and 10 cochlear-impaired listeners. Diotic stimuli were matched in loudness to monaural "standards" of 70, 80, and 90 dB SPL. Diotic loudness summation averaged about 9 dB at 500 Hz for both groups. At 2000 Hz, the hearing-impaired listeners showed reduced diotic loudness summation at the 70- and 80-dB levels, but showed normal diotic loudness summation (about 9 dB) at the 90-dB level. The results indicate that diotic loudness summation is normal in cochlear-impaired ears, provided that the stimuli are presented sufficiently above threshold.


1980 ◽  
Vol 23 (3) ◽  
pp. 646-669 ◽  
Author(s):  
Mary Florentine ◽  
Søren Buus ◽  
Bertram Scharf ◽  
Eberhard Zwicker

This study compares frequency selectivity—as measured by four different methods—in observers with normal hearing and in observers with conductive (non-otosclerotic), otosclerotic, noise-induced, or degenerative hearing losses. Each category of loss was represented by a group of 7 to 10 observers, who were tested at center frequencies of 500 Hz and 4000 Hz. For each group, the following four measurements were made: psychoacoustical tuning curves, narrow-band masking, two-tone masking, and loudness summation. Results showed that (a) frequency selectivity was reduced at frequencies where a cochlear hearing loss was present, (b) frequency selectivity was reduced regardless of the test level at which normally-hearing observers and observers with cochlear impairment were compared, (c) all four measures of frequency selectivity were significantly correlated, and (d) reduced frequency selectivity was positively correlated with the amount of cochlear hearing loss.


2020 ◽  
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives: This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared to earlier reported results measured in unaided conversation conditions.

Design: Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs. Each pair consisted of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification, according to their audiogram, via a master hearing aid with gain provided according to the NAL-NL2 fitting formula. Pairs of participants were required to take part in naturalistic conversations through the use of a referential communication task. Each pair took part in five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. Resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner.

Results: The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification.

Conclusions: The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort provides benefit to hearing-impaired individuals and also to the conversation partners of hearing-impaired individuals. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was, however, more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.


1979 ◽  
Vol 22 (2) ◽  
pp. 236-246 ◽  
Author(s):  
Jeffrey L. Danhauer ◽  
Ruth M. Lawarre

Perceptual patterns in rating dissimilarities among 24 CVs were investigated for a group of normal-hearing and two groups of hearing-impaired subjects (one group with flat, and one group with sloping, sensorineural losses). Stimuli were presented binaurally at most comfortable loudness level and subjects rated the 576 paired stimuli on a 1–7 equal-appearing interval scale. Ratings were submitted to individual group and combined INDSCAL analyses to describe features used by the subjects in their perception of the speech stimuli. Results revealed features such as sibilant, sonorant, plosive and place. Furthermore, normal and hearing-impaired subjects used similar features, and subjects' weightings of features were relatively independent of their audiometric configurations. Results are compared to those of previous studies.


1976 ◽  
Vol 19 (2) ◽  
pp. 279-289 ◽  
Author(s):  
Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or to inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.
