Effects of Age and Hearing Loss on the Discrimination of Amplitude and Frequency Modulation for 2- and 10-Hz Rates

2019, Vol. 23, pp. 233121651985396
Author(s): Brian C. J. Moore, Sashi Mariathasan, Aleksander P. Sęk

Detection of frequency modulation (FM) with rate = 10 Hz may depend on conversion of FM to amplitude modulation (AM) in the cochlea, while detection of 2-Hz FM may depend on the use of temporal fine structure (TFS) information. TFS processing may worsen with greater age and hearing loss while AM processing probably does not. A two-stage experiment was conducted to test these ideas while controlling for the effects of detection efficiency. Stage 1 measured psychometric functions for the detection of AM alone and FM alone imposed on a 1-kHz carrier, using 2- and 10-Hz rates. Stage 2 assessed the discrimination of AM from FM at the same modulation rate when the detectability of the AM alone and FM alone was equated. Discrimination was better for the 2-Hz than for the 10-Hz rate for all young normal-hearing subjects and for some older subjects with normal hearing at 1 kHz. Other older subjects with normal hearing showed no clear difference in AM-FM discrimination for the 2- and 10-Hz rates, as was the case for most older hearing-impaired subjects. The results suggest that the ability to use TFS cues is reduced for some older people and most hearing-impaired people.
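The AM and FM stimuli described above can be sketched in a few lines: AM applies a sinusoidal envelope to the carrier, while FM sinusoidally sweeps the instantaneous frequency and integrates it to obtain the phase. This sketch uses the 1-kHz carrier and 2-Hz rate from the study, but the modulation depth and frequency excursion are illustrative values, not taken from the abstract:

```python
import numpy as np

fs = 16000          # sampling rate (Hz)
dur = 1.0           # stimulus duration (s)
t = np.arange(int(fs * dur)) / fs

fc = 1000.0         # 1-kHz carrier, as in the study
fm = 2.0            # modulation rate (Hz); the study also used 10 Hz

# Amplitude modulation: sinusoidal envelope with depth m (illustrative value)
m = 0.3
am_tone = (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: sinusoidal excursion of +/- df around the carrier
# (illustrative value); integrating the instantaneous frequency gives the phase.
df = 20.0
inst_freq = fc + df * np.sin(2 * np.pi * fm * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fm_tone = np.sin(phase)
```

Equating the detectability of the two stimulus types, as in Stage 1 of the experiment, amounts to finding the values of `m` and `df` that sit at the same point on each listener's psychometric function.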

1981, Vol. 24 (1), pp. 108-112
Author(s): P. M. Zurek, C. Formby

Thresholds for frequency modulation were measured by an adaptive, two-alternative, forced-choice method with ten listeners: eight who showed varying degrees of sensorineural hearing impairment, and two with normal hearing sensitivity. Results for test frequencies spaced at octave intervals between 125 and 4000 Hz showed that, relative to normal-hearing listeners, the ability of the hearing-impaired listeners to detect a sinusoidal frequency modulation (1) is diminished above a certain level of hearing loss, and (2) is more disrupted for low-frequency tones than for high-frequency tones, given the same degree of hearing loss at the test frequency. The first finding is consistent with previous studies showing a general deterioration of frequency-discrimination ability associated with moderate, or worse, hearing loss. It is proposed that the second finding may be explained (1) by differential impairment of the temporal and place mechanisms presumed to encode pitch at the lower and higher frequencies, respectively; and/or (2) for certain configurations of hearing loss, by the asymmetrical pattern of cochlear excitation, which may lead to the underestimation, from measurements of threshold sensitivity, of hearing impairment for low-frequency tones and consequently to relatively large changes in frequency discrimination for small shifts in hearing threshold.


2014, Vol. 57 (5), pp. 1961-1971
Author(s): Marianna Vatti, Sébastien Santurette, Niels Henrik Pontoppidan, Torsten Dau

Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM cues. Method: Vibrato maps were obtained in 14 NH and 12 HI listeners with different degrees of musical experience. The FM rate and FM excursion of a synthesized vowel, to which coherent FM was applied, were adjusted until a singing voice emerged. Results: In NH listeners, adding FM to the steady vowel components produced perception of a singing voice for FM rates between 4.1 and 7.5 Hz and FM excursions between 17 and 83 cents on average. In contrast, HI listeners showed substantially broader vibrato maps. Individual differences in map boundaries were, overall, not correlated with audibility or frequency selectivity at the vowel fundamental frequency, with no clear effect of musical experience. Conclusion: Overall, it was shown that hearing loss affects the perception of a sung vowel based on FM-rate and FM-excursion cues, possibly due to deficits in FM detection or discrimination or to a degraded ability to follow the rate of frequency changes.
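Coherent FM of the kind applied here gives every harmonic the same relative excursion contour, expressed in cents around its nominal frequency. A minimal sketch with an illustrative fundamental and equal harmonic amplitudes (the study's vowel-synthesis details are not given in the abstract); the rate and excursion chosen sit inside the NH ranges reported:

```python
import numpy as np

fs = 16000
t = np.arange(int(fs * 1.0)) / fs

f0 = 220.0                  # illustrative fundamental (Hz)
rate = 5.5                  # vibrato rate (Hz), within the 4.1-7.5 Hz NH range
excursion_cents = 50.0      # vibrato excursion, within the 17-83 cent NH range

# Coherent FM: all harmonics follow the same sinusoidal excursion in cents.
cents = excursion_cents * np.sin(2 * np.pi * rate * t)
ratio = 2.0 ** (cents / 1200.0)      # convert cents to a frequency ratio

vowel = np.zeros_like(t)
for k in range(1, 11):               # first 10 harmonics, equal amplitudes
    inst_freq = k * f0 * ratio       # harmonic k tracks the common contour
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    vowel += np.sin(phase) / 10
```

In the actual experiment, `rate` and `excursion_cents` were the two dimensions adjusted by listeners to map out where a singing voice emerged.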


2019, Vol. 23, pp. 233121651988761
Author(s): Gilles Courtois, Vincent Grimaldi, Hervé Lissek, Philippe Estoppey, Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.


1984, Vol. 27 (1), pp. 12-19
Author(s): Shlomo Silman, Carol Ann Silverman, Theresa Showers, Stanley A. Gelfand

The effect of age on accuracy of prediction of hearing impairment with the bivariate-plotting procedure was investigated in 72 normal-hearing subjects aged 20–69 years and in 86 sensorineural hearing-impaired subjects aged 20–83 years. The predictive accuracy with the bivariate-plotting procedure improved markedly when the data from subjects over 44 years of age were excluded from the bivariate plot. The predictive accuracy improved further when the construction of the line segments in the traditional bivariate plot was modified.


1999, Vol. 42 (4), pp. 773-784
Author(s): Christopher W. Turner, Siu-Ling Chi, Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
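Processing speech into a small number of spectral channels is commonly implemented as a channel (noise) vocoder: split the signal into frequency bands, extract each band's temporal envelope, and use it to modulate band-limited noise, discarding the spectral detail within each band. A minimal numpy sketch under assumed parameters (ideal FFT band-splitting, log-spaced channel edges from 100 Hz to 6 kHz); the study's actual filterbank design is not specified in the abstract:

```python
import numpy as np

def analytic(x):
    # Analytic signal via the FFT (a numpy-only Hilbert transform)
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def bandpass_fft(x, fs, lo, hi):
    # Ideal band-pass: zero all FFT bins outside [lo, hi] (illustrative only)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

def noise_vocode(x, fs, n_channels, f_lo=100.0, f_hi=6000.0):
    # Keep each band's temporal envelope, discard its spectral fine structure
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.abs(analytic(bandpass_fft(x, fs, lo, hi)))   # band envelope
        out += env * bandpass_fft(noise, fs, lo, hi)          # noise carrier
    return out
```

With `n_channels = 1` only the broadband temporal envelope survives, which is the condition in which both listener groups performed similarly.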


2005, Vol. 48 (4), pp. 910-921
Author(s): Laura E. Dreisbach, Marjorie R. Leek, Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
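The positive- and negative-Schroeder stimuli referred to above are harmonic complexes whose component phases follow Schroeder's (1970) formula, phi_n = ±pi·n(n+1)/N; the two sign choices give identical amplitude spectra but opposite within-period frequency sweeps, and hence different waveform envelopes after cochlear filtering. A sketch with an illustrative fundamental and component count (the study's exact stimulus parameters are not given in the abstract):

```python
import numpy as np

fs = 16000
dur = 0.5
t = np.arange(int(fs * dur)) / fs

f0 = 100.0        # illustrative fundamental (Hz)
N = 40            # illustrative number of harmonics

def schroeder_complex(sign):
    # Component phases phi_n = sign * pi * n * (n + 1) / N yield a
    # low-crest-factor waveform whose instantaneous frequency sweeps
    # across each period; the sign sets the sweep direction.
    x = np.zeros_like(t)
    for n in range(1, N + 1):
        phi = sign * np.pi * n * (n + 1) / N
        x += np.sin(2 * np.pi * n * f0 * t + phi)
    return x

pos = schroeder_complex(+1)   # "positive Schroeder" stimulus
neg = schroeder_complex(-1)   # "negative Schroeder" stimulus
```

Because only the phases differ, any perceptual difference between the two stimuli isolates the listener's sensitivity to temporal envelope shape rather than to the long-term spectrum.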


2005, Vol. 16 (04), pp. 250-261
Author(s): Samantha M. Lewis, Michael Valente, Jane Enrietto Horn, Carl Crandell

Hearing impairment has been associated with decline in psychosocial function. Previous investigations have reported that the utilization of hearing aids can ameliorate these reductions in psychosocial function. To date, few investigations have examined the effects of frequency modulation technology on hearing handicap, adjustment to hearing loss, and communicative strategies. The purpose of this investigation was to examine these effects and to compare them to the benefits obtained when using hearing aids alone. Subjects ranged in age from 34 to 81 years and had mean pure-tone thresholds consistent with a bilateral moderate to severe sloping sensorineural hearing loss. All subjects wore hearing aids only and hearing aids plus FM system in a randomized fashion. The Communication Profile for the Hearing Impaired (CPHI) was administered prior to fitting the study devices and once a month for three months in each of the two conditions. A statistically significant difference between device conditions was obtained for the Importance of Communication in Work Situations subscale. Additionally, statistically significant differences over time were noted in several CPHI subscales. Despite statistical significance, none of these results were clinically significant. The implications of these results will be discussed.


Author(s): Elina Nirgianaki, Maria Bitzanaki

The present study investigates the acoustic characteristics of Greek vowels produced by hearing-impaired children with profound prelingual hearing loss and cochlear implants. The results revealed a significant difference between vowels produced by hearing-impaired children and those produced by normal-hearing ones in terms of duration. Stressed vowels were significantly longer than non-stressed for both groups, while F0, F1, and F2 did not differ significantly between the two groups for any vowel, with the exception of /a/, which had significantly higher F1 when produced by hearing-impaired children. Acoustic vowel spaces were similar for the two groups but shifted towards higher frequencies in the low-high dimension and somewhat reduced in the front-back dimension for the hearing-impaired group.


2021
Author(s): Marlies Gillis, Lien Decruy, Jonas Vanthornhout, Tom Francart

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers.

Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed.

Our results support the idea that neural response latency indicates the efficiency of neural speech processing. Hearing-impaired listeners process speech in silence less efficiently than normal-hearing listeners. Our results suggest that this reduction in neural speech processing efficiency is a gradual effect that occurs as hearing deteriorates. Moreover, the efficiency of neural speech processing in hearing-impaired listeners is already at its lowest level when listening to speech in quiet, while normal-hearing listeners show a further decrease in efficiency when the noise level increases.

From our results, it is apparent that sound amplification does not solve hearing loss. Even when intelligibility is apparently perfect, hearing-impaired listeners process speech less efficiently.
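The forward modelling approach referred to here typically estimates a temporal response function (TRF): the neural response is regressed onto time-lagged copies of a stimulus feature such as the speech envelope, and the lag of the resulting peak gives the response latency. A minimal ridge-regression sketch of that general idea; the actual features, lag range, and regularization used in the study are assumptions here:

```python
import numpy as np

def trf_ridge(stimulus, response, fs, t_min=-0.1, t_max=0.4, lam=1.0):
    # Forward model: response(t) ~ sum_k w_k * stimulus(t - lag_k).
    # Build a design matrix of time-lagged stimulus copies and solve the
    # ridge-regularized normal equations. Circular shifts are used at the
    # edges for simplicity (a real pipeline would trim or zero-pad).
    lags = np.arange(int(t_min * fs), int(t_max * fs) + 1)
    X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ response)
    return lags / fs, w    # lag times (s) and TRF weights
```

On simulated data where the "neural" response is just a delayed copy of the stimulus plus noise, the recovered TRF peaks at the imposed delay, which is the sense in which the peak latency indexes processing delay.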

