Detection of a temporal gap in low‐frequency narrow‐band signals by normal‐hearing and hearing‐impaired listeners

1986 ◽  
Vol 80 (5) ◽  
pp. 1354-1358 ◽  
Author(s):  
Carol Lee De Filippo ◽  
Karen Black Snell

1981 ◽  
Vol 24 (1) ◽  
pp. 108-112 ◽  
Author(s):  
P. M. Zurek ◽  
C. Formby

Thresholds for frequency modulation were measured with an adaptive, two-alternative forced-choice method in ten listeners: eight with varying degrees of sensorineural hearing impairment and two with normal hearing sensitivity. Results for test frequencies spaced at octave intervals between 125 and 4000 Hz showed that, relative to normal-hearing listeners, the ability of the hearing-impaired listeners to detect sinusoidal frequency modulation (1) is diminished above a certain level of hearing loss, and (2) is more disrupted for low-frequency tones than for high-frequency tones, given the same degree of hearing loss at the test frequency. The first finding is consistent with previous studies that show a general deterioration of frequency-discrimination ability associated with moderate or worse hearing loss. It is proposed that the second finding may be explained (1) by differential impairment of the temporal and place mechanisms presumed to encode pitch at the lower and higher frequencies, respectively, and/or (2), for certain configurations of hearing loss, by an asymmetrical pattern of cochlear excitation that may lead to underestimation, from measurements of threshold sensitivity, of the hearing impairment for low-frequency tones and consequently to relatively large changes in frequency discrimination for small shifts in hearing threshold.
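The abstract names the method (an adaptive 2AFC track) but not its rule or parameters. The sketch below shows one common transformed up-down variant, a two-down/one-up staircase that converges on the 70.7%-correct point, run against a hypothetical simulated listener; the step factor, reversal criterion, and listener model are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch of a two-down/one-up adaptive 2AFC staircase for an FM-detection
# threshold.  All parameters and the simulated listener are illustrative assumptions.
import math
import random

def simulated_listener(depth_hz, threshold_hz=5.0, slope=8.0):
    """Probability of a correct 2AFC response at a given FM depth (hypothetical logistic model)."""
    d = math.log10(depth_hz / threshold_hz)
    p_detect = 1.0 / (1.0 + math.exp(-slope * d))
    return 0.5 + 0.5 * p_detect          # 2AFC guessing floor is 50% correct

def two_down_one_up(start_depth=20.0, step_factor=1.5, n_reversals=10):
    """Track the ~70.7%-correct FM depth; return the mean of the last reversals."""
    depth, correct_in_a_row, direction = start_depth, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = random.random() < simulated_listener(depth)
        if correct:
            correct_in_a_row += 1
            if correct_in_a_row == 2:         # two correct in a row -> harder (smaller depth)
                correct_in_a_row = 0
                if direction == +1:           # change of direction = reversal
                    reversals.append(depth)
                direction = -1
                depth /= step_factor
        else:                                 # one wrong -> easier (larger depth)
            correct_in_a_row = 0
            if direction == -1:
                reversals.append(depth)
            direction = +1
            depth *= step_factor
    last = reversals[-6:]
    return sum(last) / len(last)

if __name__ == "__main__":
    print(f"Estimated FM-detection threshold: {two_down_one_up():.2f} Hz")
```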


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Roland Mühler ◽  
Katrin Mentzel ◽  
Jesko Verhey

This paper describes the estimation of hearing thresholds in normal-hearing and hearing-impaired subjects on the basis of multiple-frequency auditory steady-state responses (ASSRs). The ASSR was measured using two new techniques: (i) adaptive stimulus patterns and (ii) narrow-band chirp stimuli. ASSR thresholds in 16 normal-hearing and 16 hearing-impaired adults were obtained simultaneously at both ears at 500, 1000, 2000, and 4000 Hz, using a multiple-frequency stimulus built up of four one-octave-wide narrow-band chirps with a repetition rate of 40 Hz. A statistical test in the frequency domain was used to detect the response. The recording of the steady-state responses was controlled in eight independent recording channels with an adaptive, semiautomatic algorithm. The average differences between the behavioural hearing thresholds and the ASSR threshold estimates were 10, 8, 13, and 15 dB for test frequencies of 500, 1000, 2000, and 4000 Hz, respectively. The average overall test duration of 18.6 minutes for threshold estimation at the four frequencies and both ears demonstrates the benefit of an adaptive recording algorithm and the efficiency of optimised narrow-band chirp stimuli.
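The abstract mentions "a statistical test in the frequency domain" without specifying it. One widely used choice for 40-Hz ASSR detection is an F-ratio of the power in the bin at the modulation rate to the mean power in neighbouring bins; the sketch below assumes that test, and the window, bin counts, and simulated EEG are illustrative only, not the authors' algorithm.

```python
# Sketch of a frequency-domain response test of the kind the abstract alludes to:
# an F-ratio of power at the stimulation rate against nearby "noise" bins.
import numpy as np
from scipy import stats

def assr_f_test(eeg, fs, mod_rate=40.0, n_noise_bins=60):
    """Return (F, p) for the hypothesis of no response at mod_rate Hz."""
    spectrum = np.fft.rfft(eeg * np.hanning(len(eeg)))
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    signal_bin = np.argmin(np.abs(freqs - mod_rate))
    # neighbouring bins on either side, excluding the signal bin itself
    lo = signal_bin - n_noise_bins // 2
    hi = signal_bin + n_noise_bins // 2 + 1
    noise_bins = [i for i in range(max(lo, 1), min(hi, len(power))) if i != signal_bin]
    f_ratio = power[signal_bin] / np.mean(power[noise_bins])
    # power in a single bin has ~2 degrees of freedom, so the ratio is F-distributed under H0
    p_value = stats.f.sf(f_ratio, 2, 2 * len(noise_bins))
    return f_ratio, p_value

# Example: 4 s of simulated recording containing a weak 40-Hz component in noise
fs = 1000
t = np.arange(0, 4, 1 / fs)
eeg = 0.2 * np.sin(2 * np.pi * 40 * t) + np.random.randn(len(t))
print(assr_f_test(eeg, fs))
```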


1980 ◽  
Vol 23 (3) ◽  
pp. 646-669 ◽  
Author(s):  
Mary Florentine ◽  
Søren Buus ◽  
Bertram Scharf ◽  
Eberhard Zwicker

This study compares frequency selectivity, as measured by four different methods, in observers with normal hearing and in observers with conductive (non-otosclerotic), otosclerotic, noise-induced, or degenerative hearing losses. Each category of loss was represented by a group of 7 to 10 observers, who were tested at center frequencies of 500 Hz and 4000 Hz. For each group, the following four measurements were made: psychoacoustical tuning curves, narrow-band masking, two-tone masking, and loudness summation. Results showed that (a) frequency selectivity was reduced at frequencies where a cochlear hearing loss was present, (b) frequency selectivity was reduced regardless of the test level at which normal-hearing observers and observers with cochlear impairment were compared, (c) all four measures of frequency selectivity were significantly correlated, and (d) reduced frequency selectivity was positively correlated with the amount of cochlear hearing loss.


1988 ◽  
Vol 31 (4) ◽  
pp. 659-669 ◽  
Author(s):  
Glenis R. Long ◽  
John K. Cullen

Estimates of threshold, wide- and narrow-band noise masking, frequency and amplitude modulation detection, gap detection, and rate discrimination were obtained from 10 subjects with near-normal hearing at frequencies above 6 kHz, but severe-to-profound hearing losses at lower frequencies. The same measures were obtained from 10 young control subjects with normal hearing sensitivity for all frequencies up to 16 kHz. The hearing-impaired subjects were able to process sounds in the region of near-normal hearing sensitivity as well as the unimpaired control subjects. Performance in the low-frequency, impaired region depended on the lowest frequency of near-normal hearing sensitivity.


1971 ◽  
Vol 14 (3) ◽  
pp. 496-512 ◽  
Author(s):  
Norman P. Erber

Common words (monosyllables, trochees, spondees) were presented in low-frequency noise to children, who attempted to detect their acoustic patterns or to recognize them under a range of acoustic speech-to-noise (S/N) ratios. Both profoundly deaf (−10 dB) and severely hearing-impaired children (−17 dB) required higher S/N ratios for auditory detection of words than did children with normal hearing (−23 dB). The normal-hearing children (92%) were superior to the severely hearing-impaired group (57%) in auditory recognition of words in noise, while the deaf group (3%) were unable to recognize words by ear alone. The deaf group were poor even at classifying the stimulus words by stress pattern. Provision of acoustic cues increased the audio-visual (AV) scores of the normal-hearing and severely hearing-impaired subjects by 54% and 33%, respectively, above lipreading alone, but it improved the lipreading performance of the profoundly deaf subjects by only 9%. For all groups, improvement in AV recognition depended upon detection of the acoustic cues for speech. The profoundly deaf children achieved their maximum AV scores only at a higher S/N ratio (+5 dB) than the severely hearing-impaired group (0 dB), who in turn required a higher S/N ratio for maximum AV recognition than did the children with normal hearing (−10 dB).


1991 ◽  
Vol 34 (6) ◽  
pp. 1233-1249 ◽  
Author(s):  
David A. Nelson

Forward-masked psychophysical tuning curves (PTCs) were obtained for 1000-Hz probe tones at multiple probe levels from one ear of 26 normal-hearing listeners and from 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Comparisons between normal-hearing and hearing-impaired PTCs were made at equivalent masker levels near the tips of the PTCs. Comparisons were also made of PTC characteristics obtained by fitting each PTC with three straight-line segments using least-squares fitting procedures. Abnormal frequency resolution was revealed only as abnormal downward spread of masking. The low-frequency slopes of PTCs from hearing-impaired listeners were not different from those of normal-hearing listeners. That is, hearing-impaired listeners did not demonstrate abnormal upward spread of masking when equivalent masker levels were compared. Ten hearing-impaired ears demonstrated abnormally broad PTCs, due exclusively to reduced high-frequency slopes in their PTCs. This abnormal downward spread of masking was observed only in listeners with hearing losses greater than 40 dB HL. From these results, it would appear that some, but not all, cochlear hearing losses greater than 40 dB HL influence the sharp tuning capabilities usually associated with outer hair cell function.
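The abstract describes fitting each PTC with three straight-line segments by least squares. A minimal sketch is given below, assuming masker level is regressed on log masker frequency and that the two breakpoints are chosen by exhaustive search over the measured frequencies; the constraints of the authors' fitting procedure are not reproduced, and the example data are hypothetical.

```python
# Minimal sketch: fit a forward-masked PTC with three straight-line segments by
# least squares.  Log-frequency axis and breakpoint grid search are assumptions.
import numpy as np

def fit_three_segments(freqs_hz, masker_levels_db):
    """Grid-search two breakpoints; fit each segment by ordinary least squares."""
    x = np.log10(np.asarray(freqs_hz, dtype=float))
    y = np.asarray(masker_levels_db, dtype=float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = None
    # each of the three segments needs at least two data points
    for i in range(2, len(x) - 4):
        for j in range(i + 2, len(x) - 2):
            sse, fits = 0.0, []
            for xs, ys in ((x[:i], y[:i]), (x[i:j], y[i:j]), (x[j:], y[j:])):
                slope, intercept = np.polyfit(xs, ys, 1)
                sse += np.sum((ys - (slope * xs + intercept)) ** 2)
                fits.append((slope, intercept))
            if best is None or sse < best[0]:
                best = (sse, fits, (x[i], x[j]))
    return best  # (total squared error, per-segment (slope, intercept), log-frequency breakpoints)

# Hypothetical masker levels (dB SPL) around a 1000-Hz probe, for illustration only
freqs = [500, 630, 800, 900, 950, 1000, 1050, 1100, 1250, 1600]
levels = [95, 88, 72, 60, 52, 48, 55, 68, 85, 98]
sse, segments, breakpoints = fit_three_segments(freqs, levels)
print(segments, breakpoints)
```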


1991 ◽  
Vol 34 (6) ◽  
pp. 1397-1409 ◽  
Author(s):  
Carol Goldschmidt Hustedde ◽  
Terry L. Wiley

Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory-Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap between the two groups of hearing-impaired listeners, although the inventory was sensitive to perceived differences in hearing abilities between listeners who did and did not have a hearing loss. Experiment 2 evaluated the consonant error patterns that accounted for the observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) differed across the two subject groups in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability between normal-hearing and hearing-impaired listeners.


1990 ◽  
Vol 33 (2) ◽  
pp. 290-297 ◽  
Author(s):  
Patricia G. Stelmachowicz ◽  
Dawna E. Lewis ◽  
William J. Kelly ◽  
Walt Jesteadt

Two experiments were conducted concerning speech perception in noise. In Experiment 1, adaptive and fixed-level procedures were compared for estimating the S/N ratio at which normal-hearing listeners achieved 50% correct performance for nonsense syllables. The two methods yielded similar S/N ratio estimates, but the consonant confusions found with the fixed-level method could not be predicted accurately from the adaptive procedure. In Experiment 2, the adaptive procedure was used to estimate the S/N ratio for a 50% performance level in low-pass-filtered noise with a range of cutoff frequencies. Data were obtained from 5 normal-hearing listeners at two speech levels (50 and 75 dB SPL) and 4 hearing-impaired listeners at one speech level (75 dB SPL). The hearing-impaired listeners required a better S/N ratio than the normal-hearing listeners at either presentation level for all except the widest bandwidth, where their S/N ratios began to converge with the normal values. In addition, the S/N ratios for the hearing-impaired listeners plateaued at relatively narrow bandwidths (0.75 to 2.5 kHz) compared with the normal-hearing group (3.0 to 5.0 kHz). That is, the addition of high-frequency components to the noise did not alter their performance. These findings suggest that the hearing-impaired listeners may have relied upon either low-frequency cues or prosodic cues in the perception of these test items.
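For the fixed-level side of the comparison, the 50%-correct S/N ratio is typically read off a fitted psychometric function. The sketch below assumes a logistic function and entirely hypothetical percent-correct scores; it is not the authors' fitting method or data.

```python
# Sketch: estimate the S/N ratio for 50% correct from fixed-level data by fitting
# a logistic psychometric function.  Scores and function choice are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr_db, midpoint, slope):
    """Proportion correct as a function of S/N; the midpoint is the 50% point."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))

# hypothetical proportion-correct scores measured at fixed S/N ratios
snr_db = np.array([-15.0, -10.0, -5.0, 0.0, 5.0, 10.0])
p_correct = np.array([0.05, 0.18, 0.42, 0.71, 0.90, 0.97])

(midpoint, slope), _ = curve_fit(logistic, snr_db, p_correct, p0=[0.0, 0.5])
print(f"Estimated S/N ratio at 50% correct: {midpoint:.1f} dB")
```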

