A Clinically Feasible Method for Determining Frequency Resolution

1988 ◽  
Vol 31 (2) ◽  
pp. 299-303 ◽  
Author(s):  
Stephanie A. Davidson ◽  
William Melnick

Psychophysical tuning curves were generated by normally hearing and hearing-impaired subjects using two methods: a detailed laboratory method and a Békésy method proposed as suitable for clinical use. The two methods were compared for stability, the amount of masking produced, and the pattern of the masking functions. The two measures of frequency resolution were found to be equally reliable and showed the same range of repeatability as simple pure-tone thresholds. The patterns of the masking functions were similar regardless of the method used. However, the absolute amounts of masking obtained with each method were significantly different, with more masking obtained when the clinical method was used.

1991 ◽  
Vol 34 (6) ◽  
pp. 1233-1249 ◽  
Author(s):  
David A. Nelson

Forward-masked psychophysical tuning curves (PTCs) were obtained for 1000-Hz probe tones at multiple probe levels from one ear of 26 normal-hearing listeners and from 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Comparisons between normal-hearing and hearing-impaired PTCs were made at equivalent masker levels near the tips of PTCs. Comparisons were also made of PTC characteristics obtained by fitting each PTC with three straight-line segments using least-squares fitting procedures. Abnormal frequency resolution was revealed only as abnormal downward spread of masking. The low-frequency slopes of PTCs from hearing-impaired listeners were not different from those of normal-hearing listeners. That is, hearing-impaired listeners did not demonstrate abnormal upward spread of masking when equivalent masker levels were compared. Ten hearing-impaired ears demonstrated abnormally broad PTCs, due exclusively to reduced high-frequency slopes. This abnormal downward spread of masking was observed only in listeners with hearing losses greater than 40 dB HL. From these results, it would appear that some, but not all, cochlear hearing losses greater than 40 dB HL influence the sharp tuning capabilities usually associated with outer hair cell function.
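The three-segment least-squares fit described above can be sketched numerically. The grid search over two breakpoints and the synthetic PTC values below are illustrative assumptions, not the authors' exact procedure or data.

```python
import numpy as np

def fit_three_segments(x, y):
    """Fit a masking function with three straight-line segments by
    grid-searching the two breakpoints; each segment gets its own
    least-squares line. Returns (breakpoints, slopes, total_sse)."""
    n, best = len(x), None
    for i in range(2, n - 4):           # index of first breakpoint
        for j in range(i + 2, n - 2):   # index of second breakpoint
            sse, slopes = 0.0, []
            for lo, hi in ((0, i), (i, j), (j, n)):
                m, b = np.polyfit(x[lo:hi], y[lo:hi], 1)
                sse += float(np.sum((m * x[lo:hi] + b - y[lo:hi]) ** 2))
                slopes.append(float(m))
            if best is None or sse < best[2]:
                best = ((x[i], x[j]), slopes, sse)
    return best

# Synthetic PTC: masker level (dB) vs. masker frequency in octaves
# re the probe; a shallow low-frequency tail and two steep tip segments.
x = np.linspace(-1.2, 0.4, 17)
y = np.piecewise(x, [x < -0.6, (x >= -0.6) & (x < 0), x >= 0],
                 [lambda x: 68 - 10 * (x + 0.6),   # tail
                  lambda x: 20 - 80 * x,           # low-frequency slope
                  lambda x: 20 + 120 * x])         # high-frequency slope
breakpoints, slopes, sse = fit_three_segments(x, y)
```

On this noiseless example the search recovers the three segment slopes (-10, -80, and 120 dB per octave) that generated the data.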


2021 ◽  
Vol 25 ◽  
pp. 233121652110161
Author(s):  
Michal Fereczkowski ◽  
Torsten Dau ◽  
Ewen N. MacDonald

While an audiogram is a useful method of characterizing hearing loss, it has been suggested that including a complementary, suprathreshold measure, for example, a measure of the status of the cochlear active mechanism, could lead to improved diagnostics and improved hearing-aid fitting for individual listeners. While several behavioral and physiological methods have been proposed to measure cochlear-nonlinearity characteristics, evidence of a good correspondence between them is lacking, at least for hearing-impaired listeners. If this lack of correspondence is due to, for example, the limited reliability of one of these measures, that might explain the limited evidence of benefit from measuring peripheral compression. The aim of this study was to investigate, in a large sample of hearing-impaired listeners, the relation between measures of peripheral-nonlinearity status estimated using two psychoacoustical methods (the notched-noise and temporal-masking curve methods) and otoacoustic emissions. While the relation between the estimates from the notched-noise and otoacoustic-emission experiments was found to be stronger than predicted by the audiogram alone, the relations between these two measures and the temporal-masking-based measure did not show the same pattern; that is, the variance that either measure shared with the temporal-masking-curve-based measure was also shared with the audiogram.


2002 ◽  
Vol 87 (1) ◽  
pp. 122-139 ◽  
Author(s):  
Mark Jude Tramo ◽  
Gaurav D. Shah ◽  
Louis D. Braida

Microelectrode studies in nonhuman primates and other mammals have demonstrated that many neurons in auditory cortex are excited by pure tone stimulation only when the tone's frequency lies within a narrow range of the audible spectrum. However, the effects of auditory cortex lesions in animals and humans have been interpreted as evidence against the notion that neuronal frequency selectivity is functionally relevant to frequency discrimination. Here we report psychophysical and anatomical evidence in favor of the hypothesis that fine-grained frequency resolution at the perceptual level relies on neuronal frequency selectivity in auditory cortex. An adaptive procedure was used to measure difference thresholds for pure tone frequency discrimination in five humans with focal brain lesions and eight normal controls. Only the patient with bilateral lesions of primary auditory cortex and surrounding areas showed markedly elevated frequency difference thresholds: Weber fractions for frequency direction discrimination (“higher”—“lower” pitch judgments) were about eightfold higher than Weber fractions measured in patients with unilateral lesions of auditory cortex, auditory midbrain, or dorsolateral frontal cortex; Weber fractions for frequency change discrimination (“same”—“different” pitch judgments) were about seven times higher. In contrast, pure-tone detection thresholds, difference thresholds for pure tone duration discrimination centered at 500 ms, difference thresholds for vibrotactile intensity discrimination, and judgments of visual line orientation were within normal limits or only mildly impaired following bilateral auditory cortex lesions. In light of current knowledge about the physiology and anatomy of primate auditory cortex and a review of previous lesion studies, we interpret the present results as evidence that fine-grained frequency processing at the perceptual level relies on the integrity of finely tuned neurons in auditory cortex.
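The Weber fractions reported above are simply the smallest detectable frequency change divided by the base frequency. A toy computation (the numbers here are illustrative, not the study's data):

```python
def weber_fraction(delta_f_hz, base_hz):
    """Weber fraction for frequency discrimination: delta_f / f."""
    return delta_f_hz / base_hz

# Illustrative numbers: a listener resolving a 6-Hz change at 1000 Hz
# vs. one needing a 48-Hz change at the same base frequency.
control = weber_fraction(6.0, 1000.0)    # 0.006
impaired = weber_fraction(48.0, 1000.0)  # 0.048
elevation = impaired / control           # an eightfold elevation
```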


1974 ◽  
Vol 17 (2) ◽  
pp. 194-202 ◽  
Author(s):  
Norman P. Erber

A recorded list of 25 spondaic words was administered monaurally through earphones to 72 hearing-impaired children to evaluate their comprehension of “easy” speech material. The subjects ranged in age from 8 to 16 years, and their average pure-tone thresholds (500-1000-2000 Hz) ranged in level from 52 to 127 dB (ANSI, 1969). Most spondee-recognition scores either were high (70 to 100% correct) or low (0 to 30% correct). The degree of overlap in thresholds between the high-scoring and the low-scoring groups differed as a function of the method used to describe the audiogram. The pure-tone average of 500-1000-2000 Hz was a good, but not perfect, predictor of spondee-recognition ability. In general, children with average pure-tone thresholds better than about 85 dB HTL (ANSI, 1969) scored high, and those with thresholds poorer than about 100 dB scored low. Spondee-recognition scores, however, could not be predicted with accuracy for children whose audiograms fell between 85 and 100 dB HTL.
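The pure-tone average used above is the mean of the thresholds at 500, 1000, and 2000 Hz. A sketch of the averaging and of the 85/100-dB prediction rule the abstract reports (the cutoffs come from the abstract; the classification labels are mine):

```python
def pure_tone_average(t500, t1000, t2000):
    """Three-frequency pure-tone average (dB HTL)."""
    return (t500 + t1000 + t2000) / 3.0

def predicted_spondee_recognition(pta_db):
    """Prediction rule reported in the study: PTAs better (lower)
    than ~85 dB HTL predicted high scores, PTAs poorer than ~100 dB
    predicted low scores, and scores in between could not be
    predicted with accuracy."""
    if pta_db < 85:
        return "high"
    if pta_db > 100:
        return "low"
    return "unpredictable"

pta = pure_tone_average(80, 90, 100)   # 90 dB HTL
prediction = predicted_spondee_recognition(pta)
```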


1993 ◽  
Vol 34 (3) ◽  
pp. 210-213 ◽  
Author(s):  
E. Andrew ◽  
T. Haider

The relative risks of adverse drug reactions to ionic versus non-ionic contrast media injected i.v. were compared across different types of trials using odds ratios. The absolute and relative risks found in large post-marketing trials were compared with those found in the iohexol pre-registration trials. The absolute risks were 2 to 10 times higher in the pre-registration trials than in the post-marketing surveillance studies. The relative risk for all adverse drug reactions was 3 to 6 times higher for ionic vs. non-ionic media and was independent of pre- or post-registration study type. The odds ratio appears to be a feasible method of comparing the relative risk of adverse reactions across trials.
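The odds ratio used above is computed from a 2x2 table of event and no-event counts in the two groups. A minimal sketch (the counts below are illustrative, not the study's data):

```python
def odds_ratio(events_a, no_events_a, events_b, no_events_b):
    """Odds ratio of an adverse reaction in group A vs. group B,
    from a 2x2 table of event / no-event counts."""
    return (events_a / no_events_a) / (events_b / no_events_b)

# Illustrative counts: ionic medium, 60 reactions in 1000 injections;
# non-ionic medium, 15 reactions in 1000 injections.
or_ionic_vs_nonionic = odds_ratio(60, 940, 15, 985)
```

For these made-up counts the odds ratio is about 4.2, i.e. within the 3-to-6-fold range the study reports for ionic vs. non-ionic media.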


1985 ◽  
Vol 50 (4) ◽  
pp. 372-377 ◽  
Author(s):  
Patricia G. Stelmachowicz ◽  
Dawna E. Johnson ◽  
Lori L. Larson ◽  
Patrick E. Brookhouser

Changes in auditory threshold, psychophysical tuning curves, and speech perception (in both quiet and noise) were monitored over a 3-hr period following the ingestion of glycerol. All listeners had sensorineural hearing loss secondary to Menière's disease. Findings were characterized by large intersubject variability and in general did not show a clear relation between changes in threshold, frequency resolution, and speech perception.


1981 ◽  
Vol 69 (S1) ◽  
pp. S65-S65
Author(s):  
C. R. Mason ◽  
G. Kidd ◽  
L. L. Feth ◽  
M. A. Corban ◽  
C. A. Binnie ◽  
...  

1999 ◽  
Vol 42 (4) ◽  
pp. 773-784 ◽  
Author(s):  
Christopher W. Turner ◽  
Siu-Ling Chi ◽  
Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. 
One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
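The n-channel spectral-resolution processing described above is, in general form, a channel vocoder: divide the spectrum into bands, extract each band's temporal envelope, and re-impose it on a carrier in the same band. The sketch below uses noise carriers, log-spaced band edges, and fourth-order Butterworth filters as assumptions; it is not the authors' exact signal chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=4000.0):
    """Reduce a signal's spectral resolution to n_channels:
    band-pass filter, take the Hilbert envelope of each band, and
    re-impose that envelope on band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced bands
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))               # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                      # envelope on noise
    return out

# A 200-ms amplitude-modulated tone as a stand-in for speech.
fs = 16000
t = np.arange(int(0.2 * fs)) / fs
speechlike = np.sin(2 * np.pi * 500 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
degraded = noise_vocode(speechlike, fs, n_channels=4)
```

With n_channels = 1 only the broadband temporal envelope survives, which is the condition in which both listener groups performed similarly.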


2005 ◽  
Vol 48 (4) ◽  
pp. 910-921 ◽  
Author(s):  
Laura E. Dreisbach ◽  
Marjorie R. Leek ◽  
Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
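The Schroeder phase selection mentioned above assigns harmonic n the phase ±πn(n+1)/N for a complex of N harmonics, which spreads the energy in time and flattens the acoustic waveform envelope relative to a zero-phase complex. A sketch (the stimulus parameters here are illustrative, not the study's):

```python
import numpy as np

def harmonic_complex(f0, n_harmonics, fs, dur, phase_fn):
    """Sum of equal-amplitude harmonics of f0 with phases phase_fn(n)."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * n * f0 * t + phase_fn(n))
               for n in range(1, n_harmonics + 1))

def schroeder_phase(sign, n_harmonics):
    # Schroeder's rule: phase of harmonic n is +/- pi * n * (n + 1) / N.
    return lambda n: sign * np.pi * n * (n + 1) / n_harmonics

def crest_factor(x):
    """Peak magnitude over RMS: low for flat envelopes, high for peaky ones."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

fs, f0, N = 16000, 100, 30
pos = harmonic_complex(f0, N, fs, 0.05, schroeder_phase(+1, N))
neg = harmonic_complex(f0, N, fs, 0.05, schroeder_phase(-1, N))
zero = harmonic_complex(f0, N, fs, 0.05, lambda n: 0.0)
```

Both Schroeder-phase complexes have much lower crest factors than the zero-phase complex; the positive/negative sign matters mainly for the internal waveform after cochlear dispersion, which is the effect the study probes.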

