Frequency resolution at equivalent sound‐pressure levels in normal‐hearing and hearing‐impaired listeners

1988 ◽  
Vol 83 (S1) ◽  
pp. S75-S75 ◽  
Author(s):  
David A. Nelson


2020 ◽  
Vol 63 (6) ◽  
pp. 2016-2026
Author(s):  
Tamara R. Almeida ◽  
Clayton H. Rocha ◽  
Camila M. Rabelo ◽  
Raquel F. Gomes ◽  
Ivone F. Neves-Lobo ◽  
...  

Purpose The aims of this study were to characterize hearing symptoms, habits, and sound pressure levels (SPLs) of personal audio systems (PAS) used by young adults; estimate the risk of developing hearing loss and assess whether instructions given to users led to behavioral changes; and propose recommendations for PAS users. Method A cross-sectional study was performed in 50 subjects with normal hearing. Procedures included a questionnaire and measurement of PAS SPLs (real ear and manikin) through the users' own headphones and devices while they listened to four songs. After 1 year, 30 subjects answered questions about their usage habits. For the statistical analysis, one-way analysis of variance, Tukey's post hoc test, Lin and Spearman coefficients, the chi-square test, and logistic regression were used. Results Most subjects listened to music every day, usually in noisy environments. Sixty percent of the subjects reported hearing symptoms after using a PAS. Substantial variability in the equivalent music listening level (Leq) was noted (M = 84.7 dBA; min = 65.1 dBA; max = 97.5 dBA). A significant difference was found only in the 4-kHz band when comparing the real-ear and manikin techniques. Based on the Leq, 38% of the individuals exceeded the maximum daily time allowance. Comparison of the subjects according to the maximum allowed daily exposure time revealed a higher number of hearing complaints from people with greater exposure. After 1 year, 43% of the subjects reduced their usage time, and 70% reduced the volume. A volume not exceeding 80% was recommended, and at this volume, the maximum usage time should be 160 min. Conclusions The habit of listening to music at high intensities on a daily basis seems to cause hearing symptoms, even in individuals with normal hearing. The real-ear and manikin techniques produced similar results. Providing instructions on this topic combined with measuring PAS SPLs may be an appropriate strategy for raising the awareness of people who are at risk. Supplemental Material https://doi.org/10.23641/asha.12431435
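The "maximum daily time allowance" referred to above follows from an equal-energy damage-risk rule: every 3-dB increase in Leq halves the permissible listening time. A minimal sketch of that arithmetic, assuming the common 85 dBA / 8-hour criterion with a 3-dB exchange rate (the abstract does not state which criterion the authors applied):

def allowed_daily_minutes(leq_dba, criterion_dba=85.0, criterion_hours=8.0,
                          exchange_rate_db=3.0):
    """Maximum daily listening time (minutes) for a measured Leq.

    Equal-energy rule: each exchange_rate_db above the criterion level
    halves the allowed exposure time (assumed criterion: 85 dBA for 8 h).
    """
    halvings = (leq_dba - criterion_dba) / exchange_rate_db
    return criterion_hours * 60.0 / (2.0 ** halvings)

# Mean Leq reported in the abstract (84.7 dBA):
print(round(allowed_daily_minutes(84.7)))   # ~514 min under these assumptions
print(round(allowed_daily_minutes(97.5)))   # the loudest listener: ~27 min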


1999 ◽  
Vol 42 (4) ◽  
pp. 773-784 ◽  
Author(s):  
Christopher W. Turner ◽  
Siu-Ling Chi ◽  
Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
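The 1-, 2-, 4-, and 8-channel conditions correspond to channel-vocoder processing in which each band's temporal envelope modulates a band-limited carrier. Below is a minimal noise-vocoder sketch; the corner frequencies, filter orders, and envelope smoothing are illustrative assumptions, not the parameters used by Turner et al.:

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=6000.0):
    """Reduce speech to n_channels of envelope-modulated, band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced band edges (f_hi < fs/2)
    noise = np.random.randn(len(speech))
    out = np.zeros_like(speech)
    lp = butter(2, 50.0, btype="low", fs=fs, output="sos")   # envelope smoother
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = sosfiltfilt(lp, np.abs(hilbert(sosfiltfilt(band, speech))))
        out += env * sosfiltfilt(band, noise)                # noise carrier per band
    return out

# e.g., for a recorded waveform `speech` sampled at `fs`:
#   one_channel = noise_vocode(speech, fs, 1)
#   eight_channel = noise_vocode(speech, fs, 8)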


2005 ◽  
Vol 48 (4) ◽  
pp. 910-921 ◽  
Author(s):  
Laura E. Dreisbach ◽  
Marjorie R. Leek ◽  
Jennifer J. Lentz

The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.
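The stimuli are harmonic complexes whose component phases follow Schroeder's algorithm and whose spectral peak is produced by raising the level of a few consecutive harmonics above a flat background. A minimal sketch, assuming the commonly used phase rule theta_n = +/- pi*n*(n+1)/N; the fundamental frequency, number of components, and peak placement below are illustrative, not taken from the study:

import numpy as np

def schroeder_complex(f0, n_harmonics, fs, dur, sign=+1,
                      peak_harmonics=(), peak_gain_db=0.0):
    """Harmonic complex with positive (+1) or negative (-1) Schroeder phases.

    peak_harmonics: harmonic numbers raised by peak_gain_db to form a single
    spectral peak against an otherwise flat background.
    """
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        phase = sign * np.pi * n * (n + 1) / n_harmonics
        amp = 10.0 ** (peak_gain_db / 20.0) if n in peak_harmonics else 1.0
        x += amp * np.sin(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))

# Illustrative pair: 100-Hz fundamental, peak near 2 kHz (harmonics 19-21)
fs = 44100
pos = schroeder_complex(100, 40, fs, 0.5, sign=+1,
                        peak_harmonics=(19, 20, 21), peak_gain_db=10)
neg = schroeder_complex(100, 40, fs, 0.5, sign=-1,
                        peak_harmonics=(19, 20, 21), peak_gain_db=10)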


1991 ◽  
Vol 34 (6) ◽  
pp. 1436-1438 ◽  
Author(s):  
Richard H. Wilson ◽  
John P. Preece ◽  
Courtney S. Crowther

The NU No. 6 materials spoken by a female speaker were passed through a notch filter centered at 247 Hz with a 34-dB depth. The filtering reduced the amplitude range within the spectrum of the materials by 10 dB, which was reflected as a 7.5-VU reduction measured on a true VU meter. Thus, the notch filtering in effect changed the level calibration of the materials. Psychometric functions for the NU No. 6 materials, filtered and unfiltered, in 60-dB SPL broadband noise were obtained from 12 listeners with normal hearing. Although the slopes of the functions for the two conditions were the same, the functions were displaced by an average of 5.8 dB, with the function for the filtered materials located at the lower sound-pressure levels.
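The recalibration issue arises because removing energy in a deep, narrow notch lowers the overall level of the materials. A minimal sketch of measuring that before/after level change, using a standard second-order IIR notch and low-pass noise as stand-ins for the authors' filter and the recorded speech:

import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, lfilter

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

fs = 11025                                     # assumed sampling rate
rng = np.random.default_rng(0)
# Crude stand-in for recorded speech: noise with most of its energy below 500 Hz
lowpass = butter(2, 500.0, btype="low", fs=fs, output="sos")
speech_like = sosfiltfilt(lowpass, rng.standard_normal(2 * fs))

b, a = iirnotch(w0=247.0, Q=2.0, fs=fs)        # narrow notch centered at 247 Hz
filtered = lfilter(b, a, speech_like)
print(rms_db(speech_like) - rms_db(filtered))  # overall level reduction in dB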


1992 ◽  
Vol 35 (2) ◽  
pp. 436-442 ◽  
Author(s):  
John P. Madden ◽  
Lawrence L. Feth

This study compares the temporal resolution of frequency-modulated sinusoids by normal-hearing and hearing-impaired subjects in a discrimination task. One signal increased linearly by 200 Hz in 50 msec. The other was identical except that its trajectory followed a series of discrete steps. Center frequencies were 500, 1000, 2000, and 4000 Hz. As the number of steps was increased, the duration of the individual steps decreased, and the subjects’ discrimination performance monotonically decreased to chance. It was hypothesized that the listeners could not temporally resolve the trajectory of the step signals at short step durations. At equal sensation levels, and at equal sound pressure levels, temporal resolution was significantly reduced for the impaired subjects. The difference between groups was smaller in the equal sound pressure level condition. Performance was much poorer at 4000 Hz than at the other test frequencies in all conditions because of poorer frequency discrimination at that frequency.
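The discrimination contrast is between a smooth linear glide and a stepped approximation of the same frequency trajectory. A minimal sketch generating both signals; the 200-Hz extent and 50-ms duration come from the abstract, while the sampling rate and step count are illustrative:

import numpy as np

def glide(fc, extent_hz, dur, fs, n_steps=None):
    """Tone gliding from fc to fc + extent_hz; stepped trajectory if n_steps is given."""
    t = np.arange(int(dur * fs)) / fs
    inst_f = fc + extent_hz * t / dur                     # smooth linear trajectory
    if n_steps is not None:
        step = np.floor(n_steps * t / dur)                # which plateau each sample is on
        inst_f = fc + extent_hz * (step + 0.5) / n_steps  # discrete frequency plateaus
    phase = 2 * np.pi * np.cumsum(inst_f) / fs            # integrate instantaneous frequency
    return np.sin(phase)

fs = 22050
smooth = glide(1000, 200, 0.050, fs)              # 200-Hz rise in 50 ms (from the abstract)
stepped = glide(1000, 200, 0.050, fs, n_steps=4)  # same trajectory in 4 discrete steps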


1976 ◽  
Vol 19 (1) ◽  
pp. 48-54 ◽  
Author(s):  
Ronald L. Cohen ◽  
Robert W. Keith

This study attempted to determine whether word-recognition scores obtained in noise were more sensitive to the presence of a hearing loss than recognition scores obtained in quiet. Subjects with normal hearing, high-frequency cochlear hearing loss, and flat cochlear hearing loss were tested in quiet and in the presence of a 500-Hz low-pass noise. Two signal-to-noise conditions were employed, −4 dB and −12 dB. Words were presented at 40 dB SL in one experiment and at 96 dB SPL for normal-hearing subjects in a second experiment. The results indicated that, while the word-recognition scores of groups were similar in quiet, the more negative the signal-to-noise ratio, the greater the separation of group scores, with hearing-impaired subjects having poorer recognition scores than normal-hearing subjects. When the speech and noise were presented at high SPLs, however, the normal-hearing subjects had poorer word recognition than those with flat cochlear losses. The results are interpreted as indicating greater spread of masking in normal-hearing than hearing-impaired subjects at high sound pressure levels.
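Setting the -4 and -12 dB conditions amounts to fixing the speech level and scaling the low-pass noise to the target signal-to-noise ratio. A minimal sketch of that scaling; the 500-Hz noise cutoff follows the abstract, and the placeholder signals are illustrative:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

fs = 22050
speech = np.random.randn(fs)                        # placeholder for a test word
lowpass = butter(4, 500.0, btype="low", fs=fs, output="sos")
noise = sosfiltfilt(lowpass, np.random.randn(fs))   # 500-Hz low-pass masking noise
mixed_minus4 = mix_at_snr(speech, noise, -4)
mixed_minus12 = mix_at_snr(speech, noise, -12)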


2017 ◽  
Vol 28 (05) ◽  
pp. 395-403 ◽  
Author(s):  
Megan L. A. Thomas ◽  
Denis Fitzpatrick ◽  
Ryan McCreery ◽  
Kristen L. Janky

Background: Cervical and ocular vestibular-evoked myogenic potentials (VEMPs) have become common clinical vestibular assessments. However, VEMP testing requires high-intensity stimuli, raising concerns regarding safety with children, where sound pressure levels may be higher due to their smaller ear canal volumes. Purpose: The purpose of this study was to estimate the range of peak-to-peak equivalent sound pressure levels (peSPLs) in child and adult ears in response to high-intensity stimuli (i.e., 100 dB normal hearing level [nHL]) commonly used for VEMP testing and make a determination of whether acoustic stimulus levels used in VEMP testing are safe for use in children. Research Design: Prospective experimental. Study Sample: Ten children (4–6 years) and ten young adults (24–35 years) with normal hearing sensitivity and middle ear function participated in the study. Data Collection and Analysis: Probe microphone peSPL measurements of clicks and 500 Hz tonebursts (TBs) were recorded in tubes of small, medium, and large diameter, and in a Brüel & Kjær Ear Simulator Type 4157 to assess for linearity of the stimulus at high levels. The different diameter tubes were used to approximate the range of cross-sectional areas in infant, child, and adult ears, respectively. Equivalent ear canal volume and peSPL measurements were then recorded in child and adult ears. Lower intensity levels were used in the participants’ ears to limit exposure to high-intensity sound. The peSPL measurements in participant ears were extrapolated using predictions from linear mixed models to determine if equivalent ear canal volume significantly contributed to overall peSPL and to estimate the mean and 95% confidence intervals of peSPLs in child and adult ears when high-intensity stimulus levels (100 dB nHL) are used for VEMP testing without exposing subjects to high-intensity stimuli. Results: Measurements from the coupler and tubes suggested: 1) each stimulus was linear, 2) there were no distortions or nonlinearities at high levels, and 3) peSPL increased with decreased tube diameter. Measurements in participant ears suggested: 1) peSPL was approximately 3 dB larger in child compared to adult ears, and 2) peSPL was larger in response to clicks compared to 500 Hz TBs. The model predicted the following 95% confidence interval for a 100 dB nHL click: 127–136.5 dB peSPL in adult ears and 128.7–138.2 dB peSPL in child ears. The model predicted the following 95% confidence interval for a 100 dB nHL 500 Hz TB stimulus: 122.2–128.2 dB peSPL in adult ears and 124.8–130.8 dB peSPL in child ears. Conclusions: Our findings suggest that 1) when completing VEMP testing, the stimulus is approximately 3 dB higher in a child’s ear, 2) a 500 Hz TB is recommended over a click as it has lower peSPL compared to the click, and 3) both duration and intensity should be considered when choosing VEMP stimuli. Calculating the total sound energy exposure for your chosen stimuli is recommended, as it accounts for both duration and intensity. When using this calculation for children, consider adding 3 dB to the stimulus level.
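Peak-to-peak equivalent SPL is conventionally obtained by matching the peak-to-peak amplitude of the transient to that of a calibration pure tone of known SPL. A minimal sketch of that calculation; the calibration level, sampling rate, and toy click below are hypothetical placeholders, not values from the study:

import numpy as np

def pe_spl(transient, ref_tone, ref_spl_db):
    """Peak-to-peak equivalent SPL of a transient.

    transient : probe-microphone waveform of the click or toneburst
    ref_tone  : calibration pure-tone waveform recorded in the same units
    ref_spl_db: known SPL of the calibration tone (dB SPL)
    """
    pp_sig = np.max(transient) - np.min(transient)
    pp_ref = np.max(ref_tone) - np.min(ref_tone)
    return ref_spl_db + 20 * np.log10(pp_sig / pp_ref)

# Hypothetical example: 1-kHz calibration tone at 94 dB SPL and a toy click
fs = 48000
t = np.arange(fs // 10) / fs
ref = np.sin(2 * np.pi * 1000 * t)
click = np.zeros(fs // 100)
click[10], click[11] = 2.5, -2.5
print(pe_spl(click, ref, 94.0))   # ~102 dB peSPL for this toy example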


1991 ◽  
Vol 34 (6) ◽  
pp. 1233-1249 ◽  
Author(s):  
David A. Nelson

Forward-masked psychophysical tuning curves (PTCs) were obtained for 1000-Hz probe tones at multiple probe levels from one ear of 26 normal-hearing listeners and from 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Comparisons between normal-hearing and hearing-impaired PTCs were made at equivalent masker levels near the tips of PTCs. Comparisons were also made of PTC characteristics obtained by fitting each PTC with three straight-line segments using least-squares fitting procedures. Abnormal frequency resolution was revealed only as abnormal downward spread of masking. The low-frequency slopes of PTCs from hearing-impaired listeners were not different from those of normal-hearing listeners. That is, hearing-impaired listeners did not demonstrate abnormal upward spread of masking when equivalent masker levels were compared. Ten hearing-impaired ears demonstrated abnormally broad PTCs, due exclusively to reduced high-frequency slopes in their PTCs. This abnormal downward spread of masking was observed only in listeners with hearing losses greater than 40 dB HL. From these results, it would appear that some, but not all, cochlear hearing losses greater than 40 dB HL influence the sharp tuning capabilities usually associated with outer hair cell function.
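One way to realize the three-segment straight-line fit described above is to treat the two breakpoints and three slopes as free parameters in a least-squares fit over log frequency. A minimal sketch with hypothetical PTC data; the parameterization and starting values are illustrative and not necessarily those used by Nelson:

import numpy as np
from scipy.optimize import curve_fit

def three_segment(log_f, b1, b2, s1, s2, s3, y_b1):
    """Three straight-line segments over log frequency with breakpoints b1 < b2."""
    seg1 = y_b1 + s1 * (log_f - b1)                    # low-frequency tail
    seg2 = y_b1 + s2 * (log_f - b1)                    # segment around the tip
    seg3 = y_b1 + s2 * (b2 - b1) + s3 * (log_f - b2)   # high-frequency side
    return np.where(log_f < b1, seg1, np.where(log_f < b2, seg2, seg3))

# Hypothetical masker levels (dB SPL) at masker frequencies (Hz) for a 1000-Hz probe
f = np.array([500, 630, 800, 900, 950, 1000, 1050, 1100, 1200])
level = np.array([92, 80, 62, 50, 45, 42, 55, 70, 88])
p0 = [np.log10(900), np.log10(1020), -120, -150, 300, 48]   # rough starting values
params, _ = curve_fit(three_segment, np.log10(f), level, p0=p0, maxfev=20000)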


1984 ◽  
Vol 27 (1) ◽  
pp. 145-154 ◽  
Author(s):  
Joseph W. Hall ◽  
Richard S. Tyler ◽  
Mariano A. Fernandes

The masking level difference (MLD) at 500 Hz was examined in wide-band (960 Hz) and narrow-band (50 Hz) noise for normal-hearing subjects and subjects with symmetrical mild-to-moderate cochlear hearing loss. Monaural tasks of intensity discrimination, temporal resolution, and frequency resolution were performed in order to examine relationships between monaural dysfunction and MLD performance. Interaural time discrimination for a 500-Hz pure tone also was examined. The performance of the hearing-impaired subjects was poorer than that of the normal-hearing subjects for MLD, interaural Δt, and most monaural tasks. However, no significant relationships were found between monaural and MLD performance when effects of threshold were taken into account. MLDs were more reduced in wide-band noise than in narrow-band noise for the hearing-impaired subjects (when contrasted with normal-hearing subjects). MLD performance was correlated with interaural time discrimination, and it is suggested that one reason for poor MLD performance with hearing impairment may be poor temporal coding of stimulus fine structure.

