Effects of temporal smearing on temporal resolution, frequency selectivity, and speech intelligibility

1994 ◽  
Vol 96 (3) ◽  
pp. 1325-1340 ◽  
Author(s):  
Zezhang Hou ◽  
Chaslav V. Pavlovic


1985 ◽  
Vol 28 (2) ◽  
pp. 197-206 ◽  
Author(s):  
Jill Preminger ◽  
Terry L. Wiley

The relations between frequency selectivity and consonant intelligibility were investigated in subjects with sensorineural hearing loss in an attempt to derive predictive indices. Three matched pairs of subjects with similar audiometric configurations (high-frequency, flat, or low-frequency hearing loss) but significantly different word-intelligibility scores were tested. Characteristics of psychophysical tuning curves (PTCs) for high- and low-frequency probes were compared with speech-intelligibility performance for high- and low-frequency consonant-vowel syllables. Frequency-specific relations between PTC characteristics and consonant-intelligibility performance were observed in the subject pairs with high-frequency and flat sensorineural hearing loss. Corresponding results for the subject pair with low-frequency sensorineural hearing loss were equivocal.


2013 ◽  
Vol 280 (1751) ◽  
pp. 20122296 ◽  
Author(s):  
Megan D. Gall ◽  
Therese S. Salameh ◽  
Jeffrey R. Lucas

Many species of songbirds exhibit dramatic seasonal variation in song output. Recent evidence suggests that seasonal changes in auditory processing are coincident with seasonal variation in vocal output. Here, we show, for the first time, that frequency selectivity and temporal resolution of the songbird auditory periphery change seasonally and in a sex-specific manner. Male and female house sparrows (Passer domesticus) did not differ in their frequency sensitivity during the non-breeding season, nor did they differ in their temporal resolution. By contrast, female house sparrows showed enhanced frequency selectivity during the breeding season, which was matched by a concomitant reduction of temporal resolution. However, males failed to show seasonal plasticity in either of these auditory properties. We discuss potential mechanisms generating these seasonal patterns and the implications of sex-specific seasonal changes in auditory processing for vocal communication.


1992 ◽  
Vol 91 (1) ◽  
pp. 293-305 ◽  
Author(s):  
C. Formby ◽  
L. N. Morgan ◽  
T. G. Forrest ◽  
J. J. Raney

2013 ◽  
Vol 24 (04) ◽  
pp. 307-328 ◽  
Author(s):  
Joshua G.W. Bernstein ◽  
Van Summers ◽  
Elena Grassi ◽  
Ken W. Grant

Background: Hearing-impaired (HI) individuals with similar ages and audiograms often demonstrate substantial differences in speech-reception performance in noise. Traditional models of speech intelligibility focus primarily on average performance for a given audiogram, failing to account for differences between listeners with similar audiograms. Improved prediction accuracy might be achieved by simulating differences in the distortion that speech may undergo when processed through an impaired ear. Although some attempts to model particular suprathreshold distortions can explain general speech-reception deficits not accounted for by audibility limitations, little has been done to model suprathreshold distortion and predict speech-reception performance for individual HI listeners. Auditory-processing models incorporating individualized measures of auditory distortion, along with audiometric thresholds, could provide a more complete understanding of speech-reception deficits by HI individuals. A computational model capable of predicting individual differences in speech-recognition performance would be a valuable tool in the development and evaluation of hearing-aid signal-processing algorithms for enhancing speech intelligibility. Purpose: This study investigated whether biologically inspired models simulating peripheral auditory processing for individual HI listeners produce more accurate predictions of speech-recognition performance than audiogram-based models. Research Design: Psychophysical data on spectral and temporal acuity were incorporated into individualized auditory-processing models consisting of three stages: a peripheral stage, customized to reflect individual audiograms and spectral and temporal acuity; a cortical stage, which extracts spectral and temporal modulations relevant to speech; and an evaluation stage, which predicts speech-recognition performance by comparing the modulation content of clean and noisy speech. 
To investigate the impact of different aspects of peripheral processing on speech predictions, individualized details (absolute thresholds, frequency selectivity, spectrotemporal modulation [STM] sensitivity, compression) were incorporated progressively, culminating in a model simulating level-dependent spectral resolution and dynamic-range compression. Study Sample: Psychophysical and speech-reception data from 11 HI and six normal-hearing listeners were used to develop the models. Data Collection and Analysis: Eleven individualized HI models were constructed and validated against psychophysical measures of threshold, frequency resolution, compression, and STM sensitivity. Speech-intelligibility predictions were compared with measured performance in stationary speech-shaped noise at signal-to-noise ratios (SNRs) of −6, −3, 0, and 3 dB. Prediction accuracy for the individualized HI models was compared to the traditional audibility-based Speech Intelligibility Index (SII). Results: Models incorporating individualized measures of STM sensitivity yielded significantly more accurate within-SNR predictions than the SII. Additional individualized characteristics (frequency selectivity, compression) improved the predictions only marginally. A nonlinear model including individualized level-dependent cochlear-filter bandwidths, dynamic-range compression, and STM sensitivity predicted performance more accurately than the SII but was no more accurate than a simpler linear model. Predictions of speech-recognition performance simultaneously across SNRs and individuals were also significantly better for some of the auditory-processing models than for the SII. Conclusions: A computational model simulating individualized suprathreshold auditory-processing abilities produced more accurate speech-intelligibility predictions than the audibility-based SII. Most of this advantage was realized by a linear model incorporating audiometric and STM-sensitivity information. 
Although more consistent with known physiological aspects of auditory processing, modeling level-dependent changes in frequency selectivity and gain did not result in more accurate predictions of speech-reception performance.
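The evaluation stage described in this abstract predicts speech-recognition performance by comparing the modulation content of clean and noisy speech. A minimal sketch of that idea, assuming a log-magnitude spectrogram followed by a 2-D Fourier transform as a stand-in for the joint spectrotemporal modulation representation (all function names, window sizes, and the correlation-based similarity score here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def modulation_content(signal, fs, win=0.025, hop=0.010):
    """Log-magnitude spectrogram, then a 2-D FFT whose magnitude serves as
    a crude joint spectral/temporal modulation spectrum (assumption)."""
    n = int(win * fs)
    h = int(hop * fs)
    frames = np.array([signal[i:i + n] * np.hanning(n)
                       for i in range(0, len(signal) - n, h)])
    spec = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-9)
    return np.abs(np.fft.fft2(spec))

def stm_similarity(clean, noisy, fs):
    """Correlate the two modulation representations; a higher value stands
    in for better predicted intelligibility in this toy version."""
    a = modulation_content(clean, fs).ravel()
    b = modulation_content(noisy, fs).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Toy signals: an amplitude-modulated tone and a noise-corrupted copy.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
rng = np.random.default_rng(0)
noisy = clean + 0.5 * rng.standard_normal(len(clean))

s_cc = stm_similarity(clean, clean, fs)   # identical signals
s_cn = stm_similarity(clean, noisy, fs)   # degraded copy scores lower
```

In the models described above, this comparison is preceded by an individualized peripheral stage and a cortical modulation filterbank; the direct spectrogram-FFT shortcut here simply makes the clean-versus-noisy comparison concrete.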


2013 ◽  
Vol 24 (04) ◽  
pp. 293-306 ◽  
Author(s):  
Joshua G.W. Bernstein ◽  
Golbarg Mehraei ◽  
Shihab Shamma ◽  
Frederick J. Gallun ◽  
Sarah M. Theodoroff ◽  
...  

Background: A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose: The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design: The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample: Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis: STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. 
A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results: STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions: Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners.
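The STM stimuli in this abstract are ripple-modulated noise carriers defined by a temporal modulation rate (Hz), a spectral modulation density (cycles/octave), and a modulation depth. A sketch of how such a stimulus can be synthesized, assuming a random-phase tone complex spanning four octaves as the carrier (the tone-complex construction, tone spacing, and parameter names are assumptions; calibration to the 86 dB SPL presentation level is omitted):

```python
import numpy as np

def stm_ripple(rate_hz, density_cpo, depth, f0=400.0, octaves=4,
               tones_per_octave=20, dur=1.0, fs=32000, seed=0):
    """Four-octave random-phase tone complex whose component envelopes are
    modulated jointly in time (rate_hz) and log-frequency (density_cpo)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    n_tones = octaves * tones_per_octave
    x = np.arange(n_tones) / tones_per_octave        # position in octaves
    freqs = f0 * 2.0 ** x                            # log-spaced components
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    sig = np.zeros_like(t)
    for f, xi, ph in zip(freqs, x, phases):
        # Ripple envelope: depth=0 gives the unmodulated carrier.
        env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cpo * xi))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))                 # normalize to +/- 1

# One condition from the grid described above: 4 Hz rate, 2 cyc/oct density.
stim = stm_ripple(rate_hz=4, density_cpo=2, depth=0.5)
```

In a detection task like the one described, the threshold is the smallest `depth` at which this ripple can be distinguished from the unmodulated (`depth=0`) carrier, tracked adaptively across the rate-by-density grid.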

