Effect of Age on Prediction of Hearing Loss with the Bivariate-Plotting Procedure

1984 ◽  
Vol 27 (1) ◽  
pp. 12-19 ◽  
Author(s):  
Shlomo Silman ◽  
Carol Ann Silverman ◽  
Theresa Showers ◽  
Stanley A. Gelfand

The effect of age on accuracy of prediction of hearing impairment with the bivariate-plotting procedure was investigated in 72 normal-hearing subjects aged 20–69 years and in 86 sensorineural hearing-impaired subjects aged 20–83 years. The predictive accuracy with the bivariate-plotting procedure improved markedly when the data from subjects over 44 years of age were excluded from the bivariate plot. The predictive accuracy improved further when the construction of the line segments in the traditional bivariate plot was modified.
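For illustration, a minimal, hypothetical sketch of the bivariate-plotting idea: two measured variables are plotted against each other for each subject, and piecewise line segments divide the plane into "predicted normal" and "predicted impaired" regions. The axes, units, boundary coordinates, and data below are placeholders, not the values used in the study.

```python
# Hypothetical sketch of a bivariate plot with a piecewise-linear decision boundary.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholder bivariate data: one measured variable per axis for each subject.
normal = rng.normal(loc=(70, 85), scale=5, size=(30, 2))
impaired = rng.normal(loc=(95, 105), scale=5, size=(30, 2))

# Placeholder line-segment vertices forming the decision boundary.
boundary_x = np.array([60, 80, 100])
boundary_y = np.array([95, 88, 105])

def above_boundary(x, y):
    """Classify a point as 'predicted impaired' if it lies above the interpolated boundary."""
    return y > np.interp(x, boundary_x, boundary_y)

hits = above_boundary(impaired[:, 0], impaired[:, 1]).sum()
print(f"impaired subjects falling above the boundary: {hits}/{len(impaired)}")

fig, ax = plt.subplots()
ax.scatter(normal[:, 0], normal[:, 1], label="normal hearing")
ax.scatter(impaired[:, 0], impaired[:, 1], marker="x", label="hearing impaired")
ax.plot(boundary_x, boundary_y, "k--", label="line segments (boundary)")
ax.set_xlabel("variable 1 (placeholder units)")
ax.set_ylabel("variable 2 (placeholder units)")
ax.legend()
plt.show()
```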

2018 ◽  
Author(s):  
Lien Decruy ◽  
Jonas Vanthornhout ◽  
Tom Francart

Elevated hearing thresholds in hearing-impaired adults are usually compensated by providing amplification through a hearing aid. In spite of restored hearing sensitivity, difficulties with understanding speech in noisy environments often remain. One main reason is that sensorineural hearing loss causes not only a loss of audibility but also other deficits, including peripheral distortion and central temporal processing deficits. To investigate the neural consequences of hearing impairment underlying speech-in-noise difficulties, we compared EEG responses to natural speech of 14 hearing-impaired adults with those of 14 age-matched normal-hearing adults. We measured neural envelope tracking to sentences and a story masked by different levels of stationary noise or a competing talker. Despite their sensorineural hearing loss, hearing-impaired adults showed higher neural envelope tracking of the target than of the competing talker, similar to their normal-hearing peers. Furthermore, hearing impairment was related to an additional increase in neural envelope tracking of the target talker, suggesting that hearing-impaired adults may have an enhanced sensitivity to envelope modulations or may require a larger differential tracking of target versus competing talker to neurally segregate speech from noise. Lastly, both normal-hearing and hearing-impaired participants showed an increase in neural envelope tracking with increasing speech understanding. Hence, our results open avenues towards new clinical applications, such as neuro-steered prostheses as well as objective and automatic measurements of speech understanding performance.
Highlights:
- Adults with hearing impairment can neurally segregate speech from background noise
- Hearing loss is related to enhanced neural envelope tracking of the target talker
- Neural envelope tracking has potential to objectively measure speech understanding
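As a rough illustration of how envelope tracking is often quantified, the sketch below uses a backward (stimulus-reconstruction) model: a regularized linear decoder maps multichannel EEG onto the speech envelope, and the correlation between the reconstructed and actual envelope serves as the tracking measure. The data are simulated placeholders, and the filtering, lag range, and regularization are assumptions rather than the authors' exact pipeline.

```python
# Minimal backward-model sketch for neural envelope tracking (simulated data).
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
fs = 64                                   # sample rate after downsampling (Hz)
n_samples, n_channels = fs * 60, 16

envelope = np.abs(rng.normal(size=n_samples))   # placeholder speech envelope
eeg = rng.normal(size=(n_samples, n_channels))
eeg[:, 0] += 0.5 * envelope                     # embed some envelope-related signal

# Lagged design matrix: up to ~250 ms of EEG context per envelope sample.
lags = np.arange(0, int(0.250 * fs))
X = np.hstack([np.roll(eeg, -lag, axis=0) for lag in lags])[: n_samples - lags[-1]]
y = envelope[: n_samples - lags[-1]]

# Ridge-regression decoder: w = (X'X + lambda*I)^-1 X'y
lam = 1e2
w = solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

reconstruction = X @ w
tracking = np.corrcoef(reconstruction, y)[0, 1]  # envelope-tracking score
print(f"envelope tracking (Pearson r): {tracking:.2f}")
```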


1999 ◽  
Vol 42 (4) ◽  
pp. 773-784 ◽  
Author(s):  
Christopher W. Turner ◽  
Siu-Ling Chi ◽  
Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli are presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners become a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
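The kind of processing that limits speech to a fixed number of spectral channels is commonly implemented as a noise-band vocoder, sketched below: the signal is split into contiguous frequency bands, each band's temporal envelope is extracted and used to modulate band-limited noise, and the channels are summed. The filter orders, band edges, and envelope extraction here are illustrative assumptions, not the study's exact parameters.

```python
# Simplified noise-band vocoder sketch (parameters are illustrative).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=6000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        env = np.abs(hilbert(band))                     # temporal envelope of the band
        carrier = sosfiltfilt(band_sos, rng.normal(size=speech.shape))  # band-limited noise
        out += env * carrier
    return out

# Example: vocode a placeholder "speech-like" signal with 1, 2, 4, or 8 channels.
fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = {n: noise_vocode(signal, fs, n) for n in (1, 2, 4, 8)}
```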


Life ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 360 ◽
Author(s):  
Jan Boeckhaus ◽  
Nicola Strenzke ◽  
Celine Storz ◽  
Oliver Gross ◽  
...  

Most adults with Alport syndrome (AS) suffer from progressive sensorineural hearing loss. However, little is known about the early characteristics of hearing loss in children with AS. As part of the EARLY PRO-TECT Alport trial, this study was the first clinical trial to investigate hearing loss in children with AS over a timespan of up to six years. Nine of 51 children (18%) had hearing impairment. Audiograms were divided into three age groups: in the 5–9-year-olds, the four-frequency pure-tone average (4PTA) was 8.9 decibels (dB) (n = 15) in those with normal hearing and 43.8 dB (n = 2, 12%) in those with hearing impairment. Among the 10–13-year-olds, the 4PTA was 4.8 dB (healthy, n = 12) and 41.4 dB (hearing impaired, n = 6, 33%). For the 14–20-year-olds, the 4PTA was 7.0 dB (healthy, n = 9) and 48.2 dB (hearing impaired, n = 3, 25%). On average, hearing thresholds of the hearing-impaired group increased, especially at frequencies between 1 and 3 kHz. In conclusion, 18% of children developed hearing loss, with maximum hearing loss in the audiograms at 1–3 kHz. The percentage of children with hearing impairment increased from 10% at baseline to 18% at the end of the trial, as did the severity of hearing loss.
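A pure-tone average of this kind is simple arithmetic over audiogram thresholds. The sketch below assumes the conventional four frequencies (0.5, 1, 2, and 4 kHz); the trial's exact frequency set is not specified here.

```python
# Minimal 4PTA sketch: average air-conduction thresholds (dB HL) over four frequencies.
def four_pta(thresholds_db_hl: dict) -> float:
    freqs = (500, 1000, 2000, 4000)   # conventional choice; an assumption here
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

# Example audiogram with a notch around 1-3 kHz (values are illustrative).
audiogram = {250: 15, 500: 20, 1000: 45, 2000: 55, 3000: 60, 4000: 50, 8000: 30}
print(f"4PTA = {four_pta(audiogram):.1f} dB HL")   # (20 + 45 + 55 + 50) / 4 = 42.5
```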


1981 ◽  
Vol 24 (1) ◽  
pp. 108-112 ◽  
Author(s):  
P. M. Zurek ◽  
C. Formby

Thresholds for frequency modulation were measured by an adaptive, two-alternative, forced-choice method with ten listeners: eight who showed varying degrees of sensorineural hearing impairment and two with normal hearing sensitivity. Results for test frequencies spaced at octave intervals between 125 and 4000 Hz showed that, relative to normal-hearing listeners, the ability of the hearing-impaired listeners to detect a sinusoidal frequency modulation (1) is diminished above a certain level of hearing loss and (2) is more disrupted for low-frequency tones than for high-frequency tones, given the same degree of hearing loss at the test frequency. The first finding is consistent with previous studies, which show a general deterioration of frequency-discrimination ability associated with moderate, or worse, hearing loss. It is proposed that the second finding may be explained (1) by differential impairment of the temporal and place mechanisms presumed to encode pitch at the lower and higher frequencies, respectively, and/or (2) for certain configurations of hearing loss, by the asymmetrical pattern of cochlear excitation, which may lead to the underestimation, from measurements of threshold sensitivity, of hearing impairment for low-frequency tones and consequently to relatively large changes in frequency discrimination for small shifts in hearing threshold.
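To make the stimulus and adaptive logic concrete, here is a minimal sketch: one interval carries a sinusoidally frequency-modulated tone, the other a steady tone, and the modulation depth is adapted across trials. The 2-down/1-up rule, modulation rate, and step size are assumptions; the study's exact adaptive parameters are not reproduced here.

```python
# Sketch of an FM stimulus and a simple 2-down/1-up depth update (illustrative parameters).
import numpy as np

def fm_tone(fc, fm, depth_hz, dur, fs=44100):
    """Sinusoidal FM: instantaneous frequency = fc + depth_hz * sin(2*pi*fm*t)."""
    t = np.arange(int(dur * fs)) / fs
    phase = 2 * np.pi * fc * t - (depth_hz / fm) * np.cos(2 * np.pi * fm * t)
    return np.sin(phase)

def update_depth(depth, correct, streak, step=0.8):
    """2-down/1-up: shrink depth after two consecutive correct responses, grow after an error."""
    if correct:
        streak += 1
        if streak == 2:
            return depth * step, 0
        return depth, streak
    return depth / step, 0

# One trial pair and two adaptive updates, purely for illustration.
target = fm_tone(fc=1000, fm=4.0, depth_hz=8.0, dur=0.5)   # modulated interval
reference = fm_tone(fc=1000, fm=4.0, depth_hz=0.0, dur=0.5)  # unmodulated comparison
depth, streak = 8.0, 0
depth, streak = update_depth(depth, correct=True, streak=streak)   # first correct: no change
depth, streak = update_depth(depth, correct=True, streak=streak)   # second correct: depth shrinks
```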


1980 ◽  
Vol 45 (3) ◽  
pp. 401-407 ◽  
Author(s):  
Daniel J. Orchik ◽  
Norma Roddy

The Synthetic Sentence Identification (SSI) and the Northwestern University Auditory Test No. 6 (NU6) were compared in a hearing aid evaluation procedure using normal-hearing listeners and subjects with sensorineural hearing loss. Listener performance was assessed at three message-to-competition ratios (MCR) employing the same competing message. Aided benefit and residual deficit were evaluated for both measures and, in general, the results obtained with the NU6 indicated greater aided benefit as well as greater residual deficit than the SSI for these hearing-impaired subjects. The results are discussed in terms of the implications for clinical hearing aid evaluations.
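As a small worked example of the two derived measures, under the common definitions (an assumption; the paper's exact computation is not reproduced here): aided benefit is the aided score minus the unaided score, and residual deficit is a normal-hearing reference score minus the aided score, each evaluated at a given message-to-competition ratio (MCR).

```python
# Illustrative aided-benefit / residual-deficit arithmetic (scores in % correct).
def aided_benefit(aided_pct, unaided_pct):
    return aided_pct - unaided_pct

def residual_deficit(normal_ref_pct, aided_pct):
    return normal_ref_pct - aided_pct

unaided, aided, normal_ref = 48.0, 72.0, 94.0      # placeholder scores at one MCR
print(aided_benefit(aided, unaided))               # 24.0 points of benefit from the aid
print(residual_deficit(normal_ref, aided))         # 22.0 points still below the reference
```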


2004 ◽  
Vol 15 (03) ◽  
pp. 216-225 ◽  
Author(s):  
Ruth A. Bentler ◽  
Catherine Palmer ◽  
Andrew B. Dittberner

In this study, the performance of 48 listeners with normal hearing was compared to the performance of 46 listeners with documented hearing loss. Various conditions of directional and omnidirectional hearing aid use were studied. The results indicated that when the noise around a listener was stationary, a first- or second-order directional microphone allowed a group of hearing-impaired listeners with mild-to-moderate, bilateral, sensorineural hearing loss to perform similarly to normal hearing listeners on a speech-in-noise task (i.e., they required the same signal-to-noise ratio to achieve 50% understanding). When the noise source was moving around the listener, only the second-order (three-microphone) system set to an adaptive directional response (where the polar pattern changes due to the change in noise location) allowed a group of hearing-impaired individuals with mild-to-moderate sensorineural hearing loss to perform similarly to young, normal-hearing individuals.
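For context, a first-order directional microphone of the kind described can be modeled with the polar response R(theta) = (1 - a) + a*cos(theta), where the mixing parameter a sets the pattern (omni, cardioid, figure-8). The sketch below also shows, in simplified form, the adaptive idea of re-choosing the pattern so its null points at a moving noise source. All parameter values are illustrative assumptions, not the devices tested in the study.

```python
# First-order directional pattern and simplified null steering (illustrative).
import numpy as np

def first_order_pattern(theta_rad, a=0.5):
    """a = 0: omni, a = 0.5: cardioid, a = 1: bidirectional (figure-8)."""
    return np.abs((1 - a) + a * np.cos(theta_rad))

theta = np.linspace(0, 2 * np.pi, 361)
cardioid = first_order_pattern(theta, a=0.5)        # fixed pattern, null at 180 degrees

# Adaptive behavior, simplified: pick a so the null sits at the noise azimuth theta_n,
# i.e., solve (1 - a) + a*cos(theta_n) = 0.
theta_n = np.deg2rad(135)
a_adapt = 1.0 / (1.0 - np.cos(theta_n))
print(first_order_pattern(theta_n, a_adapt))        # ~0: pattern null on the noise source
```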


Author(s):  
Gregory M. Ellis ◽  
Pamela E. Souza

Background: Clinics are increasingly turning toward virtual environments to demonstrate and validate hearing aid fittings in "realistic" listening situations before the patient leaves the clinic. One of the most cost-effective and straightforward ways to create such an environment is a small loudspeaker array with amplitude panning. Amplitude panning is a signal playback method that changes the perceived location of a source by changing the levels of two or more loudspeakers. The perceptual consequences (i.e., perceived source width and location) of amplitude panning have been well documented for listeners with normal hearing but not for listeners with hearing impairment. Purpose: The purpose of this study was to examine the perceptual consequences of amplitude panning for listeners ranging from normal hearing to moderate sensorineural hearing loss. Research Design: Listeners performed a localization task. Sound sources were broadband, 4-Hz amplitude-modulated white noise bursts. Thirty-nine sources (14 physical) were produced either by physical loudspeakers or via amplitude panning. Listeners completed a training block of 39 trials (one for each source) before completing three test blocks of 39 trials each. Source production method was randomized within each block. Study Sample: Twenty-seven adult listeners (mean age 52.79 years, standard deviation 27.36; 10 males, 17 females) with hearing ranging from within normal limits to moderate bilateral sensorineural hearing loss participated in the study. Listeners were recruited from a laboratory database of listeners who consented to being informed about available studies. Data Collection and Analysis: Listeners indicated the perceived source location via a touch screen. Outcome variables were azimuth error, elevation error, and total angular error (Euclidean distance in degrees between the perceived and correct locations). Listeners' pure-tone averages (PTAs) were calculated and used in mixed-effects models along with source type and the interaction between source type and PTA as predictors. Subject was included as a random variable. Results: Significant interactions between PTA and source production method were observed for total and elevation errors. Listeners with higher PTAs (i.e., worse hearing) did not localize physical and panned sources differently, whereas listeners with lower PTAs (i.e., better hearing) did. No interaction was observed for azimuth errors; however, there was a significant main effect of PTA. Conclusion: As hearing impairment becomes more severe, listeners localize physical and panned sources with similar errors. Because physical and panned sources are not localized differently by adults with hearing loss, amplitude panning could be an appropriate method for constructing virtual environments for these listeners.
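A minimal sketch of pair-wise amplitude panning (VBAP-style) illustrates the playback method described: the gains of two loudspeakers are chosen so that their weighted direction vectors point at the desired virtual source, then scaled for constant power. The speaker angles below are illustrative, not the array used in the study.

```python
# Pair-wise amplitude panning gains for a stereo loudspeaker pair (illustrative geometry).
import numpy as np

def pan_pair(source_deg, left_deg=30.0, right_deg=-30.0):
    """Return (g_left, g_right) for a virtual source between two loudspeakers."""
    def unit(deg):
        rad = np.deg2rad(deg)
        return np.array([np.cos(rad), np.sin(rad)])
    speakers = np.column_stack([unit(left_deg), unit(right_deg)])  # speaker basis vectors
    g = np.linalg.solve(speakers, unit(source_deg))                # unnormalized gains
    return g / np.linalg.norm(g)                                   # constant-power scaling

print(pan_pair(0.0))    # centered source -> equal gains (~0.707, 0.707)
print(pan_pair(15.0))   # source toward the left speaker -> larger left gain
```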


1990 ◽  
Vol 33 (4) ◽  
pp. 726-735 ◽  
Author(s):  
Larry E. Humes ◽  
Lisa Roberts

The role that sensorineural hearing loss plays in the speech-recognition difficulties of the hearing-impaired elderly is examined. One approach to this issue was to make between-group comparisons of performance for three groups of subjects: (a) young normal-hearing adults; (b) elderly hearing-impaired adults; and (c) young normal-hearing adults with simulated sensorineural hearing loss equivalent to that of the elderly subjects produced by a spectrally shaped masking noise. Another approach to this issue employed correlational analyses to examine the relation between audibility and speech recognition within the group of elderly hearing-impaired subjects. An additional approach was pursued in which an acoustical index incorporating adjustments for threshold elevation was used to examine the role audibility played in the speech-recognition performance of the hearing-impaired elderly. A wide range of listening conditions was sampled in this experiment. The conclusion was that the primary determiner of speech-recognition performance in the elderly hearing-impaired subjects was their threshold elevation.
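The "acoustical index incorporating adjustments for threshold elevation" is in the spirit of articulation-index-style audibility measures. The sketch below shows the general idea only: in each band, the proportion of an assumed 30-dB speech dynamic range lying above both the noise and the listener's threshold is weighted by a band-importance value and summed. The band levels, importance weights, and dynamic range are illustrative simplifications, not the specific index used in the study.

```python
# Simplified articulation-index-style audibility calculation (illustrative values).
def audibility_index(speech_peaks_db, noise_db, thresholds_db, importance):
    ai = 0.0
    for peak, noise, thresh, w in zip(speech_peaks_db, noise_db, thresholds_db, importance):
        floor = max(noise, thresh)                       # whichever masks more in this band
        audible = min(max((peak - floor) / 30.0, 0.0), 1.0)
        ai += w * audible
    return ai

# Example: a sloping high-frequency loss removes most audibility in the upper bands.
speech_peaks = [60, 62, 60, 55, 50]        # dB SPL band peaks (illustrative)
noise        = [30, 30, 30, 30, 30]
thresholds   = [20, 25, 45, 60, 70]
importance   = [0.15, 0.25, 0.25, 0.20, 0.15]
print(f"AI = {audibility_index(speech_peaks, noise, thresholds, importance):.2f}")
```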


2021 ◽  
Vol 25 ◽  
pp. 233121652110656
Author(s):  
Oliver Scheuregger ◽  
Jens Hjortkjær ◽  
Torsten Dau

Sound textures are a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that sound texture perception is mediated by time-averaged summary statistics measured from early stages of the auditory system. The ability of young normal-hearing (NH) listeners to identify synthetic sound textures increases as the statistics of the synthetic texture approach those of its real-world counterpart. In sound texture discrimination, young NH listeners utilize the fine temporal stimulus information for short-duration stimuli, whereas they switch to a time-averaged statistical representation as the stimulus duration increases. The present study investigated how younger and older listeners with a sensorineural hearing impairment perform in the corresponding texture identification and discrimination tasks, with the stimuli amplified to compensate for the individual listeners' loss of audibility. In both hearing-impaired (HI) listeners and NH controls, sound texture identification performance increased as the number of statistics imposed during the synthesis stage increased, but hearing impairment was accompanied by a significant reduction in overall identification accuracy. Sound texture discrimination performance was measured across listener groups categorized by age and hearing loss and was unaffected by hearing loss at all excerpt durations. The older listeners' sound texture and exemplar discrimination performance decreased for signals of short excerpt duration, with older HI listeners performing better than older NH listeners. The results suggest that the time-averaged statistical representations of sound textures provide listeners with cues that are robust to the effects of age and sensorineural hearing loss.
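A rough sketch of the "time-averaged summary statistics" idea: band-pass subband envelopes are extracted and reduced to statistics such as marginal moments and cross-band envelope correlations. The band edges and the statistic set below are illustrative; the full texture model also includes further statistics (e.g., modulation power) not shown here.

```python
# Simplified subband-envelope summary statistics for a sound texture (illustrative).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_statistics(x, fs, band_edges=(100, 400, 1600, 6400)):
    envs = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, x))))   # subband envelope
    envs = np.array(envs)
    return {
        "mean": envs.mean(axis=1),
        "coef_var": envs.std(axis=1) / envs.mean(axis=1),
        "skew": ((envs - envs.mean(axis=1, keepdims=True)) ** 3).mean(axis=1)
                / envs.std(axis=1) ** 3,
        "cross_band_corr": np.corrcoef(envs),
    }

fs = 16000
texture = np.random.default_rng(2).normal(size=fs * 2)   # placeholder 2-s "texture"
stats = texture_statistics(texture, fs)
```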

