Effects of aging on sound localization and speech understanding in realistic listening situations

1999 ◽  
Vol 105 (2) ◽  
pp. 1151-1151
Author(s):  
Janet Koehnke ◽  
Joan Besing


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Tom Gawliczek ◽  
Wilhelm Wimmer ◽  
Fabio Munzinger ◽  
Marco Caversaccio ◽  
Martin Kompis

Objective. To measure the audiological benefit of the Baha SoundArc, a recently introduced nonimplantable wearing option for bone conduction sound processors, and to compare it with the established softband wearing option in subjects with normal cochlear function and a purely conductive bilateral hearing loss.
Methods. Both ears of 15 normal-hearing subjects were occluded for the duration of the measurements, yielding an average unaided threshold of 49 dB HL (0.5–4 kHz). Soundfield thresholds, speech understanding in quiet and in noise, and sound localization were measured in the unaided condition and with 1 or 2 Baha 5 sound processors mounted on either a softband or a SoundArc device.
Results. Soundfield thresholds and speech reception thresholds improved by 19.5 to 24.8 dB (p < .001) compared with the unaided condition. Speech reception thresholds in noise improved by 3.7 to 4.7 dB (p < .001). Using 2 sound processors rather than 1 improved speech understanding in noise for speech from the direction of the 2nd device and reduced the sound localization error by 23° to 28°. No statistically significant difference was found between the SoundArc and the softband wearing options in any of the tests.
Conclusions. A bone conduction sound processor mounted on a SoundArc or a softband resulted in considerable improvements in hearing and speech understanding in subjects with a simulated, purely conductive, bilateral hearing loss. No significant difference was found between the 2 wearing options. Using 2 sound processors improves sound localization and speech understanding in noise in certain spatial settings.
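As a rough illustration of the aided-benefit and localization-error metrics reported above, the following Python sketch computes them from hypothetical data; the threshold values, loudspeaker azimuths, and pointing responses are invented for the example and are not taken from the study.

import numpy as np

# Hypothetical pure-tone-average thresholds (dB HL) at 0.5, 1, 2, and 4 kHz.
unaided = np.array([49.0, 48.0, 50.0, 49.0])
aided = np.array([27.0, 26.0, 28.0, 25.0])
# Aided benefit = unaided minus aided threshold, averaged across frequencies.
benefit_db = np.mean(unaided - aided)
# Localization error: mean absolute difference between loudspeaker and response azimuths.
target_az = np.array([-60, -30, 0, 30, 60])      # loudspeaker azimuths in degrees
response_az = np.array([-35, -10, 5, 20, 80])    # hypothetical pointing responses
loc_error_deg = np.mean(np.abs(response_az - target_az))
print(f"Mean aided benefit: {benefit_db:.1f} dB")
print(f"Mean absolute localization error: {loc_error_deg:.1f} deg")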


2019 ◽  
Vol 30 (08) ◽  
pp. 659-671 ◽  
Author(s):  
Ashley Zaleski-King ◽  
Matthew J. Goupell ◽  
Dragana Barac-Cikoja ◽  
Matthew Bakke

Abstract
Background: Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are, at best, inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) well enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments.
Purpose: To determine whether bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task.
Research Design: Repeated-measures design.
Study Sample: Seven adult bimodal CI users (28–62 years). All listeners reported regular use of digital HA technology in the nonimplanted ear.
Data Collection and Analysis: The seven bimodal listeners were asked to balance the loudness of prerecorded single-syllable utterances. The loudness-balanced stimuli were then presented via the direct audio inputs of the two devices with an ITD applied. The task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured for different spatial locations both with and without inclusion of the IDD correction, which was added with the intent of perceptually synchronizing the devices.
Results: During the loudness-balancing task, all listeners required increased acoustic input to the HA relative to the CI most comfortable level to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left or the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other. When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who were able to consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though they could have used a monaural loudness cue strategy.
Conclusions: These data suggest that sound localization is extremely difficult for most bimodal listeners. This difficulty does not seem to be caused by large loudness imbalances and IDDs. Sound localization is best when performed via a binaural comparison, in which frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with a HA combined with a CI may produce an overlapping region of frequency-matched inputs, and thus provide an opportunity for binaural comparisons for some bimodal listeners, our study showed that this may not be beneficial or useful for spatial location discrimination tasks. The inability of our listeners to use monaural-level cues to perform the MAA task highlights the difficulty of using a HA and CI together to glean information on the direction of a sound source.
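A minimal sketch of the kind of interdevice-delay manipulation described above, assuming a 44.1 kHz sample rate and a noise stand-in for the recorded utterances (both are assumptions, not details from the study): one channel is delayed relative to the other by a fixed number of samples.

import numpy as np

fs = 44100                                 # sample rate in Hz (assumed)
idd_ms = 0.7                               # interdevice delay, within the range reported above
delay_samples = int(round(idd_ms * 1e-3 * fs))
t = np.arange(int(fs * 0.5)) / fs          # 500 ms stimulus (assumed duration)
signal = np.random.default_rng(0).standard_normal(t.size)   # stand-in for an utterance
# Delay the HA channel relative to the CI channel by zero-padding at its start;
# pad the CI channel at the end so both channels keep the same length.
ci_channel = np.concatenate([signal, np.zeros(delay_samples)])
ha_channel = np.concatenate([np.zeros(delay_samples), signal])
stereo = np.stack([ci_channel, ha_channel], axis=1)          # columns: [CI, HA]
print(stereo.shape, f"IDD = {idd_ms} ms -> {delay_samples} samples")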


2021 ◽  
Vol 12 ◽  
Author(s):  
Alexandra Annemarie Ludwig ◽  
Sylvia Meuret ◽  
Rolf-Dieter Battmer ◽  
Marc Schönwiesner ◽  
Michael Fuchs ◽  
...  

Spatial hearing is crucial in real life but deteriorates in listeners with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured separately at eight loudspeaker positions: 4°, 30°, 60°, and 90° azimuth on the CI side and on the normal-hearing side. Low- and high-frequency noise bursts were used in the tests to investigate possible differences in the processing of interaural time and level differences. Data were compared to those of normal-hearing adults aged between 20 and 83 years. In addition, the benefit of the CI for speech understanding in noise was compared with localization ability. Fifteen out of 18 participants were able to localize signals on the CI side and on the normal-hearing side, although performance was highly variable across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulty localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with the CI than without it, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore localization in participants with single-sided deafness. Difficulties may remain at frontal locations and on the CI side. However, speech understanding in noise improves when wearing the CI. Treatment with a CI in these participants might therefore provide real-world benefits, such as improved orientation in traffic and better speech understanding in difficult listening situations.
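For orientation only, the sketch below estimates the interaural time difference available at the four tested azimuths using the Woodworth spherical-head approximation; the head radius and speed of sound are assumed values, and this model is not part of the study's methods.

import numpy as np

a = 0.0875                                  # head radius in metres (assumed average)
c = 343.0                                   # speed of sound in m/s
azimuths_deg = np.array([4, 30, 60, 90])    # the tested loudspeaker azimuths
theta = np.deg2rad(azimuths_deg)
itd_us = (a / c) * (theta + np.sin(theta)) * 1e6   # Woodworth approximation, in microseconds
for az, itd in zip(azimuths_deg, itd_us):
    print(f"{az:3d} deg azimuth -> ~{itd:3.0f} us ITD")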


Author(s):  
Pedro Luiz Mangabeira-Albernaz ◽  
Andrea Felice dos Santos Malerbi

Abstract
Introduction: Cochlear implants have been proposed for cases of unilateral hearing loss, especially in patients impaired by tinnitus. Several studies have shown that they produce definite improvements in sound localization and speech understanding, both in quiet and in noisy environments. On the other hand, there are few reports on cochlear implants in patients whose better ears also present hearing loss.
Objective: To report the audiological outcomes of cochlear implantation in three patients with unilateral deafness whose better ears also presented hearing loss.
Methods: Three patients with unilateral profound hearing loss underwent cochlear implantation performed by the same surgeon.
Results: The patients' data are presented in detail.
Conclusion: The indications for cochlear implants are becoming more diverse with the expansion of clinical experience and the observation that they clearly help patients with particular hearing problems.


1990 ◽  
Vol 33 (4) ◽  
pp. 654-659 ◽  
Author(s):  
Jerry L. Cranford ◽  
Martha Boose ◽  
Christopher A. Moore

The precedence effect in sound localization can be evoked by presenting identical sounds (e.g., clicks) from pairs of loudspeakers placed on opposite sides of a subject’s head. With appropriate inter-loudspeaker delays, normal subjects perceive a fused image originating from the side of the leading loudspeaker. Separate tests with inter-loudspeaker delays ranging from 0 to 8 ms were administered to groups of young and elderly subjects. At 0 ms delay, young subjects perceived the fused image to be located halfway between the loudspeakers; at progressively longer delays, the image was perceived closer to the leading loudspeaker. A significant number of elderly subjects exhibited discrimination difficulties at delays below 0.7 ms.
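A toy sketch of such a stimulus, with an assumed 48 kHz sample rate and a simplified single-sample click (neither taken from the study): two identical clicks are written to the left and right channels, with the lagging channel delayed by the chosen inter-loudspeaker delay.

import numpy as np

fs = 48000                     # sample rate in Hz (assumed)
delay_ms = 0.7                 # inter-loudspeaker delay, from the range reported above
delay_samples = int(round(delay_ms * 1e-3 * fs))
click = np.zeros(100)
click[0] = 1.0                 # simplified single-sample click
length = click.size + delay_samples
left = np.zeros(length)        # leading loudspeaker
right = np.zeros(length)       # lagging loudspeaker
left[:click.size] = click
right[delay_samples:delay_samples + click.size] = click
stimulus = np.stack([left, right], axis=1)
print(f"Delay of {delay_ms} ms = {delay_samples} samples at {fs} Hz")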


2014 ◽  
Vol 25 (07) ◽  
pp. 656-665 ◽  
Author(s):  
Su-Hyun Jin ◽  
Chang Liu ◽  
Douglas P. Sladen

Background: Speech understanding in noise is especially problematic for older listeners, both with and without hearing loss, and age-related changes in temporal resolution might be associated with reduced speech recognition in complex noise.
Purpose: The purpose of this study was to investigate the effects of aging on temporal processing and speech perception in noise for normal-hearing (NH) and cochlear-implant (CI) listeners.
Research Design: All participants completed three experimental procedures: (1) amplitude modulation (AM) detection thresholds, (2) sentence recognition in quiet, and (3) speech recognition in steady or modulated noise.
Study Sample: Four listener groups participated in the study: 11 younger (≤ 30 yr old, YNH) and 12 older (> 60 yr old, ONH) listeners with NH, and 7 younger (< 55 yr old, YCI) and 6 older (> 60 yr old, OCI) CI users. The CI listeners had been wearing their devices, either monaurally or binaurally, for at least 1 yr.
Data Collection and Analysis: For speech recognition testing, there were eight listening conditions in noise (4 modulation frequencies × 2 signal-to-noise ratios) and one in quiet for each listener. For modulation detection testing, a broadband noise with a duration of 500 msec served as the stimulus; it was modulated at the three temporal modulation frequencies (2, 4, and 8 Hz) that were also used to modulate the noise in the speech recognition experiment. We measured AM detection thresholds using a two-interval, two-alternative, forced-choice adaptive procedure. We conducted a series of analysis-of-variance tests to examine the effect of aging on each test result and measured the correlation coefficient between speech recognition in noise and modulation detection thresholds.
Results: Although older NH and CI listeners performed similarly to the younger listeners with the same hearing status for sentence recognition in quiet, there was a significant aging effect on speech recognition in noise. Regardless of modulation frequency and signal-to-noise ratio, speech recognition scores of the older listeners were poorer than those of the younger listeners when hearing status was matched. We also found a significant effect of aging on AM detection at each modulation frequency and a strong correlation between speech recognition in modulated noise and AM detection thresholds at 2 and 4 Hz.
Conclusions: Regardless of differences in hearing status, the degree and pattern of the aging effect on auditory processing were similar in the NH and CI listener groups. This result suggests that age-related declines in speech understanding are likely multifactorial, involving both peripheral and central factors. Although the age cutoff of the current older group was 10 yr lower than in previous studies (Dubno et al, 2002; Lin et al, 2011), we still found age-related differences on the two auditory tasks. This study extends the knowledge of age-related auditory perception difficulties to CI listeners.
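The modulated-noise stimulus described above is straightforward to sketch: the snippet below generates 500 msec of broadband noise with sinusoidal amplitude modulation at 2, 4, or 8 Hz. The sample rate and modulation depth are assumed for illustration and are not specified in the abstract.

import numpy as np

fs = 44100                          # sample rate in Hz (assumed)
t = np.arange(int(fs * 0.5)) / fs   # 500 msec carrier, as stated above
rng = np.random.default_rng(0)

def am_noise(fm_hz, depth):
    """Broadband Gaussian noise with sinusoidal AM at rate fm_hz and depth 0..1."""
    carrier = rng.standard_normal(t.size)
    modulator = 1.0 + depth * np.sin(2 * np.pi * fm_hz * t)
    return carrier * modulator

for fm in (2, 4, 8):                # the three modulation rates used in the study
    stim = am_noise(fm, depth=0.5)  # modulation depth assumed
    print(f"{fm} Hz AM noise: {stim.size} samples, RMS = {stim.std():.2f}")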


Author(s):  
K. Cullen-Dockstader ◽  
E. Fifkova

Normal aging results in a pronounced spatial memory deficit associated with a rapid decay of long-term potentiation at the synapses between the perforant path and spines in the medial and distal thirds of the dentate molecular layer (DML), suggesting an alteration of synaptic transmission in the dentate fascia. While the number of dentate granule cells remains unchanged and there are no obvious pathological changes in these cells associated with increasing age, the density of their axospinous contacts has been shown to decrease. There are indications that the presynaptic element is affected by senescence before the postsynaptic element, yet little attention has been given to the fine structure of the remaining axon terminals. Therefore, we studied the axon terminals of the perforant path in the DML across three age groups. Five male rats (Fischer 344) from each age group (3, 24, and 30 months) were perfused through the aorta.


2020 ◽  
Vol 29 (3) ◽  
pp. 391-403
Author(s):  
Dania Rishiq ◽  
Ashley Harkrider ◽  
Cary Springer ◽  
Mark Hedrick

Purpose The main purpose of this study was to evaluate aging effects on the predominantly subcortical (brainstem) encoding of the second-formant frequency transition, an essential acoustic cue for perceiving place of articulation.
Method Synthetic consonant–vowel syllables varying in second-formant onset frequency (i.e., /ba/, /da/, and /ga/ stimuli) were used to elicit speech-evoked auditory brainstem responses (speech-ABRs) in 16 young adults (M age = 21 years) and 11 older adults (M age = 59 years). Repeated-measures mixed-model analyses of variance were performed on the latencies and amplitudes of the speech-ABR peaks. Fixed factors were phoneme (repeated measures on three levels: /b/ vs. /d/ vs. /g/) and age (two levels: young vs. older).
Results Speech-ABR differences were observed between the two groups (young vs. older adults). Specifically, older listeners showed generalized amplitude reductions for the onset and major peaks. Significant Phoneme × Group interactions were not observed.
Conclusions Results showed aging effects in speech-ABR amplitudes that may reflect diminished subcortical encoding of consonants in older listeners. These aging effects were not phoneme dependent, as assessed with the statistical methods used in this study.
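As a purely hypothetical illustration of how a peak latency and amplitude might be read from an averaged speech-ABR waveform (this is not the authors' analysis pipeline; the sampling rate, search window, and stand-in waveform are all invented), consider the following sketch.

import numpy as np

fs = 10000                                    # sampling rate in Hz (assumed)
t_ms = np.arange(0, 60, 1000 / fs)            # 60 ms post-stimulus analysis window
rng = np.random.default_rng(1)
# Stand-in averaged response (microvolts): a damped oscillation plus noise.
waveform_uv = (0.3 * np.exp(-t_ms / 20) * np.sin(2 * np.pi * 0.1 * t_ms)
               + 0.02 * rng.standard_normal(t_ms.size))
# Search an assumed latency window (6-10 ms) for the onset peak and report it.
win = np.where((t_ms >= 6) & (t_ms <= 10))[0]
peak = win[np.argmax(waveform_uv[win])]
print(f"Onset peak: latency = {t_ms[peak]:.2f} ms, amplitude = {waveform_uv[peak]:.3f} uV")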

