Effects of aging on voice emotion recognition in cochlear implant users and normally hearing adults listening to spectrally degraded speech

2019 ◽  
Vol 145 (3) ◽  
pp. 1820-1820
Author(s):  
Shauntelle Cannon ◽  
Monita Chatterjee
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Diana S. Cortes ◽  
Christina Tornberg ◽  
Tanja Bänziger ◽  
Hillary Anger Elfenbein ◽  
Håkan Fischer ◽  
...  

Abstract
Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive, and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions show age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.


Author(s):  
Md Jahangir Alam ◽  
Yazid Attabi ◽  
Patrick Kenny ◽  
Pierre Dumouchel ◽  
Douglas O’Shaughnessy

2016 ◽  
Vol 87 ◽  
pp. 219-232 ◽  
Author(s):  
Patrizia Mancini ◽  
Ilaria Giallini ◽  
Luca Prosperini ◽  
Hilal Dincer D'alessandro ◽  
Letizia Guerzoni ◽  
...  

2020 ◽  
Vol 63 (6) ◽  
pp. 1712-1725
Author(s):  
Xin Luo ◽  
Courtney Kolberg ◽  
Kathryn R. Pulling ◽  
Tamiko Azuma

Purpose: This study aimed to evaluate the effects of aging and cochlear implant (CI) use on psychoacoustic and speech recognition abilities and to assess the relative contributions of psychoacoustic and demographic factors to speech recognition of older CI (OCI) users.
Method: Twelve OCI users, 12 older acoustic-hearing (OAH) listeners age-matched to the OCI users, and 12 younger normal-hearing (YNH) listeners underwent tests of temporal amplitude modulation detection, temporal gap detection in noise, and spectral–temporal modulated ripple discrimination. Speech reception thresholds were measured for sentence recognition in multitalker, speech-babble noise.
Results: Statistical analyses showed that, for the small sample of OAH listeners, the degree of hearing loss did not significantly affect any outcome measure. Temporal resolution, spectral resolution, and speech recognition all significantly degraded with both age and the use of a CI (i.e., YNH performance was better than OAH, and OAH better than OCI). Although both were significantly correlated with OCI users' speech recognition, the duration of CI use no longer had a significant effect on speech recognition once the effect of spectral–temporal ripple discrimination performance was taken into account. For OAH listeners, the only significant predictor of speech recognition was temporal gap detection performance.
Conclusion: The preliminary results suggest that speech recognition of OCI users may improve with longer duration of CI use, mainly due to higher perceptual acuity to spectral–temporal modulated ripples in acoustic stimuli.
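The Results section above describes a covariate-control pattern: the apparent effect of CI duration on speech recognition disappears once ripple-discrimination performance enters the model. A minimal sketch of that statistical pattern, using entirely synthetic (hypothetical) data and ordinary least squares — not the study's data or analysis code:

```python
import numpy as np

# Hypothetical data: ripple discrimination drives speech recognition,
# and CI duration correlates with ripple skill (longer use, better skill).
rng = np.random.default_rng(1)
n = 200
ci_years = rng.uniform(1, 20, n)
ripple = 0.5 * ci_years + rng.normal(0, 1.0, n)
speech = 2.0 * ripple + rng.normal(0, 1.0, n)   # depends on ripple only

def ols_coefs(predictors, y):
    """OLS coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Coefficient on CI duration alone vs. after controlling for ripple skill.
b_simple = ols_coefs([ci_years], speech)[1]
b_partial = ols_coefs([ci_years, ripple], speech)[1]

# The partial coefficient shrinks toward zero: CI duration's apparent
# effect is carried by its correlation with ripple discrimination.
print(b_simple, b_partial)
```

This mirrors the abstract's logic only in form; the study's actual effect sizes and model are in the paper itself.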


Author(s):  
Yung‐Song Lin ◽  
Che‐Ming Wu ◽  
Charles J. Limb ◽  
Hui‐Ping Lu ◽  
I. Jung Feng ◽  
...  

Author(s):  
Qingqing Meng ◽  
Yiwen Li Hegner ◽  
Iain Giblin ◽  
Catherine McMahon ◽  
Blake W Johnson

Abstract
Cortical activity has been shown to track different levels of linguistic structure in connected speech (syllables, phrases, and sentences), independent of the physical regularities of the acoustic stimulus, providing a plausible neural substrate of speech processing and language comprehension. In the current study, we investigated the effect of speech intelligibility on this brain activity as well as the underlying neural sources. Using magnetoencephalography (MEG), brain responses to natural speech and noise-vocoded (spectrally degraded) speech were measured in nineteen normal-hearing participants. Results showed that cortical MEG coherence to linguistic structure changed parametrically with the intelligibility of the speech signal. Cortical responses coherent with phrase and sentence structures were left-hemisphere lateralized, whereas responses coherent with syllable/word structure were bilateral. The enhancement of coherence to intelligible compared to unintelligible speech was also left-lateralized and localized to the parasylvian cortex. These results demonstrate that cortical responses to higher-level linguistic structures (phrase and sentence level) are sensitive to speech intelligibility. Since the noise-vocoded sentences simulate the auditory input provided by a cochlear implant, such objective neurophysiological measures have potential clinical utility for assessment of cochlear implant performance.
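The noise-vocoding manipulation mentioned above (filter speech into bands, extract each band's slow amplitude envelope, and re-impose it on band-limited noise, discarding spectral fine structure) can be sketched as below. The channel count, filter order, and band edges are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Spectrally degrade `signal` by replacing fine structure with noise.

    Band edges are spaced logarithmically between f_lo and f_hi; each
    band's temporal envelope (magnitude of the analytic signal) modulates
    band-limited Gaussian noise, and the modulated bands are summed.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                      # slow amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                             # envelope on noise carrier
    return out
```

Fewer channels yield less intelligible speech, which is how vocoding produces the parametric intelligibility manipulation described in the abstract and approximates the coarse spectral resolution of a cochlear implant.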


2017 ◽  
Vol 141 (5) ◽  
pp. 3816-3816
Author(s):  
Zhi Zhu ◽  
Ryota Miyauchi ◽  
Yukiko Araki ◽  
Masashi Unoki
