Vocal emotion identification by older listeners with hearing loss

2018 ◽  
Vol 144 (3) ◽  
pp. 1841-1841
Author(s):  
Huiwen Goy ◽  
Frank A. Russo
PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9118
Author(s):  
Sarah Griffiths ◽  
Shaun Kok Yew Goh ◽  
Courtenay Frazier Norbury

The ability to accurately identify and label emotions in the self and others is crucial for successful social interactions and good mental health. In the current study we tested the longitudinal relationship between early language skills and recognition of facial and vocal emotion cues in a representative UK population cohort with diverse language and cognitive skills (N = 369), including a large sample of children that met criteria for Developmental Language Disorder (DLD, N = 97). Language skills, but not non-verbal cognitive ability, at age 5–6 predicted emotion recognition at age 10–12. Children that met the criteria for DLD showed a large deficit in recognition of facial and vocal emotion cues. The results highlight the importance of language in supporting identification of emotions from non-verbal cues. Impairments in emotion identification may be one mechanism by which language disorder in early childhood predisposes children to later adverse social and mental health outcomes.
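To make the reported analysis concrete, here is a minimal sketch of the kind of longitudinal regression the abstract implies (predicting later emotion recognition from earlier language and non-verbal ability). This is not the authors' code: the variable names are hypothetical, the data are simulated for illustration, and the sketch assumes NumPy, pandas, and statsmodels are available.

# Illustrative sketch only, not the authors' analysis: a longitudinal
# regression predicting emotion recognition at age 10-12 from language
# and non-verbal cognitive ability measured at age 5-6.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 369  # cohort size reported in the abstract
language_t1 = rng.normal(size=n)   # language skills at age 5-6 (z-scored, simulated)
nonverbal_t1 = rng.normal(size=n)  # non-verbal cognitive ability at age 5-6 (simulated)
# Simulated outcome in which only language carries predictive weight,
# mirroring the pattern of results described in the abstract.
emotion_t2 = 0.5 * language_t1 + rng.normal(size=n)

df = pd.DataFrame({"emotion_t2": emotion_t2,
                   "language_t1": language_t1,
                   "nonverbal_t1": nonverbal_t1})
model = smf.ols("emotion_t2 ~ language_t1 + nonverbal_t1", data=df).fit()
print(model.summary())  # language_t1 should emerge as the significant predictor here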


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261354
Author(s):  
Mattias Ekberg ◽  
Josefine Andin ◽  
Stefan Stenfelt ◽  
Örjan Dahlström

Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has examined vocal emotion recognition using only verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with mild-to-moderate sensorineural hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together, these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
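The planned confusion analysis can be pictured with a small sketch. This is not the authors' code: the emotion label set and the trial data below are hypothetical, and the sketch assumes NumPy and scikit-learn are available.

# Illustrative sketch only: tallying a row-normalized confusion matrix
# for a forced-choice vocal emotion recognition task.
import numpy as np
from sklearn.metrics import confusion_matrix

EMOTIONS = ["anger", "fear", "happiness", "neutral"]  # hypothetical label set

# One entry per trial: the emotion actually expressed and the listener's choice.
expressed = ["anger", "anger", "fear", "fear", "happiness", "neutral"]
chosen    = ["anger", "fear",  "fear", "anger", "happiness", "neutral"]

cm = confusion_matrix(expressed, chosen, labels=EMOTIONS)
# Normalize each row so that cell [i, j] is the rate at which emotion i was
# reported as emotion j; the diagonal then holds per-emotion accuracy.
rates = cm / cm.sum(axis=1, keepdims=True)
print(np.round(rates, 2))

Comparing such matrices across amplified and non-amplified listening would show not only how overall accuracy changes, but which specific confusions amplification resolves or introduces.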


2021 ◽  
pp. 000348942110400
Author(s):  
Özlem Saatci ◽  
Hakan Geden ◽  
Halide Güneş Çiftçi ◽  
Zafer Çiftçi ◽  
Özge Arıcı Düz ◽  
...  

Objective: The main objective of this research was to evaluate the correlation between the severity of hearing loss and facial emotion recognition, a critical part of social cognition, in elderly patients. Methods: This prospective study comprised 85 individuals, divided into 3 groups. The first group consisted of 30 subjects older than 65 years with a bilateral pure-tone average (PTA) >30 dB HL. The second group consisted of 30 subjects older than 65 years with a PTA ≤30 dB HL. The third group consisted of 25 healthy subjects aged between 18 and 45 years with a PTA ≤25 dB HL. A Facial Emotion Identification Test and a Facial Emotion Discrimination Test were administered to all groups. Results: Elderly subjects with hearing loss performed significantly worse than the other 2 groups on the facial emotion identification and discrimination tests (P < .05). Notably, they identified the positive emotion, “happiness,” more accurately than the negative emotions. Conclusions: Our results suggest that increased age might be associated with decreased facial emotion identification and discrimination scores, which may deteriorate further in the presence of significant hearing loss.
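The PTA-based group cutoffs can be made concrete with a small sketch. This is not the authors' protocol: the abstract does not state which frequencies entered the average, so the conventional four-frequency average (500, 1000, 2000, 4000 Hz) used below is an assumption, and the audiogram values are hypothetical.

# Illustrative sketch only: computing a pure-tone average (PTA) and
# applying the study's group cutoff of >30 dB HL.
from statistics import mean

def pta(thresholds_db_hl: dict[int, float]) -> float:
    """Average hearing threshold (dB HL) across the assumed frequencies."""
    return mean(thresholds_db_hl[f] for f in (500, 1000, 2000, 4000))

audiogram = {500: 35, 1000: 40, 2000: 45, 4000: 55}  # hypothetical thresholds
print(pta(audiogram))        # 43.75
print(pta(audiogram) > 30)   # True -> meets the study's hearing-loss criterion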


2021 ◽  
Vol 15 ◽  
Author(s):  
Yuyang Wang ◽  
Lili Liu ◽  
Ying Zhang ◽  
Chaogang Wei ◽  
Tianyu Xin ◽  
...  

As elucidated by prior research, children with hearing loss have impaired vocal emotion recognition compared with their normal-hearing peers. Cochlear implants (CIs) have achieved significant success in facilitating hearing and speech abilities for people with severe-to-profound sensorineural hearing loss. However, due to the current limitations of neuroimaging tools, existing research has been unable to detail the neural processing of vocal emotion perception and recognition during early-stage CI use in infant and toddler CI users (ITCIs). In the present study, functional near-infrared spectroscopy (fNIRS) imaging was employed during preoperative and postoperative tests to describe the early neural processing of vocal emotion perception in prelingually deaf ITCIs and their recognition of four vocal emotions (fear, anger, happiness, and neutral). The results revealed that the cortical responses elicited by vocal emotional stimulation in the left pre-motor and supplementary motor area (pre-SMA), right middle temporal gyrus (MTG), and right superior temporal gyrus (STG) differed significantly between preoperative and postoperative tests, indicating a change in the neural processing associated with vocal emotional stimulation. Further results revealed that recognition of vocal emotional stimuli appeared in the right supramarginal gyrus (SMG) after CI implantation, and that the response elicited by fear was significantly greater than the response elicited by anger, indicating a negative bias. These findings indicate that the development of emotional bias and of emotional perception and recognition capabilities in ITCIs occurs on a different timeline, and involves different neural processing, from that of normal-hearing peers. To assess speech perception and production abilities, the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and Speech Intelligibility Rating (SIR) were used; the results revealed no significant differences between preoperative and postoperative tests. Finally, correlates of the neurobehavioral results were investigated: the preoperative response of the right SMG to anger stimuli was significantly and positively correlated with postoperative behavioral outcomes, whereas the postoperative response of the right SMG to anger stimuli was significantly and negatively correlated with postoperative behavioral outcomes.


2020 ◽  
Vol 148 (4) ◽  
pp. 2711-2711
Author(s):  
Stephanie Strong ◽  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Valeriy Shafiro

Author(s):  
G.J. Spector ◽  
C.D. Carr ◽  
I. Kaufman Arenberg ◽  
R.H. Maisel

All studies on primary neural degeneration in the cochlea have evaluated the end stages of degeneration or the indiscriminate destruction of both sensory cells and cochlear neurons. We have developed a model which selectively simulates the dystrophic changes denoting cochlear neural degeneration while sparing the cochlear hair cells. Such a model can be used to define more precisely the mechanism of presbycusis, or hearing loss in aging man. Twenty-two pigmented guinea pigs (200-250 gm) were perfused by the perilymphatic route as live preparations using fluorocitrate in various concentrations (15-250 µg/cc) and at different incubation times (5-150 minutes). The barium salt of DL-fluorocitrate, (C6H4O7F)2Ba3, was reacted with 1.0 N sulfuric acid to precipitate the barium as a sulfate. The perfusion medium was prepared, just prior to use, as follows: sodium phosphate buffer, 0.2 M, pH 7.4 = 9 cc; fluorocitrate = 15-200 mg/cc; and sucrose = 0.2 M.
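For clarity, the barium-removal step described above can be written as a balanced equation. This is a reconstruction from the formula given in the abstract, assuming complete precipitation of the barium as insoluble barium sulfate, leaving free fluorocitric acid in solution:

(C6H4O7F)2Ba3 + 3 H2SO4 → 3 BaSO4↓ + 2 C6H7FO7 (fluorocitric acid)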


1978 ◽  
Vol 9 (1) ◽  
pp. 24-28 ◽  
Author(s):  
Richard H. Nodar

The teachers of 2231 elementary school children were asked to identify those with known or suspected hearing problems. Following screening, the data were compared. Teachers identified 5% of the children as hearing-impaired, while screening identified only 3%; the two procedures agreed on 1%. Subsequent to the teacher interviews, rescreening and tympanometry were conducted. These procedures indicated that teacher screening and tympanometry were in agreement on 2% of the total sample, or 50% of the hearing-loss group. It was concluded that teachers could supplement audiometry, particularly when otoscopy and tympanometry are not available.


1981 ◽  
Vol 12 (1) ◽  
pp. 26-35 ◽  
Author(s):  
Donald L. McCanna ◽  
Giacinto DeLapa

This report reviews 27 cases of children exhibiting functional hearing loss. The study reveals that most students were in the upper elementary grades and that the majority were female. These subjects were functioning below their ability level in school and were usually in conflict with school, home, or peers. Tests were selected on the basis of their usefulness for early identification. The subjects' oral and behavioral responses are presented, as well as ways of resolving the hearing problem. Some helpful counseling techniques are also presented.

