vocal emotion
Recently Published Documents

TOTAL DOCUMENTS: 148 (FIVE YEARS: 43)
H-INDEX: 19 (FIVE YEARS: 2)

Cognition ◽ 2022 ◽ Vol 219 ◽ pp. 104967
Author(s): Christine Nussbaum ◽ Celina I. von Eiff ◽ Verena G. Skuk ◽ Stefan R. Schweinberger

PLoS ONE ◽ 2022 ◽ Vol 17 (1) ◽ pp. e0261354
Author(s): Mattias Ekberg ◽ Josefine Andin ◽ Stefan Stenfelt ◽ Örjan Dahlström

Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. By including both sentences and non-verbal expressions, we aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally. Furthermore, some of the studies which have concluded that individuals with mild-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
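
A minimal sketch of how such confusion patterns can be tabulated from forced-choice responses; the emotion labels and trial data below are illustrative placeholders, not the study's stimuli or results:

```python
# Hedged sketch: tabulating confusion patterns in a forced-choice vocal
# emotion task. All labels and trials are hypothetical.
import numpy as np

emotions = ["anger", "fear", "happiness", "sadness"]  # assumed label set
idx = {e: i for i, e in enumerate(emotions)}

# (intended emotion, listener's response) pairs; illustrative only.
trials = [("anger", "anger"), ("anger", "fear"), ("fear", "fear"),
          ("fear", "sadness"), ("happiness", "happiness"), ("sadness", "sadness")]

conf = np.zeros((len(emotions), len(emotions)))
for intended, response in trials:
    conf[idx[intended], idx[response]] += 1

# Row-normalize: each row gives the rate at which an intended emotion is
# labelled as each response option (the diagonal is per-emotion accuracy).
rates = conf / conf.sum(axis=1, keepdims=True)
print(rates)
```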


2021 ◽ Vol 12
Author(s): Lei Li

In order to improve students' learning outcomes, more and more universities favor foreign language teachers who are native speakers of English. Based on an analysis and summary of the current state of emotion recognition research, this paper proposes that, in college English classroom teaching, foreign language teachers can reduce communication barriers with Chinese students through emotion recognition. Drawing on a literature review and a field investigation, this study identified four factors that influence the emotion recognition of foreign language teachers: interactive action, facial expression, vocal emotion, and body posture. In our view, teachers can adjust these four factors during instruction to achieve a better teaching effect and thereby improve students' learning efficiency. The Analytic Hierarchy Process (AHP) was chosen as the research method. After building the analysis model, we distributed a questionnaire through Questionnaire Star and obtained 12 valid responses. After determining the relative importance of the factors through pairwise comparison, we draw the following conclusion: the factors influencing the emotion recognition of foreign language teachers rank, in descending order, interactive action (43%), facial expression (28%), vocal emotion (21%), and body posture (9%). Our research adds to the body of knowledge on emotion recognition among college English teachers. It also helps students improve their grasp of course content based on the emotions of foreign English lecturers. Based on the findings, we recommend that foreign language teachers in college English classrooms adjust their interactive behaviors, facial expressions, and vocal emotions in response to different instructional materials and emphases.
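
As a rough illustration of the AHP weighting step described above, the sketch below derives priority weights from a Saaty-scale pairwise comparison matrix via the principal eigenvector and checks judgment consistency. The comparison values are hypothetical, not the questionnaire data, so the resulting weights only approximate the reported percentages:

```python
# Minimal AHP weighting sketch (hypothetical pairwise judgments).
import numpy as np

factors = ["interactive action", "facial expression", "vocal emotion", "body posture"]

# Saaty-scale pairwise comparison matrix: A[i, j] = importance of factor i
# relative to factor j. Values are illustrative placeholders.
A = np.array([
    [1.0, 2.0,     2.0, 5.0],
    [0.5, 1.0,     1.5, 3.0],
    [0.5, 1 / 1.5, 1.0, 2.0],
    [0.2, 1 / 3.0, 0.5, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CR < 0.1 is the conventional acceptance threshold.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
ri = 0.90                             # Saaty's random index for n = 4
cr = ci / ri
print({f: round(float(x), 3) for f, x in zip(factors, w)}, "CR =", round(cr, 3))
```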


2021 ◽ Vol 8 (11)
Author(s): Leonor Neves ◽ Marta Martins ◽ Ana Isabel Correia ◽ São Luís Castro ◽ César F. Lima

The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness, plus neutrality), as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.
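
A minimal sketch of the kind of covariate-adjusted test described above (the association between prosody recognition and adjustment with cognitive ability, age, sex and parental education held constant); all column names and data are simulated placeholders, not the study's dataset:

```python
# Hedged sketch of a covariate-adjusted association test, in the spirit of
# the analysis described above. All variables and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 141  # matches the reported sample size, data are simulated
df = pd.DataFrame({
    "prosody_recognition": rng.normal(size=n),
    "cognitive_ability": rng.normal(size=n),
    "age": rng.uniform(6, 8, size=n),
    "sex": rng.choice(["f", "m"], size=n),
    "parental_education": rng.integers(1, 5, size=n),
})
df["adjustment"] = 0.3 * df["prosody_recognition"] + rng.normal(size=n)

# Does prosody recognition predict adjustment with covariates held constant?
model = smf.ols("adjustment ~ prosody_recognition + cognitive_ability"
                " + age + C(sex) + parental_education", data=df).fit()
print(model.params["prosody_recognition"], model.pvalues["prosody_recognition"])
```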


2021 ◽ Vol 15
Author(s): Yuyang Wang ◽ Lili Liu ◽ Ying Zhang ◽ Chaogang Wei ◽ Tianyu Xin ◽ ...

As elucidated by prior research, children with hearing loss have impaired vocal emotion recognition compared with their normal-hearing peers. Cochlear implants (CIs) have achieved significant success in facilitating hearing and speech abilities for people with severe-to-profound sensorineural hearing loss. However, due to current limitations in neuroimaging tools, existing research has been unable to detail the neural processing underlying the perception and recognition of vocal emotions during early-stage CI use in infant and toddler CI users (ITCIs). In the present study, functional near-infrared spectroscopy (fNIRS) imaging was employed during preoperative and postoperative tests to describe the early neural processing of perception in prelingually deaf ITCIs and their recognition of four vocal emotions (fear, anger, happiness, and neutral). The results revealed that the cortical responses elicited by vocal emotional stimulation in the left pre-motor and supplementary motor area (pre-SMA), right middle temporal gyrus (MTG), and right superior temporal gyrus (STG) differed significantly between preoperative and postoperative tests, indicating differences in the neural processing associated with vocal emotional stimulation before and after implantation. Further results revealed that the recognition of vocal emotional stimuli appeared in the right supramarginal gyrus (SMG) after CI implantation, and that the response elicited by fear was significantly greater than the response elicited by anger, indicating a negative bias. These findings suggest that the development of emotional bias and of emotional perception and recognition capabilities in ITCIs occurs on a different timeline, and involves different neural processing, from those of normal-hearing peers. To assess speech perception and production abilities, the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and Speech Intelligibility Rating (SIR) were used; the results revealed no significant differences between preoperative and postoperative tests. Finally, correlations between the neural and behavioral results were investigated: the preoperative response of the right SMG to anger stimuli was significantly and positively correlated with postoperative behavioral outcomes, whereas the postoperative response of the right SMG to anger stimuli was significantly and negatively correlated with postoperative behavioral outcomes.
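
The preoperative-versus-postoperative comparison and brain-behaviour correlation logic can be sketched as below; the channel responses and outcome scores are simulated placeholders, not the study's fNIRS data or analysis pipeline:

```python
# Hedged sketch of a paired pre/post comparison plus a brain-behaviour
# correlation. All values are simulated; sample size is hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_children = 20                                 # hypothetical sample size
pre = rng.normal(0.0, 1.0, n_children)          # e.g., right SMG response, pre-CI
post = pre + rng.normal(0.5, 1.0, n_children)   # post-CI responses
behaviour = rng.normal(size=n_children)         # e.g., IT-MAIS or SIR scores

# Paired test: did the cortical response change after implantation?
t, p = stats.ttest_rel(post, pre)

# Brain-behaviour association: preoperative response vs. behavioural outcome.
r, p_r = stats.pearsonr(pre, behaviour)
print(f"paired t = {t:.2f} (p = {p:.3f}); r = {r:.2f} (p = {p_r:.3f})")
```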


2021
Author(s): Nicholas Edward Souter ◽ Kristen A Lindquist ◽ Beth Jefferies

According to a constructionist model of emotion, conceptual knowledge plays a foundational role in emotion perception; reduced availability of relevant conceptual knowledge should therefore impair emotion perception. Conceptual deficits can follow both degradation of semantic knowledge (e.g., semantic ‘storage’ deficits in semantic dementia) and deregulation of retrieval (e.g., semantic ‘access’ deficits in semantic aphasia). While emotion recognition deficits are known to accompany degraded conceptual knowledge, less is known about the impact of semantic access deficits. Here, we examined emotion perception and categorization tasks in patients with semantic aphasia, who have difficulty accessing semantic information in a flexible and controlled fashion following left-hemisphere stroke. In Study 1, participants were asked to sort faces according to the emotion they portrayed, with numbers, written labels and picture examples each provided as category anchors across tasks. Patients with semantic aphasia made more errors than age-matched controls and showed a larger benefit from word anchors, which reduced the need to internally constrain categorization. They successfully sorted portrayals that differed in valence (positive vs. negative) but had difficulty categorizing different negative emotions. In Study 2, participants matched facial emotion portrayals to written labels following vocal emotion prosody cues, miscues, or no cues. Patients showed overall poorer performance and benefited from cue trials relative to within-valence miscue trials. The same effect was seen in controls, who also showed deleterious effects of within-valence miscues relative to no-cue trials. Overall, we found that patients with deregulated semantic retrieval have deficits in emotion perception, but that word anchors and cue conditions can facilitate emotion perception by increasing access to relevant emotion concepts and reducing reliance on semantic control. Semantic control may be of particular importance in emotion perception when it is necessary to interpret ambiguous inputs, or when there is interference between conceptually similar emotional states. These findings extend constructionist accounts of emotion to encompass difficulties in controlled semantic retrieval.


2021 ◽ Vol 13 (3) ◽ pp. 211-224
Author(s): Christine Nussbaum ◽ Stefan R. Schweinberger

Links between musicality and vocal emotion perception skills have only recently emerged as a focus of study. Here we review the current evidence for and against such links. Based on a systematic literature search, we identified 33 studies that addressed (a) vocal emotion perception in musicians and nonmusicians, (b) vocal emotion perception in individuals with congenital amusia, (c) the role of individual differences (e.g., musical interests, psychoacoustic abilities), or (d) effects of musical training interventions in both the normal-hearing population and cochlear implant users. Overall, the evidence supports a link between musicality and vocal emotion perception abilities. We discuss potential factors moderating the link between emotions and music, and possible directions for future research.


2021 ◽ Vol 11 (1)
Author(s): Leonardo Ceravolo ◽ Sascha Frühholz ◽ Jordan Pierce ◽ Didier Grandjean ◽ Julie Péron

Until recently, research on the brain networks underlying the decoding and processing of emotional voice prosody focused on modulations of primary and secondary auditory, ventral frontal and prefrontal cortices, and the amygdala. A specific role for the basal ganglia and cerebellum has only recently come into the spotlight. In the present study, we aimed to characterize the role of these subcortical brain regions in vocal emotion processing, at the level of both brain activation and functional and effective connectivity, using high-resolution functional magnetic resonance imaging. Variance explained by low-level acoustic parameters (fundamental frequency, voice energy) was also modelled. Whole-brain data revealed the expected contributions of the temporal and frontal cortices, basal ganglia and cerebellum to vocal emotion processing, while functional connectivity analyses highlighted correlations between the basal ganglia and cerebellum, especially for angry voices. Seed-to-seed and seed-to-voxel effective connectivity revealed direct connections within the basal ganglia, especially between the putamen and external globus pallidus, and between the subthalamic nucleus and the cerebellum. Our results speak in favour of crucial contributions of the basal ganglia, especially the putamen, external globus pallidus and subthalamic nucleus, and of several cerebellar lobules and nuclei to the efficient decoding of and response to vocal emotions.
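
For readers curious how the two low-level acoustic parameters mentioned above can be obtained, here is a hedged sketch using librosa's pYIN pitch tracker and RMS energy; the file name is a placeholder, and this is not the authors' actual pipeline:

```python
# Sketch: extracting fundamental frequency and voice energy as per-stimulus
# covariates. The audio file is a hypothetical placeholder.
import librosa
import numpy as np

y, sr = librosa.load("voice_stimulus.wav", sr=None)  # hypothetical stimulus

# Fundamental frequency via probabilistic YIN; unvoiced frames return NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Frame-wise voice energy (root-mean-square amplitude).
rms = librosa.feature.rms(y=y)[0]

# Summaries that could enter a GLM as nuisance regressors.
print("mean f0 (Hz):", np.nanmean(f0), "mean RMS:", rms.mean())
```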


2021 ◽ pp. 102690
Author(s): Marine Thomasson ◽ Damien Benis ◽ Arnaud Saj ◽ Philippe Voruz ◽ Roberta Ronchi ◽ ...

2021
Author(s): Kristina Woodard ◽ Rista C. Plate ◽ Michele Morningstar ◽ Adrienne Wood ◽ Seth D. Pollak
