Emotion Representation
Recently Published Documents


TOTAL DOCUMENTS: 38 (five years: 7)

H-INDEX: 8 (five years: 0)

2021 ◽ Vol 12 ◽ Author(s): Huiyun Li, Luyan Ji, Qitian Li, Wenfeng Chen

Individuals can perceive the mean emotion or mean identity of a group of faces. It has been argued that individual representations are discarded when a mean representation is extracted: the "element-independent assumption" holds that extracting a mean representation does not depend on recognizing or remembering individual items, whereas the "element-dependent assumption" proposes that extracting a mean representation is closely tied to the processing of individual items. The processing mechanisms underlying mean and individual representations thus remain unclear. The present study used a classic member-identification paradigm, manipulating exposure time and set size, to investigate how attentional resources allocated to individual faces affect the processing of both the mean emotion representation and the individual representations in a set, and to examine the relationship between the two types of representation. The results showed that while the precision of individual representations was affected by attentional resources, the precision of the mean emotion representation did not change with them. These results indicate that two different pathways may exist for extracting a mean emotion representation and individual representations, and that extraction of the mean emotion representation may have higher priority. Moreover, we found that individual faces in a group were processed to a certain extent even at extremely short exposure times: the precision of individual representations was relatively poor, but they were not discarded.
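The statistical intuition behind this kind of ensemble averaging, that a mean computed over noisy individual percepts is more precise than any single percept, can be shown in a short simulation. This is a minimal sketch of the averaging account only, not the authors' member-identification paradigm; the noise level and set sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(set_size, true_emotions, noise_sd):
    """One trial: each face's emotion is perceived with independent noise.
    Returns the error of the mean-emotion estimate and the mean error of
    the individual estimates."""
    percepts = true_emotions + rng.normal(0.0, noise_sd, size=set_size)
    mean_error = abs(percepts.mean() - true_emotions.mean())
    individual_error = np.abs(percepts - true_emotions).mean()
    return mean_error, individual_error

for set_size in (4, 8, 16):
    true_emotions = rng.uniform(-1.0, 1.0, size=set_size)  # e.g., valence values
    errors = np.array([simulate_trial(set_size, true_emotions, noise_sd=0.5)
                       for _ in range(10_000)])
    print(f"set size {set_size:2d}: mean-representation error {errors[:, 0].mean():.3f}, "
          f"mean individual error {errors[:, 1].mean():.3f}")
```

Averaging cancels independent noise (error shrinks roughly as 1/sqrt(n)), which is consistent with a mean representation staying precise while individual representations degrade.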


2021 ◽ Vol 8 (10) ◽ Author(s): Christina O. Carlisi, Kyle Reed, Fleur G. L. Helmink, Robert Lachlan, Darren P. Cosker, ...

Emotional facial expressions critically impact social interactions and cognition. However, emotion research to date has generally relied on the assumption that people represent categorical emotions in the same way, using standardized stimulus sets and overlooking important individual differences. To address this problem, we developed and tested a task that uses genetic algorithms to derive assumption-free, participant-generated emotional expressions. One hundred and five participants generated subjective representations of happy, angry, fearful, and sad faces. Population-level consistency was observed for happy faces, but fearful and sad faces showed a high degree of variability. High test–retest reliability was observed across all emotions. A separate group of 108 individuals accurately identified the happy and angry faces from the first study, while the fearful and sad faces were commonly misidentified. These findings are an important first step towards understanding individual differences in emotion representation, with the potential to reconceptualize the way we study atypical emotion processing in future research.
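For readers unfamiliar with genetic algorithms as a stimulus-generation tool, the sketch below shows the general loop under strong simplifying assumptions: a face is reduced to a small parameter vector, and the participant's selections, simulated here as proximity to a hidden template vector, serve as the fitness signal. The parameter count, population size, selection fraction, and mutation rate are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PARAMS, POP_SIZE, GENERATIONS = 10, 20, 40

# Hidden vector standing in for the participant's internal emotion template.
target = rng.uniform(-1, 1, N_PARAMS)

def participant_selects(population):
    """Simulated participant: keep the quarter of faces closest to the template."""
    distances = np.linalg.norm(population - target, axis=1)
    return population[np.argsort(distances)[: POP_SIZE // 4]]

population = rng.uniform(-1, 1, (POP_SIZE, N_PARAMS))
for _ in range(GENERATIONS):
    parents = participant_selects(population)
    pairs = rng.integers(0, len(parents), (POP_SIZE, 2))
    # Recombine random parent pairs, then mutate with Gaussian noise.
    population = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2
    population += rng.normal(0, 0.05, population.shape)

print("distance to template:",
      round(float(np.linalg.norm(population.mean(0) - target)), 3))
```

Over generations the population converges on the participant's subjective representation, which is what allows the evolved faces to be compared across individuals.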


2021 ◽ Vol 2021 ◽ pp. 1-11 ◽ Author(s): Xiaodong Liu, Songyang Li, Miao Wang

Context, such as scenes and objects, plays an important role in video emotion recognition, and recognition accuracy improves when context information is incorporated. Although previous research has considered context, it has generally ignored the fact that different images may carry different emotional cues. To address the differences in emotional content across modalities and across images, this paper proposes a hierarchical attention-based multimodal fusion network for video emotion recognition, consisting of a multimodal feature extraction module and a multimodal feature fusion module. The feature extraction module has three subnetworks that extract features from facial, scene, and global images. Each subnetwork consists of two branches: the first extracts the features of a modality, and the second generates an emotion score for each image. The features and emotion scores of all images within a modality are aggregated into that modality's emotion feature. The fusion module takes the multimodal features as input and generates an emotion score for each modality. Finally, the features and emotion scores of the modalities are aggregated to produce the final emotion representation of the video. Experimental results show that the proposed method is effective on a video emotion recognition dataset.
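A minimal PyTorch sketch of the two-level, attention-weighted aggregation the abstract describes: per-image features are pooled into a modality feature using learned per-image emotion scores, and the modality features are pooled the same way into a video-level representation. The module names, feature dimension, and use of softmax attention are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentiveAggregator(nn.Module):
    """Weights items by a learned emotion score, then pools them."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # second branch: emotion score per item

    def forward(self, feats):           # feats: (n_items, dim)
        weights = torch.softmax(self.score(feats), dim=0)
        return (weights * feats).sum(dim=0)

class HierarchicalFusion(nn.Module):
    """Level 1: images -> modality feature. Level 2: modalities -> video feature."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.image_level = nn.ModuleDict(
            {m: AttentiveAggregator(dim) for m in ("face", "scene", "global")})
        self.modality_level = AttentiveAggregator(dim)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, inputs):          # inputs: {modality: (n_frames, dim)}
        modality_feats = torch.stack(
            [self.image_level[m](x) for m, x in inputs.items()])
        return self.classifier(self.modality_level(modality_feats))

model = HierarchicalFusion(dim=128, n_classes=7)
frames = {m: torch.randn(16, 128) for m in ("face", "scene", "global")}
print(model(frames).shape)  # torch.Size([7])
```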


2020 ◽ Author(s): Meng Liu, Yaocong Duan, Robin A. A. Ince, Chaona Chen, Oliver G. B. Garrod, ...

One of the longest-standing debates in the emotion sciences is whether emotions are represented as discrete categories, such as happy or sad, or as continuous fundamental dimensions, such as valence and arousal. Theories of communication make specific predictions about the facial expression signals that would represent emotions as either discrete or dimensional messages. Here, we address this debate by testing whether facial expressions of emotion categories are embedded in a dimensional space of affective signals, leading to multiplexed communication of affective information. Using a data-driven method based on human perception, we modelled the facial expressions representing the six classic emotion categories – happy, surprise, fear, disgust, anger and sad – and those representing the dimensions of valence and arousal. We then evaluated their embedding by mapping and validating the facial expression categories onto the valence-arousal space. Results showed that facial expressions of the six classic emotion categories formed dissociable clusters within the valence-arousal space, each located in a semantically congruent region (e.g., happy facial expressions fell in positively valenced regions). Crucially, we further demonstrated that the embedding generalizes beyond the six classic categories, using a broader set of 19 complex emotion categories (e.g., delighted, fury, and terrified). Together, our results show that facial expressions of emotion categories comprise specific combinations of valence- and arousal-related face movements, suggesting multiplexed signalling of categorical and dimensional affective information. Our results unite current theories of emotion representation and form the basis of a new framework for multiplexed communication of affective information.
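One way to make "dissociable clusters in valence-arousal space" concrete is a nearest-centroid test on synthetic data. The category centres below are guesses chosen only to be consistent with the abstract (e.g., happy in a positively valenced region); they are not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical (valence, arousal) centres for the six classic categories.
centres = {"happy": (0.8, 0.5), "surprise": (0.2, 0.9), "fear": (-0.6, 0.8),
           "disgust": (-0.7, 0.3), "anger": (-0.8, 0.6), "sad": (-0.6, -0.5)}

samples, labels = [], []
for name, centre in centres.items():
    samples.append(rng.normal(centre, 0.15, size=(60, 2)))  # per-participant models
    labels += [name] * 60
X = np.vstack(samples)

# Nearest-centroid classification as a simple index of cluster dissociability.
centroids = {n: X[[l == n for l in labels]].mean(0) for n in centres}
pred = [min(centroids, key=lambda n: np.linalg.norm(x - centroids[n])) for x in X]
acc = float(np.mean([p == l for p, l in zip(pred, labels)]))
print("nearest-centroid accuracy:", round(acc, 2))
```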


2020 ◽ Author(s): Giuseppe Marrazzo, Maarten J. Vaessen, Beatrice de Gelder

Abstract: Recent studies have provided an increasingly detailed understanding of how visual object categories such as faces or bodies are represented in the brain. What is less clear is how a given task impacts the representation of the object category and of its attributes. Using functional MRI (fMRI), we measured BOLD responses while participants viewed whole-body expressions and performed either an explicit (emotion) or an implicit (shape) recognition task. Our results, based on multivariate methods, show that the type of task is the strongest determinant of brain activity and can be decoded in EBA, VLPFC, and IPL. Brain activity was higher in the explicit task condition in the first two areas, with no evidence of emotion specificity. This pattern indicates that during explicit recognition of a body expression, the body category representation may be strengthened while emotion- and action-related activity is suppressed. Taken together, these results indicate that there is substantial task-dependent activity in prefrontal and inferior parietal as well as ventral visual areas, and they point to the importance of the task both when investigating category selectivity and when studying the brain correlates of affective processes.
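The decoding claim rests on cross-validated multivariate pattern analysis (MVPA). Below is a hedged sketch of such a pipeline on simulated data; the voxel count, effect size, and the choice of logistic regression are assumptions, not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
N_TRIALS, N_VOXELS = 120, 200                 # hypothetical ROI size

# Simulated patterns: 0 = implicit (shape) task, 1 = explicit (emotion) task.
task = rng.integers(0, 2, N_TRIALS)
X = rng.normal(0, 1, (N_TRIALS, N_VOXELS))
X[:, :20] += 0.5 * task[:, None]              # weak task-dependent signal in 20 voxels

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, task, cv=5)
print(f"task decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Above-chance accuracy on held-out trials is what licenses statements like "the type of task can be decoded in EBA, VLPFC and IPL".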


2020 ◽ Vol 11 (2) ◽ pp. 1-18 ◽ Author(s): Rana Fathalla

Emotion modeling has gained attention for almost two decades now, owing to the rapid growth of affective computing (AC), which aims to enable devices and computers to detect and respond to end-users' emotions. Despite considerable effort and numerous attempts to build different models of emotion, emotion modeling remains an art, lacking consistency and clarity about what the term exactly means. This review deconstructs the vagueness of 'emotion modeling' by discussing its various types and categories: computational models, with their subcategories of emotion generation and emotion effects, and emotion representation models, with their subcategories of categorical, dimensional, and componential models. The review also covers the applications associated with each type of model: artificial intelligence and robotics architectures and computer-human interaction for the computational models, and emotion classification and affect-aware applications, such as video games and tutoring systems, for the emotion representation models.
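The three families of representation models can be made concrete as data structures. A minimal sketch; the specific fields, especially the appraisal components, are illustrative choices rather than a standard fixed by the review.

```python
from dataclasses import dataclass

# Categorical: one label from a discrete set (e.g., basic emotions).
BASIC_EMOTIONS = {"happy", "sad", "anger", "fear", "disgust", "surprise"}

@dataclass
class CategoricalEmotion:
    label: str            # expected to be one of BASIC_EMOTIONS

# Dimensional: a point in a continuous space such as valence-arousal.
@dataclass
class DimensionalEmotion:
    valence: float        # -1 (negative) .. +1 (positive)
    arousal: float        # -1 (calm) .. +1 (excited)

# Componential: scores on appraisal components, in the spirit of appraisal theories.
@dataclass
class ComponentialEmotion:
    novelty: float
    pleasantness: float
    goal_relevance: float
    coping_potential: float

print(CategoricalEmotion("happy"))
print(DimensionalEmotion(valence=0.7, arousal=0.4))
```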


2019 ◽ Vol 17 (3) ◽ pp. 299-303 ◽ Author(s): Björn Schuller

Purpose: Uncertainty is an under-respected issue in the automatic assessment of human emotion by machines. This paper highlights existing approaches to measuring such uncertainty and identifies further research needs.

Design/methodology/approach: The discussion is based on a literature review.

Findings: Technical solutions for measuring uncertainty in automatic emotion recognition (AER) exist, but they need to be extended to cover a range of so-far underrepresented sources of uncertainty and then integrated into systems available to general users.

Research limitations/implications: Not all sources of uncertainty in AER, including emotion representation and annotation, can be touched upon in this communication.

Practical implications: AER systems should provide more meaningful and complete information about the uncertainty underlying their estimates, and the limits of their applicability should be communicated to users.

Social implications: Users of AER technology will become aware of its limitations, potentially leading to fairer usage in crucial application contexts.

Originality/value: There is no previous discussion that includes the technical viewpoint on extended uncertainty measurement in AER.
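One simple, widely used way to attach an uncertainty estimate to an AER prediction, in the spirit of the measurement approaches this paper surveys, is the entropy of the predicted class distribution. The logits below are fabricated for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predictive_entropy(probs):
    """Entropy of the class distribution: a per-estimate uncertainty score."""
    return float(-(probs * np.log(probs + 1e-12)).sum())

# Hypothetical logits from an AER model over four emotion classes.
for name, logits in [("confident", [4.0, 0.1, 0.2, 0.1]),
                     ("ambiguous", [1.1, 1.0, 0.9, 1.0])]:
    p = softmax(np.array(logits))
    print(f"{name}: predicted class {p.argmax()}, entropy {predictive_entropy(p):.2f} nats")
```

Reporting such a score alongside the predicted emotion is one concrete form of the "more meaningful and complete information provision" called for above; richer treatments (ensembles, Monte Carlo dropout, annotator disagreement) capture sources of uncertainty this score ignores.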


2018 ◽ Author(s): Naoko Koide-Majima, Tomoya Nakai, Shinji Nishimoto

Abstract: We experience a rich variety of emotions in daily life. While previous emotion studies focused on only a few predefined, restricted emotional states, a recent psychological study found a rich emotional representation in humans using a large set of diverse human-behavioural data. However, no representation of emotional states in the brain has been established at such a scale using emotion labels. To examine this, we used functional MRI to measure blood-oxygen-level-dependent (BOLD) responses while human subjects watched three hours of emotion-inducing movies labelled with 10,800 ratings covering each of 80 emotion categories. By quantifying canonical correlations between BOLD responses and emotion ratings for the movie scenes, we found 25 significant dimensions of emotion representation in the brain. We then constructed a semantic space of this emotion representation and mapped the emotion categories onto the cortical surface, finding that the emotion categories were smoothly represented from unimodal to transmodal regions. This paper presents a cortical representation of a rich variety of emotion categories, covering most of the emotional states suggested by traditional theories.
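The central computation here is canonical correlation analysis (CCA) between voxel responses and emotion ratings. A toy sketch using scikit-learn on simulated data that share a latent structure; all sizes are placeholders far smaller than the study's.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
N_SCENES, N_VOXELS, N_EMOTIONS = 300, 50, 80   # toy sizes

# Simulated data: brain responses and ratings share five latent dimensions.
latent = rng.normal(size=(N_SCENES, 5))
bold = latent @ rng.normal(size=(5, N_VOXELS)) + 0.5 * rng.normal(size=(N_SCENES, N_VOXELS))
ratings = latent @ rng.normal(size=(5, N_EMOTIONS)) + 0.5 * rng.normal(size=(N_SCENES, N_EMOTIONS))

cca = CCA(n_components=5)
bold_c, ratings_c = cca.fit_transform(bold, ratings)
corrs = [float(np.corrcoef(bold_c[:, i], ratings_c[:, i])[0, 1]) for i in range(5)]
print("canonical correlations:", np.round(corrs, 2))
```

In the study, the number of canonical dimensions surviving a significance test (25) is what defines the dimensionality of the brain's emotion representation.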


Author(s): Sicheng Zhao, Guiguang Ding, Qingming Huang, Tat-Seng Chua, Björn W. Schuller, ...

Images can convey rich semantics and induce strong emotions in viewers. Recently, with the explosive growth of visual data, extensive research effort has been dedicated to affective image content analysis (AICA). In this paper, we comprehensively review state-of-the-art methods with respect to two main challenges: the affective gap and perception subjectivity. We begin with an introduction to the key emotion representation models that have been widely employed in AICA, and briefly describe the existing datasets available for evaluation. We then summarize and compare representative approaches to emotion feature extraction, personalized emotion prediction, and emotion distribution learning. Finally, we discuss some future research directions.
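Of the approaches surveyed, emotion distribution learning is the easiest to show in code: the model predicts a full probability distribution over emotion classes and is trained against per-image rating distributions with a KL-divergence loss. A minimal sketch, with random tensors standing in for real features and annotations.

```python
import torch
import torch.nn.functional as F

N_CLASSES = 8
logits = torch.randn(4, N_CLASSES, requires_grad=True)    # model outputs for 4 images
target = torch.softmax(torch.randn(4, N_CLASSES), dim=1)  # annotators' rating distributions

# KL(target || prediction); kl_div expects log-probabilities as input.
loss = F.kl_div(F.log_softmax(logits, dim=1), target, reduction="batchmean")
loss.backward()
print(f"KL loss: {loss.item():.3f}")
```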

