From neutral to emotional: The Facial Expression Intensity Test (FExIT)

2021 ◽  
Author(s):  
Zoé Cayol ◽  
Tatjana Nazir

The Facial Expression Intensity Test (FExIT) measures the level of perceived intensity of emotional cues in a given facial expression. The test consists of a series of faces taken from the NimStim set (Tottenham et al., 2009) whose expressions vary from a neutral expression to one of the six basic emotions, with ten levels of morphing. The FExIT is validated by means of an emotion-related ERP component (i.e., the early posterior negativity, EPN), which shows a systematic modulation of its amplitude with the level of expression intensity. The participant’s task in this test is to identify the expressed emotion among 8 options (i.e., the six basic emotions, a “neutral” and an “I don't know” option). The task is not timed. The score of the FExIT is either the proportion of correctly identified emotions or the proportion of attributions of an emotion to the facial stimulus (i.e., the attribution of any emotion but not “neutral” or “I don’t know”). Given that the facial expression intensity varies continuously from low to high, the FExIT allows the determination and comparison of threshold levels for correct responses. The freely accessible set of 700 facial stimuli for the test is divided into two equivalent face lists, which further allows for pretest/posttest experimental designs. The test takes approximately 25 min to complete and is simple to administer. The FExIT is thus a useful instrument for testing different experimental settings and populations.
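
As a rough illustration of the two scoring schemes described above, the following Python sketch computes both the proportion of correctly identified emotions and the proportion of emotion attributions from a list of trial responses. The trial data structure and field names are hypothetical and are not part of the published test materials.

```python
# Minimal sketch of the two FExIT scoring schemes described above.
# The trial format (dicts with "target" and "response" fields) is an
# assumption for illustration; it is not part of the published test.

EMOTIONS = {"anger", "disgust", "fear", "happiness", "sadness", "surprise"}
NON_EMOTION = {"neutral", "dont_know"}

def fexit_scores(trials):
    """Return (proportion correct, proportion of emotion attributions).

    Each trial is a dict such as {"target": "fear", "response": "surprise"},
    where "target" is the morphed emotion and "response" is one of the
    eight options (six emotions, "neutral", "dont_know").
    """
    n = len(trials)
    correct = sum(t["response"] == t["target"] for t in trials)
    attributed = sum(t["response"] in EMOTIONS for t in trials)
    return correct / n, attributed / n

# Example usage with three hypothetical trials
trials = [
    {"target": "fear", "response": "fear"},        # correct
    {"target": "anger", "response": "dont_know"},  # no attribution
    {"target": "sadness", "response": "fear"},     # attributed but wrong
]
print(fexit_scores(trials))  # (0.333..., 0.666...)
```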

2015 ◽  
Vol 21 (7) ◽  
pp. 568-572 ◽  
Author(s):  
Isabelle Chiu ◽  
Regina I. Gfrörer ◽  
Olivier Piguet ◽  
Manfred Berres ◽  
Andreas U. Monsch ◽  
...  

Abstract. The importance of including measures of emotion processing, such as tests of facial emotion recognition (FER), as part of a comprehensive neuropsychological assessment is being increasingly recognized. In clinical settings, FER tests need to be sensitive, short, and easy to administer, given the limited time available and patient limitations. Current tests, however, commonly use stimuli that either display prototypical emotions, bearing the risk of ceiling effects and unequal task difficulty, or are cognitively too demanding and time-consuming. To overcome these limitations in FER testing in patient populations, we aimed to define FER threshold levels for the six basic emotions in healthy individuals. Forty-nine healthy individuals between 52 and 79 years of age were asked to identify the six basic emotions at different intensity levels (25%, 50%, 75%, 100%, and 125% of the prototypical emotion). Analyses uncovered differing threshold levels across emotions and sex of facial stimuli, ranging from 50% up to 100% intensities. Using these findings as “healthy population benchmarks”, we propose to apply these threshold levels to clinical populations either as facial emotion recognition or intensity rating tasks. As part of any comprehensive social cognition test battery, this approach should allow for a rapid and sensitive assessment of potential FER deficits. (JINS, 2015, 21, 568–572)
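
A minimal sketch of how a per-emotion recognition threshold could be derived from accuracy at each presented intensity level. The abstract does not specify the criterion used, so the 70% accuracy cut-off below is purely an assumption for illustration.

```python
# Hypothetical sketch: derive a recognition threshold per emotion as the
# lowest presented intensity at which group accuracy reaches a criterion.
# The 70% criterion is an assumption, not the study's actual procedure.

INTENSITIES = (25, 50, 75, 100, 125)  # percent of the prototypical emotion

def recognition_threshold(accuracy_by_intensity, criterion=0.70):
    """accuracy_by_intensity maps intensity (%) -> proportion correct."""
    for intensity in INTENSITIES:
        if accuracy_by_intensity.get(intensity, 0.0) >= criterion:
            return intensity
    return None  # emotion not reliably recognised even at 125%

# Example: an accuracy pattern for which the threshold would be 75%
print(recognition_threshold({25: 0.31, 50: 0.58, 75: 0.82, 100: 0.95, 125: 0.97}))
```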


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262344
Author(s):  
Maria Tsantani ◽  
Vita Podgajecka ◽  
Katie L. H. Gray ◽  
Richard Cook

The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants’ ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant’s future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers’ interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except for anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs. 3000 ms) or attitudes towards mask-wearing. These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Xueping Su ◽  
Meng Gao ◽  
Jie Ren ◽  
Yunhong Li ◽  
Matthias Rätsch

With the continuing development of the economy, consumers pay increasing attention to personalized clothing. However, the recommendation quality of existing clothing recommendation systems is not sufficient to meet users’ needs. When users browse clothing online, their facial expressions provide salient information about their preferences. In this paper, we propose a novel method for automatically personalizing clothing recommendations based on analysis of the user’s emotions. First, the facial expression is classified by a multiclass SVM. Next, the user’s multi-interest value is calculated using the expression intensity, which is obtained by a hybrid RCNN. Finally, the multi-interest values are fused to produce a personalized recommendation. The experimental results show that the proposed method achieves a significant improvement over the compared algorithms.
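
The abstract outlines a three-stage pipeline: expression classification, intensity-weighted interest estimation, and fusion. The sketch below illustrates that flow in Python, with scikit-learn's SVC standing in for the multiclass SVM; the feature representation, the placeholder intensity estimator (in place of the paper's hybrid RCNN), and the fusion weights are all assumptions, since those details are not given here.

```python
# Illustrative sketch of the three-stage pipeline summarised above.
# The face features, the intensity estimator (standing in for the paper's
# hybrid RCNN), and the linear fusion weights are all assumptions.

import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

# Stage 1: multiclass SVM over precomputed face features (toy random data).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(70, 128))           # 70 faces, 128-d features
y_train = rng.integers(0, len(EMOTIONS), 70)   # emotion labels
expression_clf = SVC(kernel="rbf").fit(X_train, y_train)

def expression_intensity(features):
    """Placeholder for the hybrid-RCNN intensity estimate in [0, 1]."""
    return float(np.clip(np.abs(features).mean(), 0.0, 1.0))

def interest_score(features, item_id, history_score=0.5, alpha=0.6):
    """Fuse expression-based interest with a browsing-history score.

    The linear fusion and the weight alpha are illustrative assumptions.
    """
    emotion = EMOTIONS[int(expression_clf.predict(features[None, :])[0])]
    intensity = expression_intensity(features)
    # A positive expression while viewing an item raises the interest value.
    expression_interest = intensity if emotion == "happiness" else (1.0 - intensity) * 0.3
    return alpha * expression_interest + (1 - alpha) * history_score

# Example: rank three candidate items by fused interest for one face frame.
face = rng.normal(size=128)
ranking = sorted(["dress_01", "coat_07", "shirt_12"],
                 key=lambda item: interest_score(face, item), reverse=True)
print(ranking)
```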


2018 ◽  
Vol 17 (4) ◽  
pp. 407-432 ◽  
Author(s):  
Dušan Stamenković ◽  
Miloš Tasić ◽  
Charles Forceville

In Making Comics: Storytelling Secrets of Comics, Manga and Graphic Novels (2006), Scott McCloud proposes that the use of specific drawing techniques will enable viewers to reliably deduce different degrees of intensity of the six basic emotions from facial expressions in comics. Furthermore, he suggests that an accomplished comics artist can combine the components of facial expressions conveying the basic emotions to produce complex expressions, many of which are supposedly distinct and recognizable enough to be named. This article presents an empirical investigation and assessment of the validity of these claims, based on the results obtained from three questionnaires. Each of the questionnaires deals with one of the aspects of McCloud’s proposal: facial expression intensity, labelling and compositionality. The data show that the tasks at hand were much more difficult than would have been expected on the basis of McCloud’s proposal, with the intensity matching task being the most successful of the three.


2020 ◽  
Vol 528 ◽  
pp. 113-132
Author(s):  
Mingliang Xue ◽  
Xiaodong Duan ◽  
Wanquan Liu ◽  
Yan Ren

2019 ◽  
Vol 9 (16) ◽  
pp. 3379
Author(s):  
Hyun-Jun Hyung ◽  
Han Ul Yoon ◽  
Dongwoon Choi ◽  
Duk-Yeon Lee ◽  
Dong-Wook Lee

Because the internal structure, degrees of freedom, skin control positions, and motion ranges differ from one android face to another, it is very difficult to generate facial expressions by applying existing facial expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that can automatically generate robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robot (an older male and a young female) that can simulate human skin movements. We selected 16 control positions to generate the facial expressions of these robots. The expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying a real-coded genetic algorithm; subsequently, it was used to generate robot facial expressions. To determine the fitness of the generated facial expressions, expression intensity was evaluated by a facial expression recognizer. The proposed system was used to generate six facial expressions (angry, disgust, fear, happy, sad, surprised); the results confirmed that they were more appropriate than manually generated facial expressions.
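
As a rough illustration of the approach described, the sketch below runs a real-coded genetic algorithm over a 16-gene chromosome of motor displacements, with a toy fitness function standing in for the facial expression recognizer. The population size, selection, crossover, mutation scheme, and fitness definition are all assumptions, not the paper's actual settings.

```python
# Hedged sketch of a real-coded GA over 16 motor displacements.
# The fitness function below is a stand-in for the facial expression
# recognizer used in the paper; all GA hyperparameters are assumptions.

import numpy as np

N_MOTORS = 16
rng = np.random.default_rng(42)

def recognizer_intensity(chromosome):
    """Placeholder fitness: how strongly a recognizer would rate the target
    expression for this motor configuration (here, a toy score that peaks
    at an arbitrary reference pose)."""
    reference = np.linspace(0.2, 0.9, N_MOTORS)  # arbitrary "ideal" pose
    return float(np.exp(-np.sum((chromosome - reference) ** 2)))

def evolve(pop_size=40, generations=100, mutation_sigma=0.05):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, N_MOTORS))  # normalized displacements
    for _ in range(generations):
        fitness = np.array([recognizer_intensity(c) for c in pop])
        # Truncation selection: keep the fitter half of the population as parents.
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        # Arithmetic (blend) crossover between randomly paired parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        w = rng.uniform(size=(pop_size, 1))
        children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]
        # Gaussian mutation, clipped to the valid motor range.
        children += rng.normal(0.0, mutation_sigma, children.shape)
        pop = np.clip(children, 0.0, 1.0)
    return max(pop, key=recognizer_intensity)

best_pose = evolve()
print(np.round(best_pose, 2))  # 16 motor displacements for the expression
```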

