Posterior Fusiform and Midfusiform Contribute to Distinct Stages of Facial Expression Processing

2018 ◽  
Vol 29 (7) ◽  
pp. 3209-3219 ◽  
Author(s):  
Yuanning Li ◽  
R Mark Richardson ◽  
Avniel Singh Ghuman

Abstract: Though the fusiform is well-established as a key node in the face perception network, its role in facial expression processing remains unclear, due to competing models and discrepant findings. To help resolve this debate, we recorded from 17 subjects with intracranial electrodes implanted in face-sensitive patches of the fusiform. Multivariate classification analysis showed that facial expression information is represented in fusiform activity and in the same regions that represent identity, though with a smaller effect size. Examination of the spatiotemporal dynamics revealed a functional distinction between posterior fusiform and midfusiform expression coding, with posterior fusiform showing an early peak of facial expression sensitivity at around 180 ms after subjects viewed a face and midfusiform showing a later and extended peak between 230 and 460 ms. These results support the hypothesis that the fusiform plays a role in facial expression perception and highlight a qualitative functional distinction between processing in posterior fusiform and midfusiform, with each contributing to temporally segregated stages of expression perception.
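The time-resolved decoding result described here (expression sensitivity peaking around 180 ms in posterior fusiform and 230-460 ms in midfusiform) implies a sliding-window multivariate classification analysis. The sketch below illustrates that general approach, not the authors' pipeline; the sampling rate, window length, trial counts, and classifier are assumptions.

```python
# Minimal sketch of time-resolved multivariate classification of facial
# expression from intracranial activity (illustrative, not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 16, 500        # assumed: 1 s epochs at 500 Hz
X = rng.standard_normal((n_trials, n_channels, n_times))   # electrode activity
y = rng.integers(0, 3, n_trials)                            # expression label per trial

win = 25                                             # 50 ms windows (assumption)
accuracy = []
for start in range(0, n_times - win + 1, win):
    X_win = X[:, :, start:start + win].mean(axis=2)         # average within window
    scores = cross_val_score(LogisticRegression(max_iter=200), X_win, y, cv=5)
    accuracy.append(scores.mean())
# Peaks in `accuracy` over time would correspond to windows such as ~180 ms
# (posterior fusiform) or 230-460 ms (midfusiform) reported in the abstract.
```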


2018 ◽  
Author(s):  
Adrienne Wood ◽  
Jared Martin ◽  
Martha W. Alibali ◽  
Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent than when it was congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, a manipulation that has disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.
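The congruency effect reported here is a within-subject slowdown on incongruent relative to congruent face-hand pairings. A minimal sketch of how such an effect could be quantified is shown below (not the authors' analysis); the reaction-time values and effect size are simulated assumptions.

```python
# Paired comparison of mean reaction times: incongruent vs. congruent trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants = 62
rt_congruent = rng.normal(650, 60, n_participants)                   # mean RT (ms) per participant
rt_incongruent = rt_congruent + rng.normal(40, 25, n_participants)   # slower when incongruent

t, p = stats.ttest_rel(rt_incongruent, rt_congruent)
effect = (rt_incongruent - rt_congruent).mean()
print(f"congruency effect = {effect:.1f} ms, t = {t:.2f}, p = {p:.3g}")
```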


2019 ◽  
Vol 9 (5) ◽  
pp. 116 ◽  
Author(s):  
Luis Aguado ◽  
Karisa Parkington ◽  
Teresa Dieguez-Risco ◽  
José Hinojosa ◽  
Roxane Itier

Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant: discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent with the context (congruency task). Behavioral and electrophysiological results (event-related potentials, ERPs) showed that processing of facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250 and 450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions.
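The frontal task effect reported here rests on a standard ERP measure: the mean amplitude within a time window at selected electrodes. The sketch below illustrates that computation only (not the authors' pipeline); the sampling rate, epoch layout, and channel indices are assumptions.

```python
# Extract the mean ERP amplitude in the 250-450 ms window at frontal electrodes.
import numpy as np

sfreq = 500                                         # Hz (assumption)
times = np.arange(-0.2, 0.8, 1 / sfreq)             # epoch from -200 to 800 ms
n_trials, n_channels = 120, 64
rng = np.random.default_rng(2)
epochs = rng.standard_normal((n_trials, n_channels, times.size))   # microvolts
frontal_idx = [0, 1, 2]                             # assumed indices of frontal sites (e.g., Fz, F3, F4)

mask = (times >= 0.25) & (times <= 0.45)            # 250-450 ms window
mean_amp = epochs[:, frontal_idx][:, :, mask].mean(axis=(1, 2))     # one value per trial
# Comparing `mean_amp` between congruency-task and emotion-task trials would
# test the frontal positivity difference described in the abstract.
```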


2014 ◽  
Vol 27 (2) ◽  
pp. 127-137 ◽  
Author(s):  
Kazumichi Matsumiya

The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies revealed that the FAE is not constant across all test locations, which suggests that the FAE is also retinotopic. However, it remains unclear whether the characteristic of the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although dependence of the FAE on adaptation-test distance is similar between facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when a test face is presented in the opposite hemifield. On the basis of these results, I discuss adaptation mechanisms underlying facial expression processing and facial identity processing for the retinotopy of the FAE.
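The central measurement here is how the aftereffect magnitude falls off with the retinal distance between adaptor and test. A hedged sketch of summarizing such data is given below (not the paper's method); the distance conditions, participant count, and values are simulated assumptions.

```python
# Average facial-expression aftereffect magnitude per adaptor-test distance.
import numpy as np

rng = np.random.default_rng(3)
distances = np.array([0, 3, 6, 12])                  # degrees of visual angle (assumed)
# shift[i, j]: aftereffect magnitude for participant i at distance j (simulated)
shift = rng.normal(1.0, 0.3, (20, distances.size)) / (1 + 0.2 * distances)

mean_fae = shift.mean(axis=0)                        # aftereffect size per distance
for d, m in zip(distances, mean_fae):
    print(f"distance {d:>2} deg: mean aftereffect = {m:.2f}")
# A decline of `mean_fae` with distance would reflect the partly retinotopic
# FAE described in the abstract.
```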


2018 ◽  
Vol 9 (2) ◽  
pp. 31-38
Author(s):  
Fransisca Adis ◽  
Yohanes Merci Widiastomo

Facial expression is one of the aspects that can convey story and a character's emotion in 3D animation. To achieve that, the character's facial design must be planned from the very beginning of production. At an early stage, the character designer needs to think about expressions once the character design is done. The rigger needs to create a flexible rig to realize that design, and the animator can then get a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify emotion in a person generally. This paper explains how the writer uses FACS to help design facial expressions for 3D characters. FACS is used to determine the basic shapes of the face when showing emotions, compared against actual facial references.
Keywords: animation, facial expression, non-dialog
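One way to picture the workflow described above is to map the prototypical action unit (AU) combinations of basic emotions onto a character rig's blendshape targets. The sketch below is illustrative only: the AU lists follow common FACS-based descriptions, while the blendshape names and weights are hypothetical and would depend on the actual rig.

```python
# Map FACS action units for basic emotions to hypothetical blendshape weights.
EMOTION_TO_AUS = {
    "happiness": ["AU6", "AU12"],                    # cheek raiser + lip corner puller
    "sadness":   ["AU1", "AU4", "AU15"],             # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  ["AU1", "AU2", "AU5", "AU26"],
    "anger":     ["AU4", "AU5", "AU7", "AU23"],
}

AU_TO_BLENDSHAPE = {                                 # hypothetical rig targets
    "AU6": ("cheekRaise", 0.8), "AU12": ("smile", 1.0),
    "AU1": ("browInnerUp", 0.7), "AU4": ("browDown", 0.9),
    "AU15": ("mouthFrown", 0.8), "AU2": ("browOuterUp", 0.7),
    "AU5": ("eyeWide", 0.6), "AU26": ("jawOpen", 0.5),
    "AU7": ("lidTight", 0.6), "AU23": ("lipTight", 0.7),
}

def blendshape_pose(emotion: str) -> dict:
    """Return blendshape weights for a basic emotion via its FACS AUs."""
    return dict(AU_TO_BLENDSHAPE[au] for au in EMOTION_TO_AUS[emotion])

print(blendshape_pose("happiness"))   # {'cheekRaise': 0.8, 'smile': 1.0}
```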


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
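The core idea, a convolutional network whose features are reweighted by a learned spatial attention map before classification, can be sketched compactly. The PyTorch example below is in the spirit of the abstract but is not the authors' exact architecture; the layer sizes are assumptions, with a 48x48 grayscale input as in FER-2013.

```python
# Minimal attentional convolutional network for facial expression recognition.
import torch
import torch.nn as nn

class AttentionalCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 1x1 convolution producing a spatial attention map over feature locations
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        f = self.features(x)                  # (B, 64, H', W')
        a = self.attention(f)                 # (B, 1, H', W'), values in (0, 1)
        weighted = f * a                      # emphasize informative face regions
        pooled = weighted.mean(dim=(2, 3))    # global average pooling -> (B, 64)
        return self.classifier(pooled)

# Example forward pass with a batch of 48x48 grayscale faces.
logits = AttentionalCNN()(torch.randn(8, 1, 48, 48))
print(logits.shape)  # torch.Size([8, 7])
```

The attention map also supports the visualization described in the abstract: inspecting `a` per image shows which facial regions the classifier weighted most for a given emotion.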


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify whether participants' facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Taken together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
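The scoring logic described above amounts to classifying each video frame's expression and comparing the proportion of happy, sad, and neutral frames across cue conditions. The sketch below illustrates that logic only; `classify_frame` is a hypothetical stand-in for the facial analysis software, and the frame data are placeholders.

```python
# Per-frame expression coding and proportions per cue condition (illustrative).
from collections import Counter

LABELS = ["happy", "sad", "angry", "surprised", "scared", "disgusted", "neutral"]

def classify_frame(frame) -> str:
    """Hypothetical per-frame expression classifier (placeholder)."""
    return "neutral"

def expression_proportions(frames) -> dict:
    counts = Counter(classify_frame(f) for f in frames)
    total = max(sum(counts.values()), 1)
    return {label: counts[label] / total for label in LABELS}

# Usage: compare proportions between cue conditions ("happy", "sad", "city").
video_by_cue = {"happy": [None] * 10, "sad": [None] * 10, "city": [None] * 10}
for cue, frames in video_by_cue.items():
    props = expression_proportions(frames)
    print(cue, {k: round(v, 2) for k, v in props.items() if v > 0})
```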

