Modulation of early level EEG signatures by distributed facial emotion cues

2021 ◽  
Author(s):  
Diana Costa ◽  
Camila Dias ◽  
Teresa Sousa ◽  
João Estiveira ◽  
João Castelhano ◽  
...  

Abstract Face perception plays an important role in our daily social interactions, as it is essential to recognize emotions. The N170 Event Related Potential (ERP) component has been widely identified as a major face-sensitive neuronal marker. However, despite extensive investigation of this electroencephalographic pattern, there is still no agreement regarding its sensitivity to the content of facial expressions. Here, we aim to clarify the EEG signatures of the recognition of facial expressions by investigating ERP components that we hypothesize to be associated with this cognitive process. We asked whether the recognition of facial expressions is encoded by the N170 as well as at the level of the P100 and P250. To test this hypothesis, we analysed differences in amplitudes and latencies for the three ERPs in a sample of 20 participants. A visual paradigm requiring explicit recognition of happy, sad and neutral faces was used. The facial cues were explicitly controlled to vary only in the mouth and eye components. We found that non-neutral emotion expressions elicit a response difference in the amplitude of the N170 and P250, in contrast with the P100, thereby excluding a role for low-level factors. Our study brings new light to the controversy over whether emotional face expressions modulate early visual response components, which have often been analysed separately. The results support the tenet that neutral and emotional faces evoke distinct N170 patterns, but go further by revealing that this is also true for the P250, unlike the P100.
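
As an illustration of the kind of analysis described above, the sketch below extracts peak amplitude and latency for P100, N170, and P250 windows from trial-averaged epoched EEG data using plain NumPy. The array layout, sampling rate, channel selection, and component windows are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

# Assumed layout: epochs is (n_trials, n_samples) for one occipito-temporal
# channel, baseline-corrected, sampled at 1000 Hz, epoch starting at -100 ms.
SFREQ = 1000.0
EPOCH_START_S = -0.100

# Hypothetical component search windows (seconds) and expected polarity.
COMPONENTS = {
    "P100": (0.080, 0.130, +1),   # positive deflection
    "N170": (0.130, 0.200, -1),   # negative deflection
    "P250": (0.200, 0.300, +1),   # positive deflection
}

def peak_measures(epochs: np.ndarray) -> dict:
    """Return peak amplitude (µV) and latency (ms) per component
    from the trial-averaged waveform."""
    evoked = epochs.mean(axis=0)                     # average over trials
    times = EPOCH_START_S + np.arange(evoked.size) / SFREQ
    results = {}
    for name, (tmin, tmax, sign) in COMPONENTS.items():
        mask = (times >= tmin) & (times <= tmax)
        window = evoked[mask]
        idx = np.argmax(sign * window)               # most positive/negative point
        results[name] = {
            "amplitude_uV": float(window[idx]),
            "latency_ms": float(times[mask][idx] * 1000),
        }
    return results

# Example with synthetic data: 20 trials, 600-sample (600-ms) epochs.
rng = np.random.default_rng(0)
fake_epochs = rng.normal(0, 1, size=(20, 600))
print(peak_measures(fake_epochs))
```

Per-condition amplitude and latency values obtained this way can then be compared across happy, sad, and neutral trials, which is the contrast the abstract reports for the N170 and P250.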

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

AbstractSocial interactions are important for us, humans, as social creatures. Emotions play an important part in social interactions. They usually express meanings along with the spoken utterances to the interlocutors. Automatic facial expressions recognition is one technique to automatically capture, recognise, and understand emotions from the interlocutor. Many techniques proposed to increase the accuracy of emotions recognition from facial cues. Architecture such as convolutional neural networks demonstrates promising results for emotions recognition. However, most of the current models of convolutional neural networks require an enormous computational power to train and process emotional recognition. This research aims to build compact networks with depthwise separable layers while also maintaining performance. Three datasets and three other similar architectures were used to be compared with the proposed architecture. The results show that the proposed architecture performed the best among the other architectures. It achieved up to 13% better accuracy and 6–71% smaller and more compact than the other architectures. The best testing accuracy achieved by the architecture was 99.4%.
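
For readers unfamiliar with depthwise separable layers, the sketch below shows a minimal compact facial-expression classifier built with Keras `SeparableConv2D` blocks. The input size, filter counts, and seven-class output are illustrative assumptions, not the architecture proposed in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_compact_emotion_cnn(input_shape=(48, 48, 1), num_classes=7):
    """Small facial-expression classifier using depthwise separable
    convolutions to cut parameter count versus standard Conv2D blocks."""
    inputs = layers.Input(shape=input_shape)
    # A regular convolution first, since the grayscale input has one channel.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    for filters in (64, 128, 256):
        # Depthwise separable block: per-channel spatial filtering,
        # followed by a cheap 1x1 pointwise mixing step.
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_compact_emotion_cnn()
model.summary()  # compare parameter count with an equivalent Conv2D network
```

The design choice is that a separable block replaces one dense spatial-and-channel convolution with a depthwise pass plus a pointwise pass, which is where the parameter and compute savings the abstract refers to come from.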


2018 ◽  
Vol 15 (4) ◽  
pp. 172988141878315 ◽  
Author(s):  
Nicole Lazzeri ◽  
Daniele Mazzei ◽  
Maher Ben Moussa ◽  
Nadia Magnenat-Thalmann ◽  
Danilo De Rossi

Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge each other's emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental in understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few studies have been performed on realistic humanoid robots. This experimental work aimed at demonstrating the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli with which motion and vocalization were associated. This was also investigated with a three-dimensional replica of the physical robot, demonstrating that even in the case of a virtual avatar, motion and vocalization improve the capability of conveying emotion.


2019 ◽  
Author(s):  
Sarah Aied Alharbi ◽  
Benedict C Jones

Facial expressions of emotion play an important role in social interactions. Recent work has suggested that experimentally increasing body-weight cues makes faces displaying happy expressions look happier and makes faces displaying sad expressions look sadder. These results were interpreted as evidence that a ‘heavy people are happier’ stereotype influences emotion perception. Because this original study was carried out at a university in the USA, and emotion perceptions can differ across cultures, we undertook a conceptual replication of this study in an Arab sample. We found that experimentally increasing body-weight cues made faces displaying happy expressions look significantly happier, but did not make faces displaying sad expressions look significantly sadder. These results present partial support for the proposal that a ‘heavy people are happier’ stereotype influences emotion perception and that people integrate information from face shape and facial expressions in person perception.


2020 ◽  
Author(s):  
Fernando Ferreira-Santos ◽  
Mariana R. Pereira ◽  
Tiago O. Paiva ◽  
Pedro R. Almeida ◽  
Eva C. Martins ◽  
...  

The behavioral and electrophysiological study of the emotional intensity of facial expressions of emotions has relied on image processing techniques termed ‘morphing’ to generate realistic facial stimuli in which emotional intensity can be manipulated. This is achieved by blending neutral and emotional facial displays and treating the percent of morphing between the two stimuli as an objective measure of emotional intensity. Here we argue that the percentage of morphing between stimuli does not provide an objective measure of emotional intensity and present supporting evidence from affective ratings and neural (event-related potential) responses. We show that 50% morphs created from high or moderate arousal stimuli differ in subjective and neural responses in a sensible way: 50% morphs are perceived as having approximately half of the emotional intensity of the original stimuli, but if the original stimuli differed in emotional intensity to begin with, then so will the morphs. We suggest a re-examination of previous studies that used percentage of morphing as a measure of emotional intensity and highlight the value of more careful experimental control of emotional stimuli and inclusion of proper manipulation checks.
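
To make the morphing procedure concrete, the sketch below produces intermediate stimuli by linear pixel blending between a neutral and an emotional face image. It is a simplified cross-dissolve under the assumption that the two images are already geometrically aligned; published morphing pipelines additionally warp facial landmarks, which this example omits. File names and the grayscale conversion are illustrative assumptions.

```python
import numpy as np
from PIL import Image  # pip install pillow

def morph(neutral_path: str, emotional_path: str, percent: float) -> Image.Image:
    """Blend two aligned face images; percent=0 returns the neutral face,
    percent=100 the emotional face, and 50 a half-way 'morph'."""
    neutral = np.asarray(Image.open(neutral_path).convert("L"), dtype=np.float32)
    emotional = np.asarray(Image.open(emotional_path).convert("L"), dtype=np.float32)
    if neutral.shape != emotional.shape:
        raise ValueError("Images must be the same size and pre-aligned.")
    alpha = percent / 100.0
    blended = (1.0 - alpha) * neutral + alpha * emotional
    return Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8))

# Hypothetical usage: a 50% morph, as discussed in the abstract.
# morph("neutral.png", "happy.png", 50).save("happy_50.png")
```

As the abstract argues, a 50% blend of a high-arousal target and a 50% blend of a moderate-arousal target inherit that intensity difference, so the morph percentage alone does not equate the emotional intensity of the resulting stimuli.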


2015 ◽  
Vol 47 (8) ◽  
pp. 963 ◽  
Author(s):  
Dandan ZHANG ◽  
Ting ZHAO ◽  
Yunzhe LIU ◽  
Yuming CHEN

2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly in the face, and recent studies showed the importance of the mouth and the eye regions for correct recognition. However, previous studies used mainly facial expressions presented frontally, and studies which used facial expressions in profile view used a between-subjects design or children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile views by using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female, mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, viz., frontal and in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were registered.
Results: Frontally presented facial expressions of fear, anger, and sadness were significantly better recognized than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions which rely mostly on the eye regions.
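
As a sketch of how such emotion-specific accuracy scores and a within-subjects comparison might be computed, the snippet below averages correct responses per participant, emotion, and view, then runs a paired t-test per emotion with SciPy. The file name, column names, and data-frame layout are hypothetical, not the format used by the authors.

```python
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical trial-level data: one row per response, with columns
# participant, emotion (e.g. 'fear'), view ('frontal'/'profile'), correct (0/1).
trials = pd.read_csv("recognition_trials.csv")

# Emotion-specific accuracy per participant and view (mean of correct responses).
accuracy = (trials
            .groupby(["participant", "emotion", "view"])["correct"]
            .mean()
            .unstack("view"))          # columns: 'frontal', 'profile'

# Within-subjects comparison: paired t-test, frontal vs. profile, per emotion.
for emotion, scores in accuracy.groupby(level="emotion"):
    t, p = ttest_rel(scores["frontal"], scores["profile"])
    print(f"{emotion}: t = {t:.2f}, p = {p:.4f}")
```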

