Video game exposure and recognition of facial expressions: Event-related potential data

2014 ◽  
Vol 94 (2) ◽  
pp. 167
Author(s):  
Yoshiyuki Tamamiya ◽  
Kazuo Hiraki

2021 ◽  
Author(s):  
Diana Costa ◽  
Camila Dias ◽  
Teresa Sousa ◽  
João Estiveira ◽  
João Castelhano ◽  
...  

Face perception plays an important role in our daily social interactions, as it is essential for recognizing emotions. The N170 Event-Related Potential (ERP) component has been widely identified as a major face-sensitive neuronal marker. However, despite extensive investigation of this electroencephalographic pattern, there is still no agreement regarding its sensitivity to the content of facial expressions. Here, we aim to clarify the EEG signatures of the recognition of facial expressions by investigating ERP components that we hypothesize to be associated with this cognitive process. We asked whether the recognition of facial expressions is encoded by the N170 as well as at the level of the P100 and P250. To test this hypothesis, we analysed differences in amplitude and latency for the three ERP components in a sample of 20 participants. A visual paradigm requiring explicit recognition of happy, sad and neutral faces was used, with facial cues explicitly controlled to vary only in the mouth and eye regions. We found that non-neutral emotional expressions elicit an amplitude difference in the N170 and P250 but not in the P100, thereby excluding a role for low-level factors. Our study sheds new light on the controversy over whether emotional facial expressions modulate early visual response components, which have often been analysed separately. The results support the tenet that neutral and emotional faces evoke distinct N170 patterns, but go further by revealing that this is also true for the P250, unlike the P100.
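The core analysis described above, comparing component amplitudes across emotion conditions within subjects, can be illustrated with a minimal sketch. The time windows, channel counts, and synthetic epoch data below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a mean-amplitude comparison for P100 / N170 / P250.
# Assumptions (not from the paper): component windows, posterior-channel
# selection, and the synthetic arrays used as stand-ins for real EEG epochs.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
fs = 500                                   # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)       # epoch from -100 to 500 ms
n_subj, n_chan = 20, 4                     # 20 participants, 4 posterior channels (assumed)

# Stand-in data: subject-averaged epochs per condition (subjects x channels x times)
epochs = {cond: rng.normal(size=(n_subj, n_chan, times.size))
          for cond in ("neutral", "happy", "sad")}

# Illustrative component windows in seconds (typical values, not the paper's)
windows = {"P100": (0.08, 0.13), "N170": (0.13, 0.20), "P250": (0.20, 0.30)}

def mean_amplitude(data, tmin, tmax):
    """Mean amplitude within a time window, averaged across channels (one value per subject)."""
    mask = (times >= tmin) & (times <= tmax)
    return data[:, :, mask].mean(axis=(1, 2))

for comp, (tmin, tmax) in windows.items():
    neutral = mean_amplitude(epochs["neutral"], tmin, tmax)
    for emo in ("happy", "sad"):
        t, p = ttest_rel(mean_amplitude(epochs[emo], tmin, tmax), neutral)
        print(f"{comp}: {emo} vs neutral  t={t:.2f}  p={p:.3f}")
```

Latencies could be compared analogously by taking the peak time within each window instead of the mean amplitude.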


2020 ◽  
Author(s):  
Fernando Ferreira-Santos ◽  
Mariana R. Pereira ◽  
Tiago O. Paiva ◽  
Pedro R. Almeida ◽  
Eva C. Martins ◽  
...  

The behavioral and electrophysiological study of the emotional intensity of facial expressions of emotions has relied on image processing techniques termed ‘morphing’ to generate realistic facial stimuli in which emotional intensity can be manipulated. This is achieved by blending neutral and emotional facial displays and treating the percent of morphing between the two stimuli as an objective measure of emotional intensity. Here we argue that the percentage of morphing between stimuli does not provide an objective measure of emotional intensity and present supporting evidence from affective ratings and neural (event-related potential) responses. We show that 50% morphs created from high or moderate arousal stimuli differ in subjective and neural responses in a sensible way: 50% morphs are perceived as having approximately half of the emotional intensity of the original stimuli, but if the original stimuli differed in emotional intensity to begin with, then so will the morphs. We suggest a re-examination of previous studies that used percentage of morphing as a measure of emotional intensity and highlight the value of more careful experimental control of emotional stimuli and inclusion of proper manipulation checks.
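The morphing procedure the authors critique is, at its core, a weighted blend between a neutral and an emotional image, with the blend weight treated as "percent emotional intensity". The sketch below shows only the pixel cross-dissolve step (real morphing software also warps facial landmarks); the arrays and the 50% level are illustrative.

```python
# Minimal sketch of percentage-based morphing as a pixel cross-dissolve.
# Assumption: only the intensity-blending step is shown; real morphing tools
# also geometrically warp facial landmarks before blending.
import numpy as np

def morph(neutral: np.ndarray, emotional: np.ndarray, percent: float) -> np.ndarray:
    """Blend two same-sized grayscale images; percent=0 -> neutral, percent=100 -> emotional."""
    p = percent / 100.0
    return (1.0 - p) * neutral + p * emotional

# Stand-in 256x256 grayscale "faces"
rng = np.random.default_rng(1)
neutral_face = rng.uniform(0, 255, size=(256, 256))
high_arousal_face = rng.uniform(0, 255, size=(256, 256))
moderate_arousal_face = rng.uniform(0, 255, size=(256, 256))

# Two "50% morphs": identical morphing percentage, but their emotional content
# still depends on how intense the original emotional endpoints were.
morph_high = morph(neutral_face, high_arousal_face, 50)
morph_moderate = morph(neutral_face, moderate_arousal_face, 50)
```

This is the crux of the argument: equal morph percentages do not imply equal emotional intensity unless the endpoint stimuli are themselves matched in intensity.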


2015 ◽  
Vol 47 (8) ◽  
pp. 963 ◽  
Author(s):  
Dandan ZHANG ◽  
Ting ZHAO ◽  
Yunzhe LIU ◽  
Yuming CHEN

2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research
The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies that used facial expressions in profile view relied on between-subjects designs or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile view using a within-subjects experimental design.
Method
The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed as the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded.
Results
Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found for the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
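The scoring described in the Method section, averaging correct responses per emotion separately for frontal and profile presentations, can be sketched as follows. Column names and the toy trial data are assumptions for illustration, not the study's materials.

```python
# Minimal sketch of emotion-specific accuracy and response-time scoring for a
# within-subjects frontal vs. profile design. Data and column names are
# illustrative assumptions.
import pandas as pd

trials = pd.DataFrame({
    "subject": [1, 1, 1, 1, 1, 1, 1, 1],
    "emotion": ["fear", "fear", "anger", "anger", "sadness", "sadness", "happiness", "happiness"],
    "view":    ["frontal", "profile"] * 4,
    "correct": [1, 0, 1, 1, 1, 0, 1, 1],
    "rt_ms":   [812, 1034, 765, 901, 840, 1102, 640, 655],
})

# Emotion-specific recognition accuracy (mean of correct responses) and mean RT,
# computed separately for frontal and profile presentations.
scores = (trials
          .groupby(["subject", "emotion", "view"])
          .agg(accuracy=("correct", "mean"), mean_rt=("rt_ms", "mean"))
          .reset_index())

# Side-by-side frontal vs. profile accuracy per emotion: the within-subjects contrast
print(scores.pivot_table(index=["subject", "emotion"], columns="view", values="accuracy"))
```

With the full sample, the frontal vs. profile contrast per emotion would then be tested with a paired (within-subjects) comparison across participants.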


Author(s):  
Yang Gao ◽  
Yincheng Jin ◽  
Seokmin Choi ◽  
Jiyang Li ◽  
Junjie Pan ◽  
...  

Accurate recognition of facial expressions and emotional gestures is promising for understanding audience feedback and engagement with entertainment content. Existing methods are primarily based on cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose SonicFace, a novel ubiquitous sensing system based on a commodity microphone array that provides an accessible, unobtrusive, contact-free, and privacy-preserving solution for continuously monitoring a user's emotional expressions without playing any audible sound. SonicFace uses a speaker together with a microphone array to recognize fine-grained facial expressions and emotional hand gestures from emitted ultrasound and the received echoes. In experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reaches around 80%. In addition, extensive system evaluations with distinct configurations and an extended real-life case study demonstrate the robustness and generalizability of the proposed SonicFace system.
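The underlying sensing principle (emit a near-ultrasonic signal, record the echoes, and correlate them against the transmitted waveform to obtain an echo profile that changes as the face or hands move) can be sketched generically. The chirp band, frame length, and matched-filter approach below are generic acoustic-sensing assumptions, not SonicFace's actual implementation; the abstract does not specify its feature extraction or classification model, so that stage is omitted.

```python
# Minimal sketch of ultrasonic echo profiling: transmit a near-ultrasonic chirp,
# record a microphone frame, and matched-filter it against the transmitted
# signal. Peaks in the echo profile shift and reshape with facial/hand motion.
# All parameters are illustrative assumptions, not SonicFace's.
import numpy as np
from scipy.signal import chirp, correlate

fs = 48_000                          # common audio sampling rate
frame_dur = 0.025                    # 25 ms sensing frame (assumed)
t = np.arange(0, frame_dur, 1 / fs)

# Near-ultrasonic linear chirp, 18-22 kHz (largely inaudible)
tx = chirp(t, f0=18_000, f1=22_000, t1=frame_dur, method="linear")

# Stand-in for the recorded frame: the chirp delayed by a simulated echo path
# plus noise (in a real system this would come from the microphone array).
delay_samples = 60                   # ~1.25 ms round trip (assumed)
rx = np.concatenate([np.zeros(delay_samples), tx[:-delay_samples]]) * 0.3
rx += np.random.default_rng(2).normal(0, 0.05, tx.size)

# Matched filter: cross-correlate the received and transmitted signals.
echo_profile = correlate(rx, tx, mode="full")
lag = np.argmax(np.abs(echo_profile)) - (tx.size - 1)
print(f"Estimated echo delay: {lag} samples ({lag / fs * 1e3:.2f} ms)")
```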

