Neurophysiological evidence (ERPs) for hemispheric processing of facial expressions of emotions: Evidence from whole face and chimeric face stimuli

2017 ◽  
Vol 23 (3) ◽  
pp. 318-343 ◽  
Author(s):  
Nikoleta Damaskinou ◽  
Dawn Watling


2021 ◽
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and the studies that did use facial expressions in profile view relied on between-subjects designs or on children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile views using a within-subjects experimental design.

Method: The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded.

Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same emotions shown in profile, while no differences were found in the recognition of the other emotions. Viewing times were also longer when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
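
A minimal sketch of how such per-emotion accuracy and RT scores could be computed from trial-level data (pandas assumed; all column names and values below are hypothetical, not from the study):

```python
import pandas as pd

# Hypothetical trial-level data: one row per stimulus presentation.
# Column names are illustrative; the study's actual coding is not given here.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1],
    "emotion":     ["fear", "fear", "anger", "anger"],
    "view":        ["frontal", "profile", "frontal", "profile"],
    "correct":     [1, 0, 1, 1],          # 1 = expression recognized correctly
    "rt_ms":       [812, 1104, 790, 951], # response time in milliseconds
})

# Emotion-specific recognition accuracy, separately for frontal and profile views:
accuracy = (trials
            .groupby(["participant", "emotion", "view"])["correct"]
            .mean()
            .rename("accuracy"))

# Mean response time, computed over correct trials only:
rt = (trials[trials["correct"] == 1]
      .groupby(["participant", "emotion", "view"])["rt_ms"]
      .mean()
      .rename("mean_rt_ms"))

print(pd.concat([accuracy, rt], axis=1))
```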


2013 ◽  
Vol 113 (1) ◽  
pp. 199-216 ◽  
Author(s):  
Marcella L. Woud ◽  
Eni S. Becker ◽  
Wolf-Gero Lange ◽  
Mike Rinck

A growing body of evidence shows that the prolonged execution of approach movements toward stimuli and avoidance movements away from them affects how those stimuli are evaluated. However, there has been no systematic investigation of such training effects. Therefore, the present study compared approach-avoidance training effects on variously valenced representations of neutral (Experiment 1, N = 85), angry (Experiment 2, N = 87), or smiling (Experiment 3, N = 89) facial expressions. The face stimuli were shown on a computer screen, and by means of a joystick, participants pulled half of the faces closer (a positive approach movement) and pushed the other half away (a negative avoidance movement). Only implicit evaluations of neutral expressions were affected by the training procedure. The boundary conditions of such approach-avoidance training effects are discussed.


2020 ◽  
Author(s):  
Sjoerd Stuit ◽  
Timo Kootstra ◽  
David Terburg ◽  
Carlijn van den Boomen ◽  
Maarten van der Smagt ◽  
...  

Abstract. Emotional facial expressions are important visual communication signals that indicate a sender's intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g., angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic labels of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
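
The abstract does not give the authors' exact pipeline; the sketch below only illustrates the general approach under stated assumptions: HOG features from scikit-image, a radially averaged power spectrum as the overall SF descriptor, and a logistic-regression classifier predicting which of two faces is selected first. All image data here are random placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sf_features(img, n_bins=8):
    """Radially averaged spatial-frequency power spectrum (overall SF content)."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = np.array(f.shape) // 2
    y, x = np.indices(f.shape)
    r = np.hypot(y - cy, x - cx)
    bins = np.linspace(0, r.max(), n_bins + 1)
    return np.array([f[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

def face_features(img):
    """Concatenate localized HOG features with overall (log) SF features."""
    h = hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([h, np.log(sf_features(img) + 1e-12)])

# Placeholder data: 40 face pairs; label = 1 if the left face was selected first.
rng = np.random.default_rng(0)
faces_left = rng.random((40, 128, 128))
faces_right = rng.random((40, 128, 128))
labels = rng.integers(0, 2, 40)

# Describe each trial by the feature difference between the two faces,
# then test how well those features predict first selection.
X = np.array([face_features(a) - face_features(b)
              for a, b in zip(faces_left, faces_right)])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())  # chance level ~ .5 here
```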


Author(s):  
Shozo Tobimatsu

There are two major parallel pathways in the human visual system: the parvocellular (P) and magnocellular (M) pathways. The former has excellent spatial resolution with color selectivity, while the latter shows excellent temporal resolution with high contrast sensitivity. Visual stimuli should be tailored to answer specific clinical and/or research questions. This chapter examines the neural mechanisms of face perception using event-related potentials (ERPs). Face stimuli of different spatial frequencies were used to investigate how low-spatial-frequency (LSF) and high-spatial-frequency (HSF) components of the face contribute to the identification and recognition of faces and facial expressions. The P100 component in the occipital area (Oz), the N170 in the posterior temporal region (T5/T6), and late components peaking at 270-390 ms (T5/T6) were analyzed. LSF enhanced the P100, while the N170 was augmented by HSF, irrespective of facial expression. This suggests that LSF is important for global processing of facial expressions, whereas HSF supports featural processing. There were significant amplitude differences between positive and negative LSF facial expressions in the early time window of 270-310 ms. Subsequently, the amplitudes among negative HSF facial expressions differed significantly in the later time window of 330-390 ms. Thus, discrimination between positive and negative facial expressions precedes discrimination among different negative expressions, in a sequential manner based on the parallel visual channels. Interestingly, patients with schizophrenia showed decreased spatial-frequency sensitivities in face processing. Taken together, spatially filtered face images are useful for exploring face perception and recognition.
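
A minimal sketch of the kind of spatial filtering described here, using a Fourier-domain Gaussian envelope to produce LSF and HSF versions of a face image. The cutoff and image-size values are placeholders, not the chapter's actual parameters:

```python
import numpy as np

def sf_filter(img, cutoff_cpd, img_width_deg, lowpass=True):
    """Low-pass (LSF) or high-pass (HSF) filter an image in the Fourier domain.

    cutoff_cpd: cutoff in cycles per degree; img_width_deg: image width in
    degrees of visual angle (both placeholder values below).
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h / img_width_deg  # cycles per degree
    fx = np.fft.fftfreq(w)[None, :] * w / img_width_deg
    radius = np.hypot(fy, fx)
    gauss = np.exp(-(radius / cutoff_cpd) ** 2)          # soft low-pass envelope
    mask = gauss if lowpass else 1.0 - gauss
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

face = np.random.default_rng(1).random((256, 256))       # stand-in for a face image
lsf_face = sf_filter(face, cutoff_cpd=2.0, img_width_deg=8.0, lowpass=True)
hsf_face = sf_filter(face, cutoff_cpd=6.0, img_width_deg=8.0, lowpass=False)
```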


Author(s):  
Fernando Marmolejo-Ramos ◽  
Aiko Murata ◽  
Kyoshiro Sasaki ◽  
Yuki Yamada ◽  
Ayumi Ikeda ◽  
...  

Abstract. In this experiment, we replicated the effect of muscle engagement on perception such that the recognition of another’s facial expressions was biased by the observer’s facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multilab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the “pen-in-the-teeth” condition, participants tended to lower their threshold of perception of happy expressions in facial stimuli compared to the “no-pen” condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive happy walkers in the pen-in-the-teeth condition compared to the no-pen condition. This pattern of results was also found in a second experiment in which the no-pen condition was replaced by a situation in which participants held a pen in their lips (“pen-in-lips” condition). These results suggested that facial muscular activity alters the recognition of not only facial expressions but also bodily expressions.
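
A perception threshold of this kind is typically estimated by fitting a psychometric function to responses along the morph continuum; the morph level at which "happy" responses cross 50% is the threshold. A minimal sketch with made-up response proportions (scipy assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """Psychometric function: P('happy') as a function of morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Morph levels from 0 (clearly not happy) to 1 (clearly happy); proportions
# of 'happy' responses below are invented for illustration only.
morph = np.linspace(0, 1, 11)
p_happy_no_pen = np.array([.02, .05, .08, .15, .30, .50, .72, .85, .93, .97, .99])
p_happy_pen    = np.array([.05, .10, .18, .32, .52, .70, .85, .92, .96, .98, .99])

(th_no_pen, _), _ = curve_fit(logistic, morph, p_happy_no_pen, p0=[.5, 10])
(th_pen, _), _ = curve_fit(logistic, morph, p_happy_pen, p0=[.5, 10])

# A lower threshold in the pen-in-the-teeth condition means less 'happy'
# signal is needed before the face (or walker) is judged happy.
print(f"threshold no-pen: {th_no_pen:.3f}, pen-in-teeth: {th_pen:.3f}")
```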


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Soichiro Matsuda ◽  
Yasuyo Minagawa ◽  
Junichi Yamamoto

Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous work by examining gaze behavior toward pictures of facial expressions. We presented children with ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed no group difference in gaze behavior when looking at pictures of facial expressions. However, the children with ASD who had more severe autistic symptomatology tended to gaze at angry facial expressions for a shorter duration than at other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.
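
One hedged sketch of the severity analysis described here: correlating CARS scores with gaze duration toward angry faces. The data frame, column names, and values are hypothetical, not the study's data:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-child gaze data: total gaze duration (s) per expression,
# plus the child's CARS autism-severity score.
df = pd.DataFrame({
    "child":   [1, 1, 2, 2, 3, 3],
    "emotion": ["angry", "happy", "angry", "happy", "angry", "happy"],
    "gaze_s":  [2.1, 2.4, 1.3, 2.5, 0.9, 2.6],
    "cars":    [32, 32, 38, 38, 45, 45],
})

# Correlate autism severity with gaze duration toward angry faces.
angry = df[df["emotion"] == "angry"]
rho, p = spearmanr(angry["cars"], angry["gaze_s"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```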


2021 ◽  
Vol 15 ◽  
Author(s):  
Teresa Sollfrank ◽  
Oona Kohnen ◽  
Peter Hilfiker ◽  
Lorena C. Kegel ◽  
Hennric Jokeit ◽  
...  

This study aimed to examine whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Participants passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only in the neutral avatar condition, later potentials (N300 and LPP) differed across both emotional conditions (neutral and fearful) and the presented agents (actor and avatar). In addition, avatar faces elicited significantly stronger theta and alpha oscillatory responses than actor faces. Theta frequencies in particular responded specifically to visual emotional stimulation and were sensitive to the emotional content of the face, whereas alpha frequency was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS and evoke neural effects different from those elicited by real faces, even though the avatars were replicas of the human faces and contained similar characteristics in their expression.
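
ERD/ERS is conventionally quantified as the percentage change of band power relative to a pre-stimulus baseline. A minimal sketch for theta and alpha bands with simulated single-channel epochs (scipy assumed; sampling rate, band edges, and baseline window are placeholders):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_ers(epochs, fs, band, baseline):
    """Percent band-power change vs. baseline (negative = ERD, positive = ERS).

    epochs: (n_trials, n_samples) single-channel EEG; fs: sampling rate in Hz;
    band: (low, high) in Hz; baseline: (start, stop) sample indices of the
    pre-stimulus interval.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=1)
    power = np.abs(hilbert(filtered, axis=1)) ** 2   # instantaneous band power
    mean_power = power.mean(axis=0)                  # average across trials
    ref = mean_power[baseline[0]:baseline[1]].mean() # baseline reference power
    return 100.0 * (mean_power - ref) / ref

fs = 250                                             # Hz (placeholder)
epochs = np.random.default_rng(2).standard_normal((64, 2 * fs))  # 64 trials, 2 s
theta = erd_ers(epochs, fs, band=(4, 7), baseline=(0, fs // 2))
alpha = erd_ers(epochs, fs, band=(8, 12), baseline=(0, fs // 2))
```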


2015 ◽  
Vol 22 (12) ◽  
pp. 1123-1130 ◽  
Author(s):  
Orrie Dan ◽  
Sivan Raz

Objective: The present study investigated differences in emotional face processing between adolescents (aged 15-18) with ADHD-combined type (ADHD-CT) and typically developing controls.

Method: Participants completed a visual emotional task in which they rated the degree of negativity/positivity of four facial expressions (taken from the NimStim face stimulus set).

Results: Participants' ratings, rating variability, response times (RTs), and RT variability were analyzed. There was a significant interaction between group and the type of presented stimulus: adolescents with ADHD-CT discriminated less between positive and negative emotional expressions than those without ADHD. In addition, adolescents with ADHD-CT exhibited greater variability in their RTs and in their ratings of facial expressions compared with controls.

Conclusion: The present results lend further support to the existence of a specific deficit or alteration in the processing of emotional face stimuli among adolescents with ADHD-CT.
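
A minimal sketch of how per-participant rating and RT variability of this kind might be computed (pandas assumed; column names and values are hypothetical, not the study's data):

```python
import pandas as pd

# Hypothetical trial-level data: one row per stimulus presentation, with a
# negativity/positivity rating and a response time.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "group":       ["ADHD-CT"] * 3 + ["control"] * 3,
    "valence":     ["positive", "negative", "positive"] * 2,
    "rating":      [6.0, 3.5, 5.0, 6.5, 2.0, 6.8],
    "rt_ms":       [930, 1410, 1050, 820, 870, 840],
})

# Per-participant means and variability (SD) of ratings and RTs:
summary = (trials
           .groupby(["group", "participant"])
           .agg(rating_mean=("rating", "mean"), rating_sd=("rating", "std"),
                rt_mean=("rt_ms", "mean"), rt_sd=("rt_ms", "std")))
print(summary)
```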

