Left Hemisphere Dominance for Negative Facial Expressions: The Influence of Task

2021, Vol 15
Author(s): E. Darcy Burgund

Major theories of hemisphere asymmetries in facial expression processing predict right hemisphere dominance for the negative facial expressions of disgust, fear, and sadness; however, some studies observe left hemisphere dominance for one or more of these expressions. Research suggests that tasks requiring the identification of six basic emotional facial expressions (angry, disgusted, fearful, happy, sad, and surprised) are more likely to produce left hemisphere involvement than tasks that do not require expression identification. The present research investigated this possibility in two experiments that presented the six basic emotional facial expressions to the right or left hemisphere using a divided-visual-field paradigm. In Experiment 1, participants identified emotional expressions by pushing a key corresponding to one of six labels. In Experiment 2, participants detected emotional expressions by pushing a key corresponding to whether an expression was emotional or not. In line with predictions, fearful facial expressions exhibited a left hemisphere advantage during the identification task but not during the detection task. Contrary to predictions, sad expressions exhibited a left hemisphere advantage during both the identification and detection tasks. In addition, happy facial expressions exhibited a left hemisphere advantage during the detection task but not during the identification task. Only angry facial expressions exhibited a right hemisphere advantage, and this was observed only when data from both experiments were combined. Together, the results highlight the influence of task demands on hemisphere asymmetries in facial expression processing and suggest a greater role for the left hemisphere in processing negative expressions than previous theories predict.

2018, Vol 24(7), pp. 673–683
Author(s): Frédérike Carrier-Toutant, Samuel Guay, Christelle Beaulieu, Édith Léveillé, Alexandre Turcotte-Giroux, ...

Objectives: Concussions affect the processing of emotional stimuli. This study aimed to investigate how sex interacts with concussion effects on early event-related brain potential (ERP) measures (P1, N1) of emotional facial expression (EFE) processing in asymptomatic, multi-concussed athletes during an EFE identification task. Methods: Forty control athletes (20 females and 20 males) and 43 multi-concussed athletes (22 females and 21 males), recruited more than 3 months after their last concussion, were tested. Participants completed the Beck Depression Inventory II, the Beck Anxiety Inventory, the Post-Concussion Symptom Scale, and an Emotional Facial Expression Identification Task. Pictures of male and female faces expressing neutral, angry, and happy emotions were presented in random order, and the depicted emotion had to be identified as quickly as possible during EEG acquisition. Results: Relative to controls, concussed athletes of both sexes exhibited a significant suppression of P1 amplitude recorded over the dominant right hemisphere while performing the identification task. The study also revealed a sex-specific suppression of N1 amplitude after concussion, which affected male athletes. Conclusions: These findings suggest that repeated concussions alter the typical pattern of right-hemisphere response dominance to EFE in early stages of EFE processing, and that the neurophysiological mechanisms underlying the processing of emotional stimuli are affected differently in males and females. (JINS, 2018, 24, 1–11)
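For readers unfamiliar with how ERP component amplitudes such as P1 and N1 are quantified, the sketch below illustrates one common approach: baseline-correct each trial, average trials into the ERP, and take the mean amplitude in a fixed latency window. This is a minimal illustration in Python with assumed array shapes and window values (e.g., P1 at roughly 80–130 ms), not the authors' actual pipeline.

```python
import numpy as np

def component_amplitude(epochs, times, window, baseline=(-0.1, 0.0)):
    """Mean ERP amplitude in a latency window for one electrode.

    epochs   : (n_trials, n_times) array of single-trial voltages (microvolts).
    times    : (n_times,) array of sample times in seconds (0 = stimulus onset).
    window   : (start, end) of the component window in seconds.
    baseline : (start, end) of the pre-stimulus baseline in seconds.
    """
    # Subtract each trial's mean pre-stimulus voltage, then average trials.
    base = (times >= baseline[0]) & (times < baseline[1])
    erp = (epochs - epochs[:, base].mean(axis=1, keepdims=True)).mean(axis=0)
    # Mean amplitude inside the component window (e.g., 0.08-0.13 s for P1).
    win = (times >= window[0]) & (times <= window[1])
    return erp[win].mean()
```

A suppressed P1, as reported here, would appear as a smaller value of this measure over right posterior electrodes in concussed athletes than in controls.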


Author(s): Maida Koso-Drljević, Meri Miličević

The aim of the study was to test two hypotheses about the lateralization of the processing of emotional facial expressions, the right-hemisphere dominance hypothesis and the valence hypothesis, and to examine the influence of the gender of the presented stimulus (chimera) and of depression as an emotional state of the participants. The sample consisted of 83 female students with an average age of 20 years. Participants completed a computerized emotional facial expression recognition task and then the depression subscale of the DASS-21. The results partially confirmed the valence hypothesis for the dependent variable response accuracy. Participants recognized sadness more accurately than happiness when it was presented on the left side of the face, which is consistent with the valence hypothesis, according to which the right hemisphere is responsible for recognizing negative emotions. However, on the right side of the face, participants recognized sadness and happiness equally accurately, which is not consistent with the valence hypothesis. The main effect of chimera gender was statistically significant for response accuracy: recognition accuracy was higher for male chimeras than for female chimeras. For the dependent variable reaction time, a statistically significant negative correlation was obtained between scores on the depression subscale and responses to both sides of the face (left and right). The higher the score on the depression subscale, the slower (longer) the reaction time to the presented chimera, on both the left and the right.


2018, Vol 11(2), pp. 16–33
Author(s): A.V. Zhegallo

The study investigates the recognition of emotional facial expressions presented in peripheral vision, with exposure times shorter than the latency of a saccade toward the exposed image. The study showed that recognition under peripheral presentation reproduces the characteristic patterns of incorrect-response choices. Mutual misidentification is common among the facial expressions of fear, anger, and surprise. When recognition conditions worsen, the expressions of calmness and grief join this set of mutually confused expressions. The identification of happiness deserves special attention: it can be mistaken for other facial expressions, but other expressions are never recognized as happiness. Individual recognition accuracy varied from 0.29 to 0.80. A sufficient condition for high recognition accuracy was recognizing the facial expression using peripheral vision alone, without making a saccade toward the exposed face image.


Perception, 2020, Vol 49(8), pp. 822–834
Author(s): Vivian Lee, Natasha Da Silva, M. D. Rutherford

Adults perceive a continuum of emotional facial expressions categorically rather than continuously. Categorical perception is thought to be adaptive and functional, allowing for inferences that inform behavior. To date, studies have demonstrated categorical perception of some emotional facial expressions in infants. However, a recent study reported that 12-month-old infants do not perceive facial emotional expressions categorically across a happy–sad continuum. In contrast, toddlers at 3.5 years of age appear to perceive expressions along the happy–sad continuum categorically. Using a novel paradigm combining a looking-time discrimination task and an explicit identification task, this study measured 26-month-olds' identification of faces and their ability to discriminate between faces along a happy–sad continuum. Results suggest that 26-month-olds perceive facial expressions categorically along the happy–sad continuum.


Perception, 1996, Vol 25(1 Suppl), p. 28
Author(s): A J Calder, A W Young, D Rowland, D R Gibbenson, B M Hayes, ...

G Rhodes, S E Brennan, and S Carey (1987, Cognitive Psychology, 19, 473–497) and P J Benson and D I Perrett (1991, European Journal of Cognitive Psychology, 3, 105–135) have shown that computer-enhanced (caricatured) representations of familiar faces are named faster and rated as better likenesses than veridical (undistorted) representations. Here we have applied Benson and Perrett's graphic technique to examine subjects' perception of enhanced representations of photographic-quality facial expressions of basic emotions. To enhance a facial expression, the target face is compared to a norm or prototype face and, by exaggerating the differences between the two, a caricatured image is produced; reducing the differences results in an anticaricatured image. In experiment 1 we examined the effect of degree of caricature and type of norm on subjects' ratings of ‘intensity of expression’. Three facial expressions (fear, anger, and sadness) were caricatured at seven levels (−50%, −30%, −15%, 0%, +15%, +30%, and +50%) relative to three different norms: (1) an average norm prepared by blending pictures of six different emotional expressions; (2) a neutral-expression norm; and (3) a different-expression norm (eg anger caricatured relative to a happy expression). Irrespective of norm, the caricatured expressions were rated as significantly more intense than the veridical images. Furthermore, for the average and neutral norm sets, the anticaricatures were rated as significantly less intense. We also examined subjects' reaction times to recognise caricatured (−50%, 0%, and +50%) representations of six emotional facial expressions. The results showed that the caricatured images were identified fastest, followed by the veridical, and then the anticaricatured images. Hence the perception of facial expression, like that of identity, is facilitated by caricaturing; this has important implications for the mental representation of facial expressions.
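The enhancement procedure described above reduces to simple arithmetic once a face is encoded as a set of landmark coordinates. The sketch below is a minimal illustration of that arithmetic, assuming faces and norms are stored as NumPy arrays of corresponding landmarks; Benson and Perrett's full technique also warps the photographic image around the shifted landmarks, which is omitted here.

```python
import numpy as np

def caricature(face, norm, level):
    """Exaggerate (level > 0) or reduce (level < 0) a face's deviation from a norm.

    face, norm : arrays of corresponding landmark coordinates, same shape.
    level      : caricature level, e.g. +0.50 for a +50% caricature,
                 -0.50 for a -50% anticaricature, 0.0 for the veridical face.
    """
    diff = face - norm                  # how the target deviates from the norm
    return norm + (1.0 + level) * diff  # scale the deviation and re-apply it
```

Under this scheme, caricature(face, norm, 0.0) returns the veridical face, and the seven levels used in experiment 1 correspond to level values from −0.50 to +0.50.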


2020
Author(s): Joshua W Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (the type of stimuli used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021, Article 174702182199299
Author(s): Mohamad El Haj, Emin Altintas, Ahmed A Moustafa, Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify participants’ facial expressions (happy, sad, angry, surprised, scared, disgusted, or neutral) as neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


Author(s): Peggy Mason

Tracts descending from motor control centers in the brainstem and cortex target motor interneurons and, in select cases, motoneurons. The mechanisms and constraints of postural control are elaborated, and the effect of body mass on posture is discussed. Feed-forward reflexes that maintain posture during standing and other conditions of self-motion are described. The roles of descending tracts in postural control and in pathological posturing are described. Pyramidal (corticospinal and corticobulbar) and extrapyramidal control of body and face movements is contrasted. Special emphasis is placed on the cortical regions and tracts involved in deliberate control of facial expression; these pathways are contrasted with the mechanisms for generating emotional facial expressions. The signs associated with lesions of either motoneurons or motor control centers are clearly detailed. The mechanisms and presentation of cerebral palsy are described. Finally, an understanding of how premotor cortical regions generate actions is used to introduce apraxia, a disorder of action.


2007, Vol 38(10), pp. 1475–1483
Author(s): K. S. Kendler, L. J. Halberstadt, F. Butera, J. Myers, T. Bouchard, ...

Background: While the role of genetic factors in self-report measures of emotion has been frequently studied, we know little about the degree to which genetic factors influence emotional facial expressions. Method: Twenty-eight pairs of monozygotic (MZ) and dizygotic (DZ) twins from the Minnesota Study of Twins Reared Apart were shown three emotion-inducing films and their facial responses were recorded. These recordings were blindly scored by trained raters. Ranked correlations between twins were calculated controlling for age and sex. Results: Twin pairs were significantly correlated for facial expressions of general positive emotions, happiness, surprise, and anger, but not for general negative emotions, sadness, disgust, or average emotional intensity. MZ pairs (n=18) were more correlated than DZ pairs (n=10) for most, but not all, emotional expressions. Conclusions: Since these twin pairs had minimal contact with each other prior to testing, these results support significant genetic effects on the facial display of at least some human emotions in response to standardized stimuli. The small sample size resulted in estimated twin correlations with very wide confidence intervals.
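The analysis described above (ranked correlations between twins, controlling for age and sex) can be read as a partial Spearman correlation: rank-transform the scores, residualize the ranks on the covariates, and correlate the residuals. Below is a minimal sketch under that reading, with assumed array layouts; the authors' exact procedure may differ.

```python
import numpy as np
from scipy import stats

def partial_spearman(x, y, covariates):
    """Spearman correlation between x and y, controlling for covariates.

    x, y       : (n,) arrays of scores (e.g., rated expression intensity
                 for twin A and twin B of each of n pairs).
    covariates : (n, k) array of controls (e.g., columns for age and sex).
    """
    # Spearman correlation is Pearson correlation computed on ranks.
    rx, ry = stats.rankdata(x), stats.rankdata(y)
    # Residualize each ranked score on the covariates via least squares.
    X = np.column_stack([np.ones(len(x)), covariates])
    res_x = rx - X @ np.linalg.lstsq(X, rx, rcond=None)[0]
    res_y = ry - X @ np.linalg.lstsq(X, ry, rcond=None)[0]
    # Correlate what remains after the covariates are partialled out.
    return stats.pearsonr(res_x, res_y)[0]
```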

