Anti-expressions: Artificial control stimuli for the visual properties of emotional facial expressions

2009 ◽  
Vol 37 (4) ◽  
pp. 491-501 ◽  
Author(s):  
Wataru Sato ◽  
Sakiko Yoshikawa

The perceptual and cognitive processing of emotional facial expressions is more efficient than that of neutral facial expressions. To investigate whether this advantage can be attributed to the emotion expressed or to the visual properties of the expressions themselves, we used computer morphing to develop a form of control stimuli. These "anti-expressions" displaced the features of emotional facial expressions in the direction opposite to neutral expressions, by amounts equal to the differences between the emotional and neutral expressions. To examine whether anti-expressions can serve as emotionally neutral faces, 35 participants categorized normal and anti-expressions of six basic emotions and rated them on the valence and arousal dimensions. The results indicate that the anti-expressions of anger, disgust, fear, and happiness were assessed as neutral and can therefore be used as control stimuli that match the visual properties of emotional facial expressions.
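The abstract's description of anti-expressions amounts to a simple vector operation on facial features: displace each feature past the neutral face by the same distance that separates the emotional face from neutral, i.e. anti = neutral − (emotional − neutral) = 2·neutral − emotional. A minimal sketch, assuming a hypothetical landmark-coordinate representation (the study itself worked on morphed images, not raw coordinates):

```python
def anti_expression(neutral, emotional):
    """Displace each facial landmark away from the emotional expression
    by the same amount that separates it from the neutral face:
        anti = neutral - (emotional - neutral) = 2*neutral - emotional.
    Faces are given as lists of (x, y) landmark coordinates -- a
    hypothetical representation for illustration only.
    """
    return [(2 * nx - ex, 2 * ny - ey)
            for (nx, ny), (ex, ey) in zip(neutral, emotional)]

# Example: a mouth corner raised 2 units by a smile is lowered
# 2 units below neutral in the anti-happy face.
neutral_face = [(10.0, 20.0)]
happy_face = [(10.0, 22.0)]
print(anti_expression(neutral_face, happy_face))  # [(10.0, 18.0)]
```

The resulting face is exactly as far from neutral as the emotional original, so low-level visual properties (amount of feature displacement) are matched while the emotional signal is reversed.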

2016 ◽  
Author(s):  
Martial Mermillod ◽  
Delphine Grynberg ◽  
Magdalena Rychlowska ◽  
Nicolas Vermeulen ◽  
Paula M. Niedenthal ◽  
...  

Abstract In the past decade, several studies have suggested that higher-order factors can influence the perceptual processing of emotional stimuli. In this study, we aimed to evaluate the effect of congruent vs. incongruent social information (positive, negative, or no information about the character of the target) on subjective (perceived and felt valence and arousal), physiological (facial mimicry), and neural (P100 and N170) responses to dynamic emotional facial expressions (EFE) that varied from neutral to one of the six basic emotions. Across three studies, the results showed (1) reduced valence and arousal evaluations of EFE associated with incongruent social information (Study 1), and (2) increased electromyographic responses (Study 2) and significant modulation of the P100 and N170 components (Study 3) when EFE were associated with social information (positive or negative) versus no information. These studies revealed that positive or negative social information reduced subjective responses to incongruent EFE and produced a similar neural and physiological boost of the early perceptual processing of EFE irrespective of congruency. In conclusion, this study suggests that a social context (positive or negative) heightens the need to remain alert to subsequent cues.


2005 ◽  
Vol 100 (1) ◽  
pp. 129-134 ◽  
Author(s):  
Michela Balconi

The present research compared the semantic information processing of linguistic stimuli with the semantic elaboration of nonlinguistic facial stimuli. To explore event-related potentials (ERPs) related to decoding facial expressions and the effect of the semantic valence of the stimulus, we analyzed data for 20 normal subjects (M age = 23.6 yr., SD = 0.2). Faces with three basic emotional expressions (fear, happiness, and sadness, from the 1976 Ekman and Friesen database), three semantically anomalous expressions (with respect to their emotional content), and neutral stimuli (faces without emotional content) were presented in random order. Differences in ERP peak amplitude were observed later for anomalous expressions than for congruous expressions: the emotionally anomalous faces elicited a larger negative peak at about 360 msec., distributed mainly over posterior sites. The observed electrophysiological activity may represent specific cognitive processing underlying the comprehension of facial expressions in the detection of semantic anomaly. The evidence is in favour of the comparability of this negative deflection with the N400 ERP effect elicited by linguistic anomalies.


Author(s):  
Izabela Krejtz ◽  
Krzysztof Krejtz ◽  
Katarzyna Wisiecka ◽  
Marta Abramczyk ◽  
Michał Olszanowski ◽  
...  

Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.


2021 ◽  
Vol 12 ◽  
Author(s):  
Agnes Bohne ◽  
Dag Nordahl ◽  
Åsne A. W. Lindahl ◽  
Pål Ulvenes ◽  
Catharina E. A. Wang ◽  
...  

Processing of emotional facial expressions is of great importance in interpersonal relationships. Aberrant engagement with facial expressions, particularly heightened engagement with sad faces, loss of engagement with happy faces, and enhanced memory of sadness, has been found in depression. Since most studies used adult faces, we here examined whether such biases also occur in the processing of infant faces by individuals with depression or depressive symptoms. In study 1, we recruited 25 inpatient women with major depression and 25 matched controls. In study 2, we extracted a sample of expectant parents from the NorBaby study, of whom 29 reported elevated levels of depressive symptoms and 29 were matched controls. In both studies, we assessed attentional bias with a dot-probe task using happy, sad, and neutral infant faces, and facial memory bias with a recognition task using happy, sad, angry, afraid, surprised, disgusted, and neutral infant and adult faces. Participants also completed the Ruminative Responses Scale and the Beck Depression Inventory-II. In study 1, we found no group difference in either attention to or memory accuracy for emotional infant faces; neither attention nor recognition was associated with rumination. In study 2, we found that the group with depressive symptoms disengaged more slowly than healthy controls from sad infant faces, and this was related to rumination. The results emphasize the importance of emotionally self-relevant material when examining cognitive processing in depression. Together, these studies demonstrate that a mood-congruent attentional bias to infant faces is present in expectant parents with depressive symptoms, but not in inpatients with Major Depressive Disorder who do not have young children.


2011 ◽  
Vol 26 (S2) ◽  
pp. 707-707
Author(s):  
X. Luo ◽  
R. Chen ◽  
W. Guo ◽  
H. Zhang ◽  
R. Zhou

Introduction: Most previous research indicates that impaired inhibition of emotional stimuli may be an important cognitive characteristic of depressed individuals. Antisaccade paradigms, composed of a prosaccade task (PS) and an antisaccade task (AS), are often used to investigate response inhibition.
Aims: This study aimed to investigate volitional inhibition of emotional stimuli in undergraduates with depressed mood (DM).
Methods: Subjects were grouped into 21 DM and 25 non-depressed undergraduates (ND) on the basis of the Beck Depression Inventory and the Self-rating Depression Scale. Antisaccade tasks were conducted to examine inhibition abilities by varying the volitional arousal level of the task (low vs. high), with happy, neutral, and sad facial expressions as stimuli.
Results: At the low volition level in the AS condition, correct saccade latencies in the DM group were significantly slower than in the ND group, and the DM group had reliably higher direction error rates in response to emotional facial expressions, especially sad expressions. All of these differences disappeared in the high-volition antisaccade tasks. Amplitude errors were not influenced by emotional facial expressions, and there were no group differences across tasks.
Conclusions: These results indicate that the DM group showed slower cognitive processing and impaired inhibition of emotional faces compared with the ND group, particularly for sad faces, but that these abilities recovered at the high volitional arousal level. This suggests that training depressed individuals' volitional level of inhibition could be an effective strategy to alleviate depression.


2019 ◽  
Vol 33 (4) ◽  
pp. 254-266
Author(s):  
Jonathan W. L. Kettle ◽  
Nicholas B. Allen

Abstract. Patterns of facial reactivity and attentional allocation to emotional facial expressions, and how these are moderated by gaze direction, are not clearly established. In a sample of undergraduate university students aged 17 to 22 years (76% female), corrugator and zygomatic reactivity, measured by facial electromyography, and attentional allocation, measured by the startle reflex and the startle-elicited N100, were examined while participants viewed happy, neutral, angry, and fearful facial expressions presented at either 0° or 30° gaze. Results indicated the typically observed facial mimicry to happy faces but, unexpectedly, "smiling" facial responses to fearful and, to a lesser extent, angry faces. This facial reactivity was not influenced by gaze direction. Furthermore, emotional facial expressions did not elicit increased attentional allocation. Rather, happy and fearful faces with direct (0°) gaze elicited increased controlled attentional allocation, and averted (30°) gaze faces, regardless of emotional expression, elicited preferential early cortical processing. These findings suggest typical facial mimicry to happy faces but unexpected facial reactivity to angry and fearful faces, perhaps reflecting an attempt to regulate social bonds during threat perception. They also suggest a divergence between controlled and preferential early cortical attentional processing for direct compared with averted gaze. These findings relate to young, mostly female adults attending university. The experiment should be repeated with a larger sample drawn from the general community, with a broader age range and gender balance, and with a stimulus set with validated subjective valence and arousal ratings; this would reduce Type II error and help establish normative patterns of facial reactivity and attentional processing of emotional facial expressions under different gaze directions.


2012 ◽  
Vol 110 (1) ◽  
pp. 338-350 ◽  
Author(s):  
Mariano Chóliz ◽  
Enrique G. Fernández-Abascal

Recognition of emotional facial expressions is a central topic in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust: 30 pictures (5 per emotion) were displayed to 96 participants, and recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before the pictures of facial expressions were presented. The results showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.


2020 ◽  
Vol 7 (3) ◽  
pp. 191715
Author(s):  
Akie Saito ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Previous experimental psychology studies based on visual search paradigms have reported that young adults detect emotional facial expressions more rapidly than emotionally neutral expressions. However, it remains unclear whether this holds in older adults. We investigated this by comparing the abilities of young and older adults to detect emotional and neutral facial expressions in a visual search task, using anti-expressions to control the visual properties of the faces presented. Both age groups detected normal angry faces more rapidly than anti-angry faces. However, whereas young adults detected normal happy faces more rapidly than anti-happy faces, older adults did not. This suggests that older adults may be less efficient at detecting, or at orienting attention towards, smiling faces appearing in the periphery.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 28-28
Author(s):  
A J Calder ◽  
A W Young ◽  
D Rowland ◽  
D R Gibbenson ◽  
B M Hayes ◽  
...  

G Rhodes, S E Brennan, and S Carey (1987 Cognitive Psychology 19 473 – 497) and P J Benson and D I Perrett (1991 European Journal of Cognitive Psychology 3 105 – 135) have shown that computer-enhanced (caricatured) representations of familiar faces are named faster and rated as better likenesses than veridical (undistorted) representations. Here we have applied Benson and Perrett's graphic technique to examine subjects' perception of enhanced representations of photographic-quality facial expressions of basic emotions. To enhance a facial expression, the target face is compared to a norm or prototype face, and, by exaggerating the differences between the two, a caricatured image is produced; reducing the differences results in an anticaricatured image. In experiment 1 we examined the effect of degree of caricature and type of norm on subjects' ratings for ‘intensity of expression’. Three facial expressions (fear, anger, and sadness) were caricatured at seven levels (−50%, −30%, −15%, 0%, +15%, +30%, and +50%) relative to three different norms: (1) an average norm prepared by blending pictures of six different emotional expressions; (2) a neutral expression norm; and (3) a different expression norm (eg anger caricatured relative to a happy expression). Irrespective of norm, the caricatured expressions were rated as significantly more intense than the veridical images. Furthermore, for the average and neutral norm sets, the anticaricatures were rated as significantly less intense. We also examined subjects' reaction times to recognise caricatured (−50%, 0%, and +50%) representations of six emotional facial expressions. The results showed that the caricatured images were identified fastest, followed by the veridical, and then the anticaricatured images. Hence the perception of facial expression and identity is facilitated by caricaturing; this has important implications for the mental representation of facial expressions.
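The caricaturing procedure described above can be expressed as one linear operation: exaggerate or reduce a target face's deviation from a norm by a signed percentage, result = norm + (1 + level)·(target − norm). A minimal sketch, assuming a hypothetical landmark-coordinate representation rather than the photographic-quality image warping actually used:

```python
def caricature(target, norm, level):
    """Exaggerate (level > 0) or reduce (level < 0) the difference
    between a target face and a norm face:
        result = norm + (1 + level) * (target - norm).
    level = 0.0 reproduces the veridical face; +0.5 is a +50% caricature
    and -0.5 a -50% anticaricature. Faces are lists of (x, y) landmark
    coordinates -- an illustrative stand-in for the graphic technique.
    """
    return [(nx + (1 + level) * (tx - nx), ny + (1 + level) * (ty - ny))
            for (tx, ty), (nx, ny) in zip(target, norm)]

# A landmark 4 units above the corresponding norm landmark:
norm_face = [(0.0, 15.0)]
fear_face = [(0.0, 19.0)]
print(caricature(fear_face, norm_face, +0.5))  # [(0.0, 21.0)] exaggerated
print(caricature(fear_face, norm_face, -0.5))  # [(0.0, 17.0)] reduced
print(caricature(fear_face, norm_face, 0.0))   # [(0.0, 19.0)] veridical
```

Note that the choice of norm (average, neutral, or a different expression) changes which deviations get exaggerated, which is why the experiment compared the three norm types.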


2019 ◽  
Vol 12 (4) ◽  
pp. 235-250 ◽  
Author(s):  
Ashley L. Ruba ◽  
Betty M. Repacholi

An ongoing debate in affective science concerns whether certain discrete, “basic” emotions have evolutionarily based signals (facial expressions) that are easily, universally, and (perhaps) innately identified. Studies with preverbal infants (younger than 24 months) have the potential to shed light on this debate. This review summarizes what is known about preverbal infants’ understanding of discrete emotional facial expressions. Overall, while many studies suggest that preverbal infants differentiate positive and negative facial expressions, few studies have tested whether infants understand discrete emotions (e.g., anger vs. disgust). Moreover, results vary greatly based on methodological factors. This review also (a) discusses how language may influence the development of emotion understanding, and (b) proposes a new developmental hypothesis for infants’ discrete emotion understanding.

