Recognising Facial Expression from Spatially and Temporally Modified Movements

Perception ◽  
10.1068/p3319 ◽  
2003 ◽  
Vol 32 (7) ◽  
pp. 813-826 ◽  
Author(s):  
Frank E Pollick ◽  
Harold Hill ◽  
Andrew Calder ◽  
Helena Paterson

We examined how the recognition of facial emotion was influenced by manipulation of both spatial and temporal properties of 3-D point-light displays of facial motion. We started with the measurement of 3-D position of multiple locations on the face during posed expressions of anger, happiness, sadness, and surprise, and then manipulated the spatial and temporal properties of the measurements to obtain new versions of the movements. In two experiments, we examined recognition of these original and modified facial expressions: in experiment 1, we manipulated the spatial properties of the facial movement, and in experiment 2 we manipulated the temporal properties. The results of experiment 1 showed that exaggeration of facial expressions relative to a fixed neutral expression resulted in enhanced ratings of the intensity of that emotion. The results of experiment 2 showed that changing the duration of an expression had a small effect on ratings of emotional intensity, with a trend for expressions with shorter durations to have lower ratings of intensity. The results are discussed within the context of theories of encoding as related to caricature and emotion.
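
The spatial exaggeration manipulation described here reduces to scaling each marker's displacement from the neutral pose. A minimal NumPy sketch of this caricaturing formula (hypothetical names and array shapes, not the authors' code):

```python
import numpy as np

def exaggerate(frames, neutral, gain=1.5):
    """Scale each 3-D marker's displacement from the neutral face.

    frames:  (T, M, 3) array of T time samples for M face markers
    neutral: (M, 3) array of the same markers in the neutral pose
    gain:    1.0 reproduces the original motion; >1 exaggerates,
             <1 attenuates toward the neutral face (an anti-caricature)
    """
    return neutral + gain * (frames - neutral)

# Example with placeholder data: exaggerate a recorded expression by 50%
T, M = 120, 40                                  # hypothetical frame/marker counts
neutral = np.random.rand(M, 3)                  # placeholder neutral-pose coordinates
anger = neutral + 0.01 * np.random.randn(T, M, 3)
anger_exaggerated = exaggerate(anger, neutral, gain=1.5)
```

A gain above 1 caricatures the expression relative to the fixed neutral reference, which is the manipulation the abstract links to enhanced intensity ratings.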


Perception ◽  
2021 ◽  
pp. 030100662110270 ◽  
Author(s):  
Kennon M. Sheldon ◽  
Ryan Goffredi ◽  
Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower on “a pleasant social smile” and much higher on “a merely neutral expression” compared with unmasked SSs. Essentially, masked SSs became non-smiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than their unmasked versions.



2019 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people's eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95, SD = 1.01 years; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those in the low-anxious primary-CU group showed fewer overall fixations on fearful and painful facial expressions than those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e., eyes or mouth). The findings point to the importance of investigating both accuracy and eye-gaze fixations, since individuals in the primary and secondary groups differed only in how they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.
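
Counting fixations within rectangular areas of interest, as in this study's forehead/eyes/mouth analysis, can be sketched as follows. Coordinates and names are hypothetical; the study's actual AOI definitions are not given here:

```python
import numpy as np

# Hypothetical rectangular areas of interest in screen coordinates:
# (x_min, y_min, x_max, y_max) for forehead, eyes, and mouth.
AOIS = {
    "forehead": (300, 100, 500, 180),
    "eyes":     (300, 180, 500, 260),
    "mouth":    (340, 340, 460, 420),
}

def count_fixations(fixations, aois=AOIS):
    """Count fixations falling inside each AOI.

    fixations: (N, 2) array of fixation centroids (x, y)
    Returns a dict mapping AOI name -> fixation count.
    """
    fixations = np.asarray(fixations)
    counts = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = ((fixations[:, 0] >= x0) & (fixations[:, 0] <= x1) &
                  (fixations[:, 1] >= y0) & (fixations[:, 1] <= y1))
        counts[name] = int(inside.sum())
    return counts
```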



2013 ◽  
Vol 113 (1) ◽  
pp. 199-216 ◽  
Author(s):  
Marcella L. Woud ◽  
Eni S. Becker ◽  
Wolf-Gero Lange ◽  
Mike Rinck

A growing body of evidence shows that the prolonged execution of approach movements towards stimuli and avoidance movements away from them affects their evaluation. However, there has been no systematic investigation of such training effects. Therefore, the present study compared approach-avoidance training effects on evaluations of differently valenced facial expressions: neutral (Experiment 1, N = 85), angry (Experiment 2, N = 87), or smiling (Experiment 3, N = 89). The face stimuli were shown on a computer screen and, by means of a joystick, participants pulled half of the faces closer (a positive approach movement) and pushed the other half away (a negative avoidance movement). Only implicit evaluations of neutral expressions were affected by the training procedure. The boundary conditions of such approach-avoidance training effects are discussed.



2009 ◽  
Vol 364 (1535) ◽  
pp. 3497-3504 ◽  
Author(s):  
Ursula Hess ◽  
Reginald B. Adams ◽  
Robert E. Kleck

Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. Here we provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.



2004 ◽  
Vol 15 (1-2) ◽  
pp. 23-34 ◽  
Author(s):  
Manas K. Mandal ◽  
Nalini Ambady

Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals.



2021 ◽  
Author(s):  
Efe Soyman ◽  
Rune Bruls ◽  
Kalliopi Ioumpa ◽  
Laura Müller-Pinzler ◽  
Selene Gallo ◽  
...  

Based on neuroimaging data, the insula is considered important for people to empathize with the pain of others, whether that pain is perceived through facial expressions or the sight of limbs in painful situations. Here we present the first report of intracranial electroencephalographic (iEEG) recordings from the insulae, collected while 7 presurgical epilepsy patients rated the intensity of a woman's painful experiences viewed in movies. In two separate conditions, pain was deduced from seeing facial expressions or a hand being slapped by a belt. We found that broadband activity in the 20-190 Hz range correlated with the trial-by-trial perceived intensity in the insula for both types of stimuli. Using microwires at the tips of a selection of the electrodes, we additionally isolated 8 insular neurons whose spiking correlated with perceived intensity. Within the insula, we found a patchwork of locations with differing selectivities within our stimulus set: some represented intensity only for facial expressions, others only for the hand being hit, and others for both. That some simultaneously recorded locations coded intensity only for faces and others only for the hand suggests that insular activity while witnessing the pain of others cannot be entirely reduced to a univariate salience representation. Psychophysics and the temporal properties of our signals indicate that the timing of responses encoding intensity for the sight of the hand being hit is best explained by kinematic information, whereas the timing of those encoding intensity for the facial expressions is best explained by shape information in the face. In particular, the furrowing of the eyebrows and the narrowing of the eyes of the protagonist in the movies suffice to predict both the ratings and the timing of the neuronal responses to the facial expressions. Comparing the broadband activity in the iEEG signal with spiking activity and an fMRI experiment with similar stimuli revealed a consistent spatial organization for the representation of intensity from our hand stimuli, with stronger intensity representation more anteriorly and around neurons with intensity coding. In contrast, for the facial expressions, the activity at the three levels of measurement did not coincide, suggesting a more disorganized representation. Together, our intracranial recordings indicate that the insula encodes, in a partially intermixed layout, both static and dynamic cues from different body parts that reflect the intensity of pain experienced by others.
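
The broadband-power analysis described here, correlating 20-190 Hz activity with trial-by-trial intensity ratings, can be outlined as follows. This is a generic sketch under simple assumptions (Butterworth band-pass, Hilbert envelope, per-trial mean power), not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import spearmanr

def broadband_envelope(x, fs, lo=20.0, hi=190.0, order=4):
    """Band-pass one iEEG trace and return its amplitude envelope.

    fs is the sampling rate in Hz and must exceed 2 * hi.
    """
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def intensity_correlation(trials, ratings, fs):
    """Rank correlation between mean broadband power per trial and
    the trial-by-trial perceived-intensity ratings.

    trials:  (n_trials, n_samples) array of single-trial iEEG
    ratings: (n_trials,) array of intensity ratings
    """
    power = np.array([broadband_envelope(t, fs).mean() for t in trials])
    rho, p = spearmanr(power, ratings)
    return rho, p
```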



Author(s):  
Connor T. Keating ◽  
Dagmar S. Fraser ◽  
Sophie Sowden ◽  
Jennifer L. Cook

To date, studies have not established whether autistic and non-autistic individuals differ in emotion recognition from facial motion cues when matched in terms of alexithymia. Here, autistic and non-autistic adults (N = 60), matched on age, gender, non-verbal reasoning ability, and alexithymia, completed an emotion recognition task which employed dynamic point-light displays of emotional facial expressions manipulated in terms of speed and spatial exaggeration. Autistic participants exhibited significantly lower accuracy for angry, but not happy or sad, facial motion with unmanipulated speed and spatial exaggeration. Autistic, and not alexithymic, traits were predictive of accuracy for angry facial motion with unmanipulated speed and spatial exaggeration. Alexithymic traits, in contrast, were predictive of the magnitude of both correct and incorrect emotion ratings.
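
The speed manipulation applied to such point-light displays can be approximated by resampling each marker trajectory in time. A minimal sketch (hypothetical array shapes, not the authors' stimulus code):

```python
import numpy as np

def change_duration(frames, factor):
    """Resample a point-light trajectory to change its duration.

    frames: (T, M, 3) array of T time samples for M markers
    factor: new_duration / original_duration; 0.5 plays the
            expression twice as fast, 2.0 twice as slow
    """
    T = frames.shape[0]
    new_T = max(2, int(round(T * factor)))
    old_t = np.linspace(0.0, 1.0, T)
    new_t = np.linspace(0.0, 1.0, new_T)
    flat = frames.reshape(T, -1)
    out = np.empty((new_T, flat.shape[1]))
    for j in range(flat.shape[1]):
        # Linear interpolation of each coordinate's time course
        out[:, j] = np.interp(new_t, old_t, flat[:, j])
    return out.reshape(new_T, *frames.shape[1:])
```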



2008 ◽  
Vol 25 (4) ◽  
pp. 603-609 ◽  
Author(s):  
MURIEL BOUCART ◽  
JEAN-FRANÇOIS DINON ◽  
PASCAL DESPRETZ ◽  
THOMAS DESMETTRE ◽  
KATRINE HLADIUK ◽  
...  

Age-related macular degeneration (AMD) is a major cause of visual impairment in people older than 50 years in Western countries, affecting essential tasks such as reading and face recognition. Here we investigated the mechanisms underlying the deficit in recognition of facial expressions in an AMD population with low vision. Pictures of faces displaying different emotions, with the mouth open or closed, were centrally displayed for 300 ms. Participants with AMD and low acuity (mean 20/200) and normally sighted, age-matched controls performed one of two emotion tasks: detecting whether a face had an expression or not (the expressive/non-expressive (EXNEX) task) or categorizing the facial emotion as happy, angry, or neutral (the categorization-of-expression (CATEX) task). Previous research has shown that healthy observers rely mainly on high spatial frequencies in the EXNEX task, whereas performance in the CATEX task is preferentially based on low spatial frequencies. Owing to impaired processing of high spatial frequencies in central vision, we expected and observed that AMD participants failed at deciding whether a face was expressive or not, but categorized the facial emotion (e.g., happy, angry, neutral) normally. Moreover, we observed that AMD participants mostly identified emotions using the lower part of the face (mouth). Accuracy did not differ between the two tasks for normally sighted observers. The results indicate that AMD participants are able to identify facial emotion but must base their decision mainly on low spatial frequencies, as they lack the perception of finer details.
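
The low- versus high-spatial-frequency distinction at the heart of this study can be illustrated by splitting an image into coarse and fine components. A Gaussian filter is one common way to do this; the study's own filtering method is not specified here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=4.0):
    """Split a grayscale face image into low and high spatial
    frequency components.

    image: 2-D array of pixel intensities
    sigma: Gaussian width in pixels; larger sigma pushes more
           detail into the high-frequency component
    """
    img = np.asarray(image, dtype=float)
    low = gaussian_filter(img, sigma)   # coarse structure (blobs, shading)
    high = img - low                    # residual fine detail (edges, wrinkles)
    return low, high
```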



2021 ◽  
Vol 12 ◽  
Author(s):  
Juliana Gioia Negrão ◽  
Ana Alexandra Caldas Osorio ◽  
Rinaldo Focaccia Siciliano ◽  
Vivian Renne Gerber Lederman ◽  
Elisa Harumi Kozasa ◽  
...  

Background: This study developed a photo and video database of 4-to-6-year-olds expressing the seven induced and posed universal emotions and a neutral expression. Children participated in photo and video sessions designed to elicit the emotions, and the resulting images were further assessed by independent judges in two rounds.

Methods: In the first round, two independent judges (1 and 2), experts in the Facial Action Coding System, analysed 3,668 facial expression stimuli from 132 children. Both judges reached 100% agreement on 1,985 stimuli (from 124 children), which were then selected for a second round of analysis by judges 3 and 4.

Results: The first round retained 1,985 stimuli (51% of the photographs) from 124 participants (55% girls). A kappa index of 0.70 and an accuracy of 73% between experts were observed. Accuracy was lower for emotional expressions by 4-year-olds than by 6-year-olds. Happiness, disgust, and contempt had the highest agreement. After a sub-analysis by all four judges, 100% agreement was reached for 1,381 stimuli, which compose the ChildEFES database, with 124 participants (59% girls) and 51% induced photographs. The numbers of stimuli per emotion were: 87 for neutrality, 363 for happiness, 170 for disgust, 104 for surprise, 152 for fear, 144 for sadness, 157 for anger, and 183 for contempt.

Conclusions: The findings show that this photo and video database can facilitate research on the mechanisms involved in early childhood recognition of facial emotions, contributing to the understanding of the facial emotion recognition deficits that characterise several neurodevelopmental and psychiatric disorders.
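
The inter-rater agreement reported above (kappa = 0.70, 73% accuracy) uses Cohen's kappa, which corrects raw agreement for chance. A generic sketch, not the authors' analysis code:

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Chance agreement: sum over categories of the product of the
    # two raters' marginal frequencies for that category.
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Example: two judges labelling the same five stimuli
k = cohens_kappa(["happy", "fear", "anger", "happy", "disgust"],
                 ["happy", "fear", "happy", "happy", "disgust"])
```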



2020 ◽  
Author(s):  
Fernando Ferreira-Santos ◽  
Mariana R. Pereira ◽  
Tiago O. Paiva ◽  
Pedro R. Almeida ◽  
Eva C. Martins ◽  
...  

The behavioral and electrophysiological study of the emotional intensity of facial expressions of emotions has relied on image processing techniques termed ‘morphing’ to generate realistic facial stimuli in which emotional intensity can be manipulated. This is achieved by blending neutral and emotional facial displays and treating the percent of morphing between the two stimuli as an objective measure of emotional intensity. Here we argue that the percentage of morphing between stimuli does not provide an objective measure of emotional intensity and present supporting evidence from affective ratings and neural (event-related potential) responses. We show that 50% morphs created from high or moderate arousal stimuli differ in subjective and neural responses in a sensible way: 50% morphs are perceived as having approximately half of the emotional intensity of the original stimuli, but if the original stimuli differed in emotional intensity to begin with, then so will the morphs. We suggest a re-examination of previous studies that used percentage of morphing as a measure of emotional intensity and highlight the value of more careful experimental control of emotional stimuli and inclusion of proper manipulation checks.
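
The morphing procedure the authors critique treats the blend percentage as an intensity measure. A pixel-wise linear blend conveys the intuition (real morphing software also warps facial geometry before blending; this sketch omits that step):

```python
import numpy as np

def linear_morph(neutral, emotional, percent):
    """Pixel-wise linear blend between a neutral and an emotional
    face image at a given morph percentage (0 = neutral, 100 = full).
    """
    p = percent / 100.0
    return (1.0 - p) * np.asarray(neutral, dtype=float) \
           + p * np.asarray(emotional, dtype=float)

# The authors' point: a 50% morph of a high-arousal source inherits
# half of a larger displacement than a 50% morph of a moderate-arousal
# source, so equal percentages do not imply equal emotional intensity.
```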


