Age Similarity in Emotion Perception Based on Eye Gaze Manipulation

2020, Vol 4 (Supplement_1), pp. 455-456
Author(s): Yosra Abualula, Eric Allard

Abstract: The purpose of this study was to examine age differences in emotion perception as a function of emotion type and gaze direction. Older and younger adult participants viewed facial images showing happiness, sadness, fear, anger, and disgust while their eye movements were tracked. The stimuli included a manipulation of eye gaze: half of the facial expressions had a direct eye gaze, while the other half showed an averted gaze. A 2 (age) x 2 (gaze) x 5 (emotion) repeated-measures ANOVA was used to analyze emotion perception scores and fixations to the eye and mouth regions of the face. The gaze manipulation yielded more age similarities than differences in emotion perception. Overall, we did not detect age differences in recognition ability, although certain emotion categories differentially impacted perception. Notably, an averted gaze improved performance for fear and disgust faces, and participants spent more time fixating on the eye regions of sad facial expressions. We discuss how naturalistic manipulations of various facial features could impact age-related differences (or similarities) in emotion perception.

2019, Vol 29 (10), pp. 1441-1451
Author(s): Melina Nicole Kyranides, Kostas A. Fanti, Maria Petridou, Eva R. Kimonis

Abstract: Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people's eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95 years, SD = 1.01; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. We assessed accuracy and the number of fixations on areas of interest (forehead, eyes, and mouth) while participants viewed six dynamic emotions; a visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those in the low-anxious primary-CU group showed fewer overall fixations to fearful and painful facial expressions than those in the high-anxious secondary-CU group, a difference that was not specific to a region of the face (i.e., eyes or mouth). The findings point to the importance of investigating both accuracy and eye-gaze fixations, since the primary and secondary groups were differentiated only in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.


2021, Vol 11 (1)
Author(s): Xinyuan Zhang, Mario Dalmaso, Luigi Castelli, Shimin Fu, Giovanni Galfano

Abstract: The averted gaze of others triggers reflexive attentional orienting in the corresponding direction, a phenomenon that can be modulated by many social factors. Here, we used eye tracking to investigate the role of ethnic group membership in a cross-cultural oculomotor interference study. Chinese and Italian participants were required to perform a saccade whose direction might be either congruent or incongruent with the averted gaze of task-irrelevant faces belonging to Asian and White individuals. The results showed that, for Chinese participants, White faces elicited larger oculomotor interference than Asian faces, whereas Italian participants exhibited a similar oculomotor interference effect for both Asian and White faces. Hence, Chinese participants found it more difficult to suppress eye-gaze processing of White than of Asian faces. The findings provide converging evidence that social attention can be modulated by social factors characterizing both the face stimulus and the participants. The data are discussed with reference to possible cross-cultural differences in perceived social status.


1984, Vol 1, pp. 29-35
Author(s): Michael P. O'Driscoll, Barry L. Richardson, Dianne B. Wuillemin

Thirty photographs depicting diverse emotional expressions were shown to a sample of Melanesian students assigned to either a face-plus-context or a face-alone condition. Significant differences between the two groups were obtained in a substantial proportion of cases on Schlosberg's Pleasant-Unpleasant and Attention-Rejection scales, and the emotional expressions were judged to be appropriate to the context. These findings support the suggestion that the presence or absence of context is an important variable in the judgement of emotional expression and lend credence to the universal process theory.

Research on perception of emotions has consistently illustrated that observers can accurately judge emotions in facial expressions (Ekman, Friesen, & Ellsworth, 1972; Izard, 1971) and that the face conveys important information about the emotions being experienced (Ekman & Oster, 1979). In recent years, however, a question of interest has been the relative contributions of facial cues and contextual information to observers' overall judgements. This issue is important for theoretical and methodological reasons. From a theoretical viewpoint, unravelling the determinants of emotion perception would enhance our understanding of the processes of person perception and impression formation and would provide a framework for research on interpersonal communication. On methodological grounds, the researcher's approach to the face-versus-context issue can influence the type of research procedures used to analyse emotion perception. Specifically, much research in this field has been criticized for its use of posed emotional expressions as stimuli for observers to evaluate. Spignesi and Shor (1981) noted that only one of approximately 25 experimental studies had utilized facial expressions occurring spontaneously in real-life situations.


2021, Vol 12
Author(s): Sabrina N. Grondhuis, Angela Jimmy, Carolina Teague, Nicolas M. Brunet

Previous studies have found that it is more difficult to identify an emotional expression displayed by an older face than by a younger one. It is unknown whether this is caused by age-related changes such as wrinkles and folds interfering with perception, or by the aging of facial muscles, which may reduce the ability of older individuals to display an interpretable expression. To discriminate between these two possibilities, participants attempted to identify facial expressions under different conditions. To separate the two variables (wrinkles/folds vs. facial muscles), we used Generative Adversarial Networks to make faces look older or younger. Based on behavioral data collected from 28 individuals, our model predicts that the odds of correctly identifying the expressed emotion of a face are reduced by 16.2% when younger faces (Condition 1) are artificially aged (Condition 3). Replacing the younger faces with naturally old-looking faces (Condition 2), however, results in an even stronger effect (odds of correct identification decreased by 50.9%). Counterintuitively, making old faces (Condition 2) look young (Condition 4) results in the largest negative effect (odds of correct identification decreased by 74.8% compared with natural young faces). Taken together, these results suggest that both age-related decline in the facial muscles' ability to express emotion and age-related physical changes in the face explain why it is difficult to recognize facial expressions from older faces; the effect of the former, however, is much stronger than that of the latter. Facial muscle exercises, therefore, might improve the capacity of the elderly to convey facial emotional expressions.


Author(s): Katie Hoemann, Ishabel M. Vicaria, Maria Gendron, Jennifer Tehan Stanley

Abstract
Objectives: Previous research has uncovered age-related differences in emotion perception. To date, studies have relied heavily on forced-choice methods that stipulate possible responses. These constrained methods limit discovery of variation in emotion perception, which may be due to subtle differences in underlying concepts for emotion.
Method: We employed a face-sort paradigm in which young (N = 42) and older adult (N = 43) participants were given 120 photographs portraying six target emotions (anger, disgust, fear, happiness, sadness, and neutral) and were instructed to create and label piles, such that the individuals in each pile were feeling the same way.
Results: There were no age differences in the number of piles created, nor in how well labels mapped onto the target emotion categories. However, older adults demonstrated lower consistency in sorting, such that fewer photographs in a given pile belonged to the same target emotion category. At the same time, older adults labeled piles using emotion words that are acquired later in development and thus considered more semantically complex.
Discussion: These findings partially support the hypothesis that older adults' concepts for emotions and emotional expressions are more complex than those of young adults, demonstrate the utility of incorporating less constrained experimental methods into the investigation of age-related differences in emotion perception, and are consistent with existing evidence of increased cognitive and emotional complexity in adulthood.


2017
Author(s): Hee Yeon Im, Reginald B. Adams, Cody A. Cushing, Jasmine Boshyan, Noreen Ward, et al.

Abstract: During face perception, we integrate facial expression and eye gaze to take advantage of their shared signals. For example, fear with averted gaze provides a congruent avoidance cue, signaling both the presence of a threat and its location, whereas fear with direct gaze sends an incongruent cue, leaving the threat's location ambiguous. It has been proposed that the processing of different combinations of threat cues is mediated by dual processing routes: reflexive processing via the magnocellular (M) pathway and reflective processing via the parvocellular (P) pathway. Because growing evidence has identified a variety of sex differences in emotion perception, here we also investigated how M and P processing of fear and eye gaze might be modulated by the observer's sex, focusing on the amygdala, a structure important to threat perception and affective appraisal. We adjusted the luminance and color of face stimuli to selectively engage M or P processing and asked observers to identify the emotion of the face. Female observers showed more accurate behavioral responses to faces with averted gaze and greater left amygdala reactivity to both fearful and neutral faces. Conversely, males showed greater right amygdala activation only for M-biased averted-gaze fear faces. In addition to these differences in functional reactivity, females had greater bilateral amygdala volumes, which positively correlated with behavioral accuracy for M-biased fear. Conversely, in males only the right amygdala volume was positively correlated with accuracy for M-biased fear faces. Our findings suggest that M and P processing of facial threat cues is modulated by functional and structural differences in the amygdalae associated with the observer's sex.


2008, Vol 25 (4), pp. 603-609
Author(s): Muriel Boucart, Jean-François Dinon, Pascal Despretz, Thomas Desmettre, Katrine Hladiuk, et al.

Abstract: Age-related macular degeneration (AMD) is a major cause of visual impairment in people older than 50 years in Western countries, affecting essential tasks such as reading and face recognition. Here we investigated the mechanisms underlying the deficit in recognition of facial expressions in an AMD population with low vision. Pictures of faces displaying different emotions, with the mouth open or closed, were centrally displayed for 300 ms. Participants with AMD and low acuity (mean 20/200) and normally sighted, age-matched controls performed one of two emotion tasks: detecting whether a face had an expression or not (expressive/non-expressive, EXNEX, task) or categorizing the facial emotion as happy, angry, or neutral (categorization of expression, CATEX, task). Previous research has shown that healthy observers rely mainly on high spatial frequencies in the EXNEX task, whereas performance on the CATEX task is based preferentially on low spatial frequencies. Because processing of high spatial frequencies is impaired in central vision in AMD, we expected and observed that AMD participants failed at deciding whether a face was expressive or not but categorized the emotion of the face (e.g., happy, angry, neutral) normally. Moreover, AMD participants mostly identified emotions using the lower part of the face (the mouth). Accuracy did not differ between the two tasks for normally sighted observers. The results indicate that AMD participants are able to identify facial emotion but must base their decision mainly on low spatial frequencies, as they lack the perception of finer details.


2014, Vol 23 (3), pp. 132-139
Author(s): Lauren Zubow, Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper presents the results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as "yes" or "no" responses. The paper also addresses the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of "yes" and "no" were presented to five listeners (the mother, the father, one unfamiliar clinician, and two familiar clinicians), who were asked to rate each vocalization as either "yes" or "no." The ratings were compared with the original identifications made by the child's mother during the face-to-face interaction from which the samples were drawn. The findings suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either "yes" or "no" without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure that the child's intended choice is accurately understood.


2010, Vol 24 (3), pp. 186-197
Author(s): Sandra J. E. Langeslag, Jan W. Van Strien

It has been suggested that emotion regulation improves with aging. Here, we investigated age differences in emotion regulation by studying modulation of the late positive potential (LPP) by emotion regulation instructions. The electroencephalogram of younger (18–26 years) and older (60–77 years) adults was recorded while they viewed neutral, unpleasant, and pleasant pictures and while they were instructed to increase or decrease the feelings that the emotional pictures elicited. The LPP was enhanced when participants were instructed to increase their emotions. No age differences were observed in this emotion regulation effect, suggesting that emotion regulation abilities are unaffected by aging. This contradicts studies that measured emotion regulation by self-report, yet accords with studies that measured emotion regulation by means of facial expressions or psychophysiological responses. More research is needed to resolve the apparent discrepancy between subjective self-report and objective psychophysiological measures.


2014, Vol 28 (3), pp. 148-161
Author(s): David Friedman, Ray Johnson

A cardinal feature of aging is a decline in episodic memory (EM). Nevertheless, there is evidence that some older adults may be able to "compensate" for failures in recollection-based processing by recruiting brain regions and cognitive processes not normally recruited by the young. We review the evidence suggesting that age-related declines in EM performance and in recollection-related brain activity (the left-parietal EM effect; LPEM) are due to altered processing at encoding. We describe results from our laboratory on differences in encoding- and retrieval-related activity between young and older adults. We then show that, relative to the young, older adults' brain activity at encoding is reduced in a 400–1,400-ms interval over a brain region believed to be crucial for successful semantic elaboration (left inferior prefrontal cortex, LIPFC; Johnson, Nessler, & Friedman, 2013; Nessler, Friedman, Johnson, & Bersick, 2007; Nessler, Johnson, Bersick, & Friedman, 2006). This reduced brain activity is associated with diminished subsequent recognition-memory performance and a diminished LPEM at retrieval. We provide causal support for this premise by demonstrating that disrupting encoding-related processes during this 400–1,400-ms interval in young adults produces the hallmarks of an age-related EM deficit: normal semantic retrieval at encoding but reduced subsequent episodic recognition accuracy, free recall, and LPEM. Finally, we show that the reduced LPEM in young adults is associated with "additional" brain activity over brain areas similar to those activated when older adults show deficient retrieval. Hence, rather than supporting the compensation hypothesis, these data are more consistent with the scaffolding hypothesis, in which the recruitment of additional cognitive processes is an adaptive response across the life span in the face of momentary increases in task demand due to poorly encoded episodic memories.

