The influence of spatial location on same-different judgments of facial identity and expression

2020
Author(s): Maurryce Starks, Anna Shafer-Skelton, Michela Paradiso, Aleix M. Martinez, Julie Golomb

The “spatial congruency bias” is a behavioral phenomenon in which two objects presented sequentially are more likely to be judged as the same object if they appear in the same location (Golomb et al., 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, two real-world faces were sequentially presented in variable screen locations, and subjects made same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge two faces as displaying the same expression if they were presented in the same location (compared with different locations), but only when the faces shared the same identity. In contrast, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between how facial identity and facial expression are bound to spatial location.

1989, Vol. 68 (2), pp. 443-452
Author(s): Patricia T. Riccelli, Carol E. Antila, J. Alexander Dale, Herbert L. Klions

Two studies concerned the relation between facial expression, cognitive induction of mood, and perception of mood in women undergraduates. In Exp. 1, 20 subjects were randomly assigned to a group instructed to make exaggerated facial expressions (Demand Group) and 20 subjects to a group given no such instruction (Nondemand Group). All subjects completed a modified Velten (1968) elation- and depression-induction sequence. Ratings of depression on the Multiple Affect Adjective Checklist increased during the depression condition and decreased during the elation condition. Subjects in the Demand Group made more facial expressions than those in the Nondemand Group, as indicated by electromyogram measures of the zygomatic and corrugator muscles and by corresponding action-unit measures from visual scoring using the Facial Action Scoring System. Subjects in the Demand Group also rated their depression as more severe during the depression slides than did the other group; no such effect was noted during the elation condition. In Exp. 2, 16 women were randomly assigned to a group instructed to make facial expressions contradictory to those expected on the depression and elation tasks (Contradictory Expression Group). Another 16 women were randomly assigned to a group given no instructions about facial expressions (Nondemand Group). All subjects completed the depression- and elation-induction sequence described in Exp. 1. No differences were found between groups on the depression ratings (MAACL) for either the depression-induction or the elation-induction, but both groups rated depression higher after the depression condition and lower after the elation condition. Electromyographic and facial-action scores verified that subjects in the Contradictory Expression Group were making the requested contradictory facial expressions during the mood-induction sequences.
It was concluded that the primary influence on emotion came from the cognitive mood-induction sequences. Facial expressions appeared to modify emotion only in the case of depression, which was exacerbated by frowning; a contradictory facial expression did not affect the rating of an emotion.


2006
Author(s): Lisa M. Durrance, Benjamin A. Clegg, Edward L. Delosh

2012, Vol. 25 (1), pp. 105-110
Author(s): Yohko Maki, Hiroshi Yoshida, Tomoharu Yamaguchi, Haruyasu Yamaguchi

ABSTRACT
Background: Positivity recognition bias has been reported for facial expressions, as well as for memory and visual stimuli, in aged individuals, whereas emotional face recognition in Alzheimer disease (AD) patients remains controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in the verbal processing needed to label emotions. We therefore examined whether recognition of positive facial expressions was preserved in AD patients, adapting a new method that eliminated the influence of these confounding factors.
Methods: Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, participants were required to match stimulus and answer images, avoiding the use of verbal labels.
Results: In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than that to the other five expressions.
Conclusions: In AD patients, recognition of happiness was relatively preserved; happiness was the most sensitively recognized expression and was preserved against the influences of age and disease.


Author(s): Haitao Tang, Mari Korkea-aho, Jose Costa-Requena, Jussi Ruutu

2019, Vol. 30 (10), pp. 1497-1509
Author(s): Surya Gayet, Marius V. Peelen

When searching for relevant objects in our environment (say, an apple), we create a memory template (a red sphere), which causes our visual system to favor template-matching visual input (applelike objects) at the expense of template-mismatching visual input (e.g., leaves). Although this principle seems straightforward in a lab setting, it poses a problem in naturalistic viewing: Two objects that have the same size on the retina will differ in real-world size if one is nearby and the other is far away. Using the Ponzo illusion to manipulate perceived size while keeping retinal size constant, we demonstrated across 71 participants that visual objects attract attention when their perceived size matches a memory template, compared with mismatching objects that have the same size on the retina. This shows that memory templates affect visual selection after object representations are modulated by scene context, thus providing a working mechanism for template-based search in naturalistic vision.
