Retinotopy of Facial Expression Adaptation

2014 ◽  
Vol 27 (2) ◽  
pp. 127-137 ◽  
Author(s):  
Kazumichi Matsumiya

The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies have revealed that the FAE is not constant across test locations, suggesting that the FAE is also retinotopic. However, it remains unclear whether the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although the dependence of the FAE on adaptation-test distance is similar for facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when the test face is presented in the opposite hemifield. On the basis of these results, I discuss the adaptation mechanisms underlying facial expression and facial identity processing that give rise to the retinotopy of the FAE.


2021 ◽  
Author(s):  
Lauren Clare Bell

Individuals with developmental prosopagnosia experience lifelong deficits in recognising facial identity, but whether their ability to process facial expression is also impaired is unclear. Addressing this issue is key for understanding the core deficit in developmental prosopagnosia, and for advancing knowledge about the mechanisms and development of normal face processing. In this thesis, I report two online studies of facial expression processing with large samples of prosopagnosics. In Study 1, I compared facial expression and facial identity perception in 124 prosopagnosics and 133 controls, using three perceptual tasks: simultaneous matching, sequential matching, and sorting. I also measured inversion effects to examine whether prosopagnosics rely on typical face mechanisms. Prosopagnosics showed subtle deficits with facial expression, but they performed worse with facial identity. Prosopagnosics also showed reduced inversion effects for facial identity but normal inversion effects for facial expression, suggesting they use atypical mechanisms for facial identity but normal mechanisms for facial expression. In Study 2, I extended the findings of Study 1 by assessing facial expression recognition in 78 prosopagnosics and 138 controls, using four labelling tasks that varied in whether the facial expressions were basic (e.g., happy) or complex (e.g., elated), and whether they were displayed via static (i.e., images) or dynamic (i.e., video clips) stimuli. Prosopagnosics showed subtle deficits with basic expressions but performed normally with complex expressions. Further, prosopagnosics did not show reduced inversion effects for either type of expression, suggesting they use similar recognition mechanisms to controls. Critically, the subtle expression deficits that prosopagnosics showed in both studies can be accounted for by autism traits, suggesting that expression deficits are not a feature of prosopagnosia per se. I also provide estimates of the prevalence of deficits in facial expression perception (7.70%) and recognition (2.56%-5.13%) in prosopagnosia, both of which suggest that facial expression processing is normal in the majority of prosopagnosics. Overall, my thesis demonstrates that facial expression processing is not impaired in developmental prosopagnosia, and suggests that facial expression and facial identity processing rely on separate mechanisms that dissociate in development.
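As a hedged illustration of how such prevalence estimates are typically derived (the thesis's exact criterion is not reproduced here), the sketch below flags prosopagnosics scoring more than two standard deviations below the control mean and reports the resulting proportion. The sample sizes match the abstract, but the simulated scores and the cutoff rule are assumptions.

```python
# Illustrative prevalence computation (assumed deficit criterion; the
# thesis's exact cutoff rule is not reproduced here).
import numpy as np

def deficit_prevalence(patient_scores, control_scores, n_sd=2.0):
    """Fraction of patients scoring more than n_sd SDs below the control mean."""
    cutoff = np.mean(control_scores) - n_sd * np.std(control_scores, ddof=1)
    return np.mean(np.asarray(patient_scores) < cutoff)

# Example with simulated data: 78 prosopagnosics, 138 controls (Study 2 sizes).
rng = np.random.default_rng(1)
controls = rng.normal(75, 10, size=138)   # simulated control accuracy scores
patients = rng.normal(72, 10, size=78)    # simulated prosopagnosic scores
print(f"{100 * deficit_prevalence(patients, controls):.2f}% show a deficit")
```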




2018 ◽  
Author(s):  
Adrienne Wood ◽  
Jared Martin ◽  
Martha W. Alibali ◽  
Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent compared to congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as with sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, which disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.



2018 ◽  
Vol 18 (10) ◽  
pp. 918
Author(s):  
Lauren Bell ◽  
Tirta Susilo


2019 ◽  
Vol 9 (5) ◽  
pp. 116 ◽  
Author(s):  
Luis Aguado ◽  
Karisa Parkington ◽  
Teresa Dieguez-Risco ◽  
José Hinojosa ◽  
Roxane Itier

Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, either discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent with the context (congruency task). Behavioral and electrophysiological (event-related potential, ERP) results showed that processing of facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250 and 450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components, which have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context in facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions.
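As a minimal sketch of the kind of ERP measure described above (not the authors' pipeline), the code below averages epoched amplitudes over an a-priori 250-450 ms window at frontal channels, separately for the two tasks. The sampling rate, array shapes, and simulated data are all assumptions.

```python
# Sketch: mean ERP amplitude per task in an a-priori time window
# (250-450 ms, frontal channels). Data shapes and values are simulated.
import numpy as np

sfreq = 500.0                                  # Hz (assumed sampling rate)
times = np.arange(-0.2, 0.8, 1.0 / sfreq)      # epoch: -200 to 800 ms

rng = np.random.default_rng(0)
# trials x frontal channels x samples, one array per task (simulated data)
epochs_emotion_task = rng.normal(size=(100, 4, times.size))
epochs_congruency_task = rng.normal(size=(100, 4, times.size))

window = (times >= 0.250) & (times <= 0.450)   # a-priori analysis window

def mean_amplitude(epochs: np.ndarray) -> float:
    """Mean amplitude over trials, channels, and the 250-450 ms window."""
    return float(epochs[:, :, window].mean())

print("emotion task:    ", mean_amplitude(epochs_emotion_task))
print("congruency task: ", mean_amplitude(epochs_congruency_task))
```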



2018 ◽  
Vol 29 (7) ◽  
pp. 3209-3219 ◽  
Author(s):  
Yuanning Li ◽  
R Mark Richardson ◽  
Avniel Singh Ghuman

Though the fusiform is well established as a key node in the face perception network, its role in facial expression processing remains unclear, due to competing models and discrepant findings. To help resolve this debate, we recorded from 17 subjects with intracranial electrodes implanted in face-sensitive patches of the fusiform. Multivariate classification analysis showed that facial expression information is represented in fusiform activity and in the same regions that represent identity, though with a smaller effect size. Examination of the spatiotemporal dynamics revealed a functional distinction between posterior fusiform and midfusiform expression coding, with posterior fusiform showing an early peak of facial expression sensitivity at around 180 ms after subjects viewed a face and midfusiform showing a later and extended peak between 230 and 460 ms. These results support the hypothesis that the fusiform plays a role in facial expression perception and highlight a qualitative functional distinction between processing in posterior fusiform and midfusiform, with each contributing to temporally segregated stages of expression perception.
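To make "multivariate classification analysis" concrete, here is a minimal time-resolved decoding sketch (not the authors' code): a classifier is cross-validated at each time point to test when expression can be decoded from multichannel activity, which is how an early posterior peak (~180 ms) versus a later midfusiform peak (230-460 ms) would show up. The data shapes, simulated recordings, and use of scikit-learn are assumptions.

```python
# Minimal sketch of time-resolved expression decoding from intracranial
# recordings (assumed data shapes; not the authors' pipeline).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 16, 100           # hypothetical sizes
X = rng.normal(size=(n_trials, n_channels, n_times))   # trials x channels x time
y = rng.integers(0, 2, size=n_trials)                  # expression label per trial

# Cross-validate a classifier independently at each time point; above-chance
# accuracy at time t means expression information is present at that latency.
clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

peak = int(accuracy.argmax())
print(f"peak decoding accuracy {accuracy[peak]:.2f} at time index {peak}")
```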



2010 ◽  
Vol 7 (9) ◽  
pp. 939-939
Author(s):  
L. Garrido ◽  
B. Duchaine



2018 ◽  
Vol 9 (2) ◽  
pp. 31-38
Author(s):  
Fransisca Adis ◽  
Yohanes Merci Widiastomo

Facial expression is one of the aspects that can convey story and a character's emotion in 3D animation. To achieve this, the character's facial setup must be planned from the very beginning of production. At an early stage, the character designer needs to think about expressions once the character design is done, the rigger needs to create a flexible rig to support the design, and the animator can then get a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify the emotions a person displays. This paper explains how the writer uses FACS to help design the facial expressions of 3D characters. FACS is used to determine the basic characteristics of the face's key shapes when showing emotions, compared against actual face references. Keywords: animation, facial expression, non-dialog
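As a rough sketch of how FACS can feed a facial rig (not taken from the paper), the code below maps action units (AUs) for two basic-emotion prototypes onto blend-shape weights. The AU groupings follow Ekman-style prototypes, while the blend-shape names and the 0-1 weights are hypothetical and rig-specific.

```python
# Minimal sketch: mapping FACS action units (AUs) to blend-shape weights.
# The emotion -> AU groupings follow Ekman-style prototypes; the blend-shape
# names and the 0-1 weights are hypothetical and rig-specific.

# FACS action units used below (standard AU numbers and names):
AU_NAMES = {
    1: "inner brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

# Prototype AU sets for two basic emotions (simplified; weights assumed).
EMOTION_AUS = {
    "happiness": {6: 1.0, 12: 1.0},
    "sadness": {1: 0.8, 4: 0.6, 15: 1.0},
}

# Hypothetical mapping from AUs to this rig's blend shapes.
AU_TO_BLENDSHAPE = {
    1: "browInnerUp",
    4: "browDown",
    6: "cheekSquint",
    12: "mouthSmile",
    15: "mouthFrown",
}

def blendshape_weights(emotion: str) -> dict[str, float]:
    """Return blend-shape weights (0-1) for a basic-emotion prototype."""
    return {AU_TO_BLENDSHAPE[au]: w for au, w in EMOTION_AUS[emotion].items()}

if __name__ == "__main__":
    print(blendshape_weights("happiness"))
    # {'cheekSquint': 1.0, 'mouthSmile': 1.0}
```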



2015 ◽  
Vol 47 (1) ◽  
pp. 50
Author(s):  
Haibin WANG ◽  
Jiamei LU ◽  
Benxian YAO ◽  
Qingsong SANG ◽  
Ning CHEN ◽  
...  

