Face Classification in Schizophrenia: Evidence for a Sensitivity to Distinctiveness

Perception ◽  
10.1068/p6291 ◽  
2009 ◽  
Vol 38 (5) ◽  
pp. 702-707
Author(s):  
Robert A Johnston ◽  
Eleanor Tomlinson ◽  
Chris Jones ◽  
Alan Weaden

The face-processing skills of people with schizophrenia were compared with those of a group of unimpaired individuals. Participants were asked to make speeded face-classification decisions to faces previously rated as being typical or distinctive. The schizophrenic group responded more slowly than the unimpaired group; however, both groups demonstrated the customary sensitivity to the distinctiveness of the face stimuli: classification latencies were shorter for typical faces than for distinctive faces. The implications of this finding for the schizophrenic group are discussed with reference to accounts of face-processing deficits attributed to these individuals.

2007 ◽  
Vol 97 (2) ◽  
pp. 1671-1683 ◽  
Author(s):  
K. M. Gothard ◽  
F. P. Battaglia ◽  
C. A. Erickson ◽  
K. M. Spitler ◽  
D. G. Amaral

The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contributions of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded to both identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed purely identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate relative to the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases in firing rate, whereas responses to threatening faces were strongly associated with increases in firing rate. Thus, global activation in the amygdala might be greater for threatening faces than for neutral or appeasing faces.
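As a concrete way to read the identity/expression decomposition: one common analysis for this kind of question (not necessarily the one used by the authors) is a two-way ANOVA on trial-by-trial firing rates, classifying a neuron by which factors reach significance. A minimal Python sketch with hypothetical data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical trial table for one neuron: firing rate per trial,
# plus the identity and expression of the monkey face shown.
trials = pd.DataFrame({
    "rate":       [12.1, 8.3, 15.0, 9.7, 11.2, 7.9, 14.1, 10.3],
    "identity":   ["A", "A", "B", "B", "A", "A", "B", "B"],
    "expression": ["threat", "appease"] * 4,
})

# Two-way ANOVA: main effects of identity and expression plus interaction.
fit = ols("rate ~ C(identity) * C(expression)", data=trials).fit()
anova = sm.stats.anova_lm(fit, typ=2)

# Classify the neuron by which effects reach significance:
# both factors -> joint coding (the 64% group); a single factor ->
# purely identity-selective or expression-selective.
significant = anova["PR(>F)"] < 0.05
print(anova, "\n", significant)
```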


2020 ◽  
Author(s):  
Gillian M Clark ◽  
Claire McNeel ◽  
Felicity J Bigelow ◽  
Peter Gregory Enticott

The investigation of emotional face processing has largely used faces devoid of context and has not accounted for within-perceiver differences in empathy. The importance of context in face perception has become apparent in recent years. This study examined the interaction of the contextual factors of facial expression, knowledge of a person's character, and within-perceiver empathy levels on face-processing event-related potentials (ERPs). Forty-two adult participants learned background information about six individuals' characters. Three character types were described, in which the individual was depicted as deliberately causing harm to others, accidentally causing harm to others, or undertaking neutral actions. Subsequently, EEG was recorded while participants viewed the characters' faces displaying neutral or emotional expressions. Participants' empathy was assessed using the Empathy Quotient survey. Results showed a significant interaction of character type and empathy on the early posterior negativity (EPN) ERP component. These results suggest that participants with either low or high empathy paid more attention to the face stimuli and distinguished more strongly between the different characters, whereas those in the middle range of empathy tended to produce a smaller EPN with less distinction between character types. Findings highlight the importance of trait empathy in accounting for how faces in context are perceived.


2016 ◽  
Author(s):  
Haruo Hosoya ◽  
Aapo Hyvärinen

Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
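For readers who want the gist of the architecture in code, the following minimal Python sketch (using scikit-learn dictionary learning, and skipping the Gabor front end and the full Bayesian inference of the original model) illustrates the mixture-of-sparse-coding idea; the function names and the soft-assignment rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_submodel(X, n_atoms=64, alpha=1.0):
    # One sparse coding submodel: a learned dictionary whose sparse
    # codes model one category (faces or objects).
    model = MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=alpha,
        transform_algorithm="lasso_lars", transform_alpha=alpha)
    return model.fit(X)  # X: (n_images, n_features), e.g. Gabor outputs

def energy(X, model, alpha=1.0):
    # Negative log probability up to constants: Gaussian reconstruction
    # error plus a Laplacian sparsity penalty on the codes.
    codes = model.transform(X)
    recon = codes @ model.components_
    return 0.5 * ((X - recon) ** 2).sum(axis=1) + alpha * np.abs(codes).sum(axis=1)

def responsibilities(X, face_model, object_model):
    # Soft assignment of each image to one submodel. Because the two
    # submodels compete to explain the same input, a face unit's
    # effective activation is suppressed whenever the object submodel
    # accounts for the image better -- a crude stand-in for the
    # Bayesian explaining-away effect described in the abstract.
    e = np.stack([energy(X, face_model), energy(X, object_model)], axis=1)
    w = np.exp(-(e - e.min(axis=1, keepdims=True)))
    return w / w.sum(axis=1, keepdims=True)
```

Weighting each face-submodel code by `responsibilities(...)[:, 0]` would then make a unit's response depend on the category of the whole input, in the spirit of the explaining-away effect the abstract describes.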


Author(s):  
Galit Yovel

As social primates, we perform one of our most important cognitive tasks dozens of times a day: looking at a face and extracting the person's identity. During the last decade, the neural basis of face processing has been extensively investigated in humans with event-related potentials (ERPs) and functional MRI (fMRI). These two methods provide complementary information about the temporal and spatial aspects of the neural response: ERPs allow high temporal resolution of milliseconds but low spatial resolution of the neural generator, whereas fMRI displays a slow hemodynamic response but better spatial localization of the activated regions. Despite the extensive fMRI and ERP research on faces, only a few studies have assessed the relationship between the two methods, and no study to date has collected simultaneous ERP and fMRI responses to face stimuli. In the current paper we will assess the spatial and temporal aspects of the neural response to faces by simultaneously collecting fMRI and ERP responses to face stimuli. Our goals are twofold: 1) ERP and fMRI show a robust selective response to faces; in particular, two well-established face-specific phenomena, the right-hemisphere (RH) superiority and the inversion effect, are robustly found with both methods. Despite the extensive research on these effects with ERP and fMRI, it is still unknown to what extent their spatial (fMRI) and temporal (ERP) aspects are associated. In Study 1 we will employ an individual-differences approach to assess the relationship between these ERP and fMRI face-specific responses. 2) Face processing involves several stages, from structural encoding of the face image through identity processing to storage for later retrieval. The face representation undergoes several transformations that take place at different time points and in different brain regions before the final percept is generated. By simultaneously recording ERP and fMRI we hope to gain a more comprehensive understanding of the time course over which different brain areas participate in generating the face representation.


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration, and it remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has frequently been adopted as an index of the underlying mechanisms of face processing, it does not emerge robustly with every configural alteration but depends on the ratio of configural alteration.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly in the face, and recent studies have shown the importance of the mouth and the eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used facial expressions in profile view relied on a between-subjects design or on children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile views using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female; Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded.
Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
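As a concrete illustration of the scoring step, here is a minimal Python sketch (hypothetical column names and data, not the authors' analysis code) computing per-emotion accuracy separately for frontal and profile views:

```python
import pandas as pd

# Hypothetical trial-level data: one row per stimulus presentation.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "emotion":     ["fear", "fear", "anger", "anger"] * 2,
    "view":        ["frontal", "profile"] * 4,
    "correct":     [1, 0, 1, 1, 1, 1, 0, 1],
})

# Mean proportion correct per participant, emotion, and view:
# the frontal and profile recognition-accuracy scores.
scores = (trials
          .groupby(["participant", "emotion", "view"])["correct"]
          .mean()
          .unstack("view"))
print(scores)
```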


2019 ◽  
Vol 35 (05) ◽  
pp. 525-533
Author(s):  
Evrim Gülbetekin ◽  
Seda Bayraktar ◽  
Özlenen Özkan ◽  
Hilmi Uysal ◽  
Ömer Özkan

The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery for a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether the participants recognized them. In Experiment 3, they showed original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed old, new, and implicit objects and asked whether the participants recognized them. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. The authors therefore concluded that the structure of the face might affect face processing.


i-Perception ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 204166952110563
Author(s):  
Ronja Mueller ◽  
Sandra Utz ◽  
Claus-Christian Carbon ◽  
Tilo Strobach

Recognizing familiar faces requires a comparison of the incoming perceptual information with mental face representations stored in memory. Mounting evidence indicates that these representations adapt quickly to recently perceived facial changes. This becomes apparent in face adaptation studies, where exposure to a strongly manipulated face alters the perception of subsequent face stimuli: original, non-manipulated face images then appear to be manipulated, while images similar to the adaptor are perceived as "normal." The face adaptation paradigm therefore serves as a good tool for investigating the information stored in facial memory. So far, most face adaptation studies have focused on configural (second-order relational) face information, largely neglecting non-configural face information (i.e., information that does not affect spatial relations within the face), such as color, although several (non-adaptation) studies have demonstrated the importance of color information in face perception and identification. The present study therefore focuses on adaptation effects for saturation information and compares the results with previous findings on brightness. The study reveals differences in the effect pattern and robustness, indicating that adaptation effects vary considerably even within the same class of non-configural face information.

