Does Prosopagnosia Take the Eyes Out of Face Representations? Evidence for a Defect in Representing Diagnostic Facial Information following Brain Damage

2005 ◽  
Vol 17 (10) ◽  
pp. 1652-1666 ◽  
Author(s):  
Roberto Caldara ◽  
Philippe Schyns ◽  
Eugène Mayer ◽  
Marie L. Smith ◽  
Frédéric Gosselin ◽  
...  

One of the most impressive disorders following brain damage to the ventral occipitotemporal cortex is prosopagnosia, or the inability to recognize faces. Although acquired prosopagnosia with preserved general visual and memory functions is rare, several cases have been described in the neuropsychological literature and studied at the functional and neural level over recent decades. Here we tested a brain-damaged patient (PS) presenting a deficit restricted to the category of faces to clarify the nature of the missing and preserved components of the face processing system when it is selectively damaged. After learning to identify 10 neutral and happy faces through extensive training, we investigated patient PS's recognition of faces using Bubbles, a response classification technique that sampled facial information across the faces in different bandwidths of spatial frequencies [Gosselin, F., & Schyns, P. G., Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261-2271, 2001]. Although PS gradually used less information (i.e., fewer bubbles) to identify faces over testing, the total information required was much larger than for normal controls and decreased less steeply with practice. Most importantly, the facial information used to identify individual faces differed between PS and controls. Specifically, in marked contrast to controls, PS did not use the optimal eye information to identify familiar faces, but instead used the lower part of the face, including the mouth and the external contours, as normal observers typically do when processing unfamiliar faces. Together, the findings reported here suggest that damage to the face processing system is characterized by an inability to use the information that is optimal to judge identity, focusing instead on suboptimal information.
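The core idea of the Bubbles technique described above — revealing an image only through randomly placed Gaussian apertures, so that correct responses can be correlated with the facial regions that happened to be visible — can be sketched as follows. This is a minimal illustration only: function names and parameters are assumptions, and the published method samples each spatial-frequency band with its own bubble size.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Revelation mask: a sum of Gaussian apertures at random centres,
    clipped to [0, 1]. Illustrative stand-in for one spatial-frequency band."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def reveal(face, mask, background=0.5):
    """Blend the face with a neutral grey background through the mask."""
    return mask * face + (1.0 - mask) * background

rng = np.random.default_rng(0)
face = rng.random((64, 64))   # stand-in for a normalised face image
mask = bubbles_mask(face.shape, n_bubbles=10, sigma=5.0, rng=rng)
stim = reveal(face, mask)     # stimulus shown on one trial
```

Accumulating the masks from correct trials (and subtracting those from incorrect ones) yields the "classification image" of diagnostic regions that the study compares between PS and controls.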

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 67-67 ◽  
Author(s):  
H Hill ◽  
R Watt

The first task of any face processing system is detection of the face. We studied how the human visual system achieves face detection using a 2AFC task in which subjects were required to detect a face in the image of a natural scene. Luminance noise was added to the stimuli and performance was measured as a function of the orientation and orientation bandwidth of the noise. Sensitivity levels and the effects of orientation bandwidth were similar for horizontally and vertically oriented noise. Performance was reduced for the smallest orientation bandwidth (5.6°) noise, but sensitivity did not decrease further with increasing bandwidth until a point between 45° and 90°. The results suggest that important information may be oriented close to the vertical and horizontal. To test whether the results were specific to the task of face detection, the same noise was added to the images in a man-made/natural decision task. Equivalent levels of noise were found to be more disruptive and the effect of orientation bandwidth was different. The results are discussed in terms of models of face processing making use of oriented filters (e.g. Watt and Dakin, 1993, Perception 22 Supplement, 12) and local energy models of feature detection (Morrone and Burr, 1988, Proceedings of the Royal Society of London B, 235, 221–245).
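Orientation-band-limited noise of the kind used above can be generated by filtering white noise in the Fourier domain, keeping only components within a wedge around the target orientation. A sketch under assumed parameter names, not the authors' stimulus code:

```python
import numpy as np

def oriented_noise(size, orientation_deg, bandwidth_deg, rng):
    """White luminance noise whose Fourier energy lies within
    +/- bandwidth/2 of the given orientation (illustrative sketch)."""
    noise = rng.standard_normal((size, size))
    f = np.fft.fftshift(np.fft.fft2(noise))
    fy, fx = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0
    diff = np.abs(angle - orientation_deg % 180.0)
    diff = np.minimum(diff, 180.0 - diff)        # wrap-around angular distance
    keep = diff <= bandwidth_deg / 2.0           # orientation wedge in freq space
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))

rng = np.random.default_rng(1)
# narrowest bandwidth reported in the study (5.6 degrees)
patch = oriented_noise(64, orientation_deg=0.0, bandwidth_deg=5.6, rng=rng)
```

Widening `bandwidth_deg` toward 90° admits progressively more orientations, matching the manipulation whose effect on detection sensitivity the study measured.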


1996 ◽  
Vol 2 (3) ◽  
pp. 240-248 ◽  
Author(s):  
Michael R. Polster ◽  
Steven Z. Rapcsak

We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given “shallow” encoding instructions to focus on facial features. By contrast, he performs relatively well with “deep” encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations. (JINS, 1996, 2, 240–248.)


1998 ◽  
Vol 06 (03) ◽  
pp. 281-298 ◽  
Author(s):  
Terry Huntsberger ◽  
John Rose ◽  
Shashidhar Ramaka

The human face is one of the most important patterns our visual system receives. It establishes a person's identity and also plays a significant role in everyday communication. Humans can recognize familiar faces under varying lighting conditions, different scales, and even after the face has changed due to aging, hair style, glasses, or facial hair. Our ease at recognizing faces is a strong motivation for the investigation of computational models of face processing. This paper presents a newly developed face processing system called Fuzzy-Face that combines wavelet pre-processing of input with a fuzzy self-organizing feature map algorithm. The wavelet-derived face space is partitioned into fuzzy sets which are characterized by face exemplars and membership values to those exemplars. This system learns faces using relatively few training epochs, has total recall for faces it has been shown, generalizes to face images that are acquired under different lighting conditions, and has rudimentary gender discrimination capabilities. We also include the results of some experimental studies.
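The fuzzy partitioning described above — assigning each point in the wavelet-derived face space graded membership to face exemplars — can be illustrated with the standard fuzzy c-means membership rule. This is a stand-in for the paper's fuzzy self-organizing feature map, not its actual algorithm; all names and the fuzzifier value are assumptions.

```python
import numpy as np

def fuzzy_memberships(x, exemplars, m=2.0):
    """Graded membership of feature vector x to each exemplar, via the
    fuzzy c-means inverse-distance rule with fuzzifier m (sketch only)."""
    d = np.linalg.norm(exemplars - x, axis=1)
    if np.any(d == 0):                     # exact exemplar match -> crisp
        return (d == 0).astype(float) / np.sum(d == 0)
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()                 # memberships sum to 1

# three hypothetical face exemplars in a 2-D slice of face space
exemplars = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
u = fuzzy_memberships(np.array([0.1, 0.1]), exemplars)
```

A probe close to an exemplar receives membership near 1 for that exemplar (total recall for trained faces), while a novel face receives distributed memberships — the basis for generalization across lighting conditions.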


2021 ◽  
Author(s):  
Zuzanna Laudańska ◽  
Aleksandra Dopierała ◽  
Magdalena Szmytke ◽  
Dianna Ilyka ◽  
Anna Malinowska-Korczak ◽  
...  

Configural processing is a specialised perceptual mechanism that allows adult humans to quickly process facial information. It emerges before the first birthday and can be disrupted by upside-down presentation of the face (inversion). To date, little is known about the relationship of configural face processing to the emerging knowledge of audiovisual (AV) speech in infancy. Using eye-tracking, we measured attention to the speaking mouth in upright and inverted faces that were either congruent or incongruent with the speech sound. Face inversion affected looking at AV speech only in older infants (9- to 11- and 12- to 14-month-olds). The youngest group of infants (5- to 7-month-olds) did not show any differences in looking durations between upright and inverted faces, while in both older groups face inversion led to reduced looking at the articulating mouth. We also observed a stronger interest in the eyes in the youngest infants, followed by an increase in looking time to the mouth in both older groups. Our findings suggest that configural face processing is involved in AV speech processing already in infancy, indicating early integration of face and speech processing mechanisms in cognitive development.
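The looking-time measure used in studies like this one reduces to the proportion of gaze samples falling inside an area of interest (AOI) such as the mouth region. A minimal sketch, assuming normalised screen coordinates and a rectangular AOI (both assumptions, not the study's actual analysis pipeline):

```python
import numpy as np

def dwell_proportion(gaze_xy, aoi):
    """Proportion of gaze samples inside a rectangular AOI (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
              (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    return inside.mean()

# four hypothetical gaze samples in normalised (x, y) coordinates
gaze = np.array([[0.5, 0.8], [0.5, 0.2], [0.4, 0.75], [0.9, 0.9]])
mouth_aoi = (0.3, 0.7, 0.7, 1.0)       # assumed lower-face region
p = dwell_proportion(gaze, mouth_aoi)  # fraction of samples on the mouth
```

Comparing this proportion between upright and inverted faces, per age group, yields the inversion effect on mouth-looking that the study reports.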


1990 ◽  
Vol 3 (3) ◽  
pp. 153-168 ◽  
Author(s):  
Andrew W. Young ◽  
Hadyn D. Ellis ◽  
T. Krystyna Szulecka ◽  
Karel W. De Pauw

We report detailed investigations of the face processing abilities of four patients who had shown symptoms involving delusional misidentification. One (GC) was diagnosed as a Frégoli case, and the other three (SL, GS, and JS) presented symptoms of intermetamorphosis. The face processing tasks examined their ability to recognize emotional facial expressions, identify familiar faces, match photographs of unfamiliar faces, and remember photographs of faces of unfamiliar people. The Frégoli patient (GC) was impaired at identifying familiar faces, and severely impaired at matching photographs of unfamiliar people wearing different disguises to undisguised views. Two of the intermetamorphosis patients (SL and GS) also showed impaired face processing abilities, but the third (JS) performed all tests at a normal level. These findings constrain conceptions of the relation between delusional misidentification, face processing impairment, and brain injury.


2019 ◽  
Vol 40 (9) ◽  
pp. 2581-2595 ◽  
Author(s):  
Daniel B. Elbich ◽  
Peter C.M. Molenaar ◽  
K. Suzanne Scherf

1998 ◽  
Vol 51 (3) ◽  
pp. 449-473 ◽  
Author(s):  
Helmut Leder ◽  
Vicki Bruce

Distinctiveness contributes strongly to the recognition and rejection of faces in memory tasks. In four experiments we examine the role played by local and relational information in the distinctiveness of upright and inverted faces. In all experiments subjects saw one of three versions of a face: original faces, which had been rated as average in distinctiveness in a previous study (Hancock, Burton, & Bruce, 1996); a more distinctive version in which local features had been changed (D-local); and a more distinctive version in which relational features had been changed (D-rel). An increase in distinctiveness was found for D-local and D-rel faces in Experiments 1 (complete faces) and 3 and 4 (face internals only) when the faces had to be rated in upright presentation, but the distinctiveness of the D-rel faces was reduced much more than that of the D-local versions when the ratings were given to the faces presented upside-down (Experiments 1 and 3). Recognition performance showed a similar pattern: presented upright, both D-local and D-rel versions revealed higher performance than the originals, but in upside-down presentation the D-local versions showed a much stronger distinctiveness advantage. When only internal features of faces were used (Experiments 3 and 4), the D-rel faces lost their advantage over the original versions in inverted presentation. The results suggest that at least two dimensions of facial information contribute to a face's apparent distinctiveness, but that these sources of information are differentially affected by turning the face upside-down. These findings are in accordance with a face processing model in which face inversion effects occur because a specific type of information processing is disrupted, rather than because of a general disruption of performance.


2021 ◽  
Vol 11 (7) ◽  
pp. 942
Author(s):  
Antonio Maffei ◽  
Jennifer Goertzen ◽  
Fern Jaspers-Fayer ◽  
Killian Kleffner ◽  
Paola Sessa ◽  
...  

Behavioral and electrophysiological correlates of the influence of task demands on the processing of happy, sad, and fearful expressions were investigated in a within-subjects study that compared a perceptual distraction condition with task-irrelevant faces (i.e., a covert emotion task) to an emotion-relevant categorization condition (i.e., an overt emotion task). A state-of-the-art non-parametric mass univariate analysis method was used to address the limitations of previous studies. Behaviorally, participants responded faster to overtly categorized happy faces and were slower and less accurate when categorizing sad and fearful faces; there were no behavioral differences in the covert task. Event-related potential (ERP) responses to the emotional expressions included the N170 (140–180 ms), which was enhanced by emotion irrespective of task, with happy and sad expressions eliciting greater amplitudes than neutral expressions. EPN (200–400 ms) amplitude was modulated by task, with greater voltages in the overt condition, and by emotion; however, there was no interaction of emotion and task. ERP activity was modulated by emotion as a function of task only at a late processing stage, which included the LPP (500–800 ms), with fearful and sad faces showing greater amplitude enhancements than happy faces. This study reveals that affective content does not necessarily require attention in the early stages of face processing, supporting recent evidence that the core and extended parts of the face processing system act in parallel, rather than serially. The role of voluntary attention starts at an intermediate stage, and fully modulates the response to emotional content in the final stage of processing.
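Component amplitudes like the N170, EPN, and LPP above are typically quantified as the mean voltage per trial within a fixed time window of the epoched EEG. A minimal sketch of that window-averaging step, assuming a trials-by-samples array and an arbitrary sampling rate (not the study's mass-univariate pipeline):

```python
import numpy as np

def window_mean(epochs, times, t_start, t_end):
    """Mean amplitude per trial inside a component window,
    e.g. the N170 interval of 0.140-0.180 s."""
    sel = (times >= t_start) & (times <= t_end)
    return epochs[:, sel].mean(axis=1)

fs = 500.0                                # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.8, 1.0 / fs)    # epoch from -100 to 800 ms
rng = np.random.default_rng(2)
epochs = rng.standard_normal((20, times.size))  # 20 simulated trials
n170 = window_mean(epochs, times, 0.140, 0.180)
```

Comparing such window means across emotion and task conditions (here, with cluster-based non-parametric statistics) gives the amplitude effects the abstract reports.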


2003 ◽  
Vol 15 (3) ◽  
pp. 462-474 ◽  
Author(s):  
Galit Yovel ◽  
Jerre Levy ◽  
Marcia Grabowecky ◽  
Ken A. Paller

Studies in healthy individuals and split-brain patients have shown that the representation of facial information from the left visual field (LVF) is better than the representation of facial information from the right visual field (RVF). To investigate the neurophysiological basis of this LVF superiority in face perception, we recorded event-related potentials (ERPs) to centrally presented face stimuli in which relevant facial information is present bilaterally (B faces) or only in the left (L faces) or the right (R faces) visual field. Behavioral findings showed best performance for B faces and, in line with the LVF superiority, better performance for L than R faces. Evoked potentials to B, L, and R faces at 100 to 150 msec poststimulus showed no evidence of asymmetric transfer of information between the hemispheres at early stages of visual processing, suggesting that this factor is not responsible for the LVF superiority. Neural correlates of the LVF superiority, however, were manifested in a shorter latency of the face-specific N170 component to L than R faces and in a larger amplitude to L than R faces at 220–280 and 400–600 msec over both hemispheres. These ERP amplitude differences between L and R faces covaried across subjects with the extent to which the face-specific N170 component was larger over the right than the left hemisphere. We conclude that the two hemispheres exchange information symmetrically at early stages of face processing and together generate a shared facial representation, which is better when facial information is directly presented to the right hemisphere (RH; L faces) than to the left hemisphere (LH; R faces) and best when both hemispheres receive facial information (B faces).

