Representations in learning new faces: Evidence from prosopagnosia

1996 ◽  
Vol 2 (3) ◽  
pp. 240-248 ◽  
Author(s):  
Michael R. Polster ◽  
Steven Z. Rapcsak

Abstract: We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given “shallow” encoding instructions to focus on facial features. By contrast, he performs relatively well with “deep” encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations. (JINS, 1996, 2, 240–248.)

2005 ◽  
Vol 17 (10) ◽  
pp. 1652-1666 ◽  
Author(s):  
Roberto Caldara ◽  
Philippe Schyns ◽  
Eugène Mayer ◽  
Marie L. Smith ◽  
Frédéric Gosselin ◽  
...  

One of the most impressive disorders following brain damage to the ventral occipitotemporal cortex is prosopagnosia, or the inability to recognize faces. Although acquired prosopagnosia with preserved general visual and memory functions is rare, several cases have been described in the neuropsychological literature and studied at the functional and neural level over recent decades. Here we tested a brain-damaged patient (PS) presenting a deficit restricted to the category of faces to clarify the nature of the missing and preserved components of the face processing system when it is selectively damaged. After learning to identify 10 faces with neutral and happy expressions through extensive training, we investigated patient PS's recognition of faces using Bubbles, a response classification technique that sampled facial information across the faces in different bandwidths of spatial frequencies [Gosselin, F., & Schyns, P. G., Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271, 2001]. Although PS gradually used less information (i.e., fewer bubbles) to identify faces over testing, the total information required was much larger than for normal controls and decreased less steeply with practice. Most importantly, the facial information used to identify individual faces differed between PS and controls. Specifically, in marked contrast to controls, PS did not use the optimal eye information to identify familiar faces, but instead used the lower part of the face, including the mouth and the external contours, as normal observers typically do when processing unfamiliar faces. Together, the findings reported here suggest that damage to the face processing system is characterized by an inability to use the information that is optimal to judge identity, focusing instead on suboptimal information.
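
A minimal sketch of the Bubbles sampling idea may make the technique concrete: the face is split into spatial frequency bands, and each band is revealed only through randomly placed Gaussian apertures whose size scales with the band's coarseness. The band scales, bubble counts, and function names below are illustrative assumptions, not the study's exact parameters.

# Illustrative sketch of Bubbles-style sampling (Gosselin & Schyns, 2001);
# band scales, bubble counts, and the toy image are assumptions for demo only.
import numpy as np
from scipy.ndimage import gaussian_filter

def bubbles_mask(shape, n_bubbles, sigma, rng):
    # Sum of Gaussian apertures ("bubbles") at random pixel locations.
    mask = np.zeros(shape)
    rows = rng.integers(0, shape[0], n_bubbles)
    cols = rng.integers(0, shape[1], n_bubbles)
    mask[rows, cols] = 1.0
    mask = gaussian_filter(mask, sigma)
    return mask / mask.max()          # normalize so apertures peak at 1

def bubbles_stimulus(face, n_bubbles_per_band=10, seed=0):
    # Split the face into bandpass layers (difference-of-Gaussians pyramid)
    # and reveal each layer only inside that band's bubbles; coarser bands
    # get proportionally larger apertures.
    rng = np.random.default_rng(seed)
    sigmas = [2, 4, 8, 16, 32]        # lowpass scales defining five bands
    lowpass = [face] + [gaussian_filter(face, s) for s in sigmas]
    stimulus = lowpass[-1].copy()     # keep the residual base band intact
    for i in range(len(sigmas)):
        band = lowpass[i] - lowpass[i + 1]
        mask = bubbles_mask(face.shape, n_bubbles_per_band, sigmas[i], rng)
        stimulus += band * mask       # this band is visible only in bubbles
    return stimulus

face = np.random.rand(128, 128)       # stand-in for a real face image
stim = bubbles_stimulus(face)

Correlating the per-trial masks with the observer's accuracy over many trials then reveals which regions and frequency bands carry the diagnostic information, which is how eye versus mouth use can be compared between PS and controls.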


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 67-67 ◽  
Author(s):  
H Hill ◽  
R Watt

The first task of any face processing system is detection of the face. We studied how the human visual system achieves face detection using a 2AFC task in which subjects were required to detect a face in the image of a natural scene. Luminance noise was added to the stimuli and performance was measured as a function of the orientation and orientation bandwidth of the noise. Sensitivity levels and the effects of orientation bandwidth were similar for horizontally and vertically oriented noise. Performance was reduced for the smallest orientation bandwidth (5.6°) noise, but sensitivity did not decrease further with increasing bandwidth until a point between 45° and 90°. The results suggest that important information may be oriented close to the vertical and horizontal. To test whether the results were specific to the task of face detection, the same noise was added to the images in a man-made versus natural scene decision task. Equivalent levels of noise were found to be more disruptive and the effect of orientation bandwidth was different. The results are discussed in terms of models of face processing making use of oriented filters (eg Watt and Dakin, 1993 Perception 22 Supplement, 12) and local energy models of feature detection (Morrone and Burr, 1988 Proceedings of the Royal Society of London B 235 221–245).
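
A minimal sketch of how orientation-bandpassed luminance noise of this kind can be generated may be useful: white noise is filtered in the Fourier domain so that only energy within a wedge of orientations around a centre angle survives. The image size, centre angle, and function name are illustrative assumptions, not the study's stimulus code.

# Illustrative sketch: white noise restricted to an orientation band in the
# Fourier domain; parameters are assumptions, not the study's exact stimuli.
import numpy as np

def oriented_noise(size, center_deg, bandwidth_deg, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    F = np.fft.fftshift(np.fft.fft2(noise))
    fy, fx = np.indices((size, size)) - size // 2
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0     # angle of each frequency
    # Angular distance to the centre orientation on the 180-degree circle:
    diff = np.abs((theta - center_deg + 90.0) % 180.0 - 90.0)
    wedge = diff <= bandwidth_deg / 2.0                # keep only this wedge
    wedge[size // 2, size // 2] = True                 # preserve the DC term
    filtered = np.fft.ifft2(np.fft.ifftshift(F * wedge)).real
    return filtered / filtered.std()                   # unit-variance noise

# e.g., noise centred on vertical with the smallest bandwidth reported (5.6 deg):
noise = oriented_noise(256, center_deg=90.0, bandwidth_deg=5.6)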


1998 ◽  
Vol 06 (03) ◽  
pp. 281-298 ◽  
Author(s):  
Terry Huntsberger ◽  
John Rose ◽  
Shashidhar Ramaka

The human face is one of the most important patterns our visual system receives. It establishes a person's identity and also plays a significant role in everyday communication. Humans can recognize familiar faces under varying lighting conditions, at different scales, and even after the face has changed due to aging, hair style, glasses, or facial hair. Our ease in recognizing faces is a strong motivation for the investigation of computational models of face processing. This paper presents a newly developed face processing system called Fuzzy-Face that combines wavelet pre-processing of input with a fuzzy self-organizing feature map algorithm. The wavelet-derived face space is partitioned into fuzzy sets which are characterized by face exemplars and membership values to those exemplars. This system learns faces using relatively few training epochs, shows total recall for faces it has already been shown, generalizes to face images acquired under different lighting conditions, and has rudimentary gender discrimination capabilities. We also include the results of some experimental studies.
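
The abstract names two ingredients, wavelet preprocessing and a fuzzy self-organizing feature map; a generic sketch of that pipeline follows. The wavelet choice, decomposition level, and fuzzy-c-means-style membership rule are assumptions made for illustration, not Fuzzy-Face's actual algorithm.

# Generic sketch of a wavelet + fuzzy-membership face pipeline; the wavelet,
# level, and membership rule are illustrative, not Fuzzy-Face's real settings.
import numpy as np
import pywt

def wavelet_features(img, wavelet="haar", level=2):
    # Coarse approximation coefficients serve as the face-space vector.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()

def fuzzy_memberships(x, exemplars, m=2.0, eps=1e-12):
    # Fuzzy-c-means-style memberships: closer exemplars get higher weight,
    # and memberships sum to 1 across all exemplars.
    d = np.linalg.norm(exemplars - x, axis=1) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

rng = np.random.default_rng(0)
train = rng.random((5, 64, 64))                    # stand-ins for learned faces
exemplars = np.stack([wavelet_features(f) for f in train])
probe = train[2] + 0.01 * rng.standard_normal((64, 64))   # noisy copy of face 2
u = fuzzy_memberships(wavelet_features(probe), exemplars)
print(u.argmax())                                  # strongest membership: 2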


2001 ◽  
Vol 15 (4) ◽  
pp. 275-285 ◽  
Author(s):  
Melissa S. James ◽  
Stuart J. Johnstone ◽  
William G. Hayward

Abstract: The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6) while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces was presented in a study phase, followed by a test phase in which subjects indicated via a button press whether inverted and upright faces were studied or novel. An inversion effect, illustrating the disruption of upright configural information, was reflected in accuracy measures and in greater lateral N2 amplitude to inverted faces, suggesting that structural encoding is harder for inverted faces. An own-race advantage, reflected in accuracy measures and ERP effects, was also found; it may reflect the use of configural encoding for the more frequently experienced own-race faces and feature-based encoding for the less familiar other-race faces. The midline N2 was larger to configurally encoded faces (i.e., own-race and upright), possibly suggesting that configural encoding involves more complex processing than feature-based encoding. An N400-like component was sensitive to feature manipulations, with greater amplitude to other-race than own-race faces and to inverted than upright faces. This effect was interpreted as reflecting increased activation of incompatible representations activated by a feature-based strategy used in the processing of other-race and inverted faces. The late positive complex was sensitive to configural manipulation, with larger amplitude to other-race than own-race faces, and was interpreted as reflecting the updating of an own-race norm used in face recognition to incorporate other-race information.


2019 ◽  
Vol 40 (9) ◽  
pp. 2581-2595 ◽  
Author(s):  
Daniel B. Elbich ◽  
Peter C.M. Molenaar ◽  
K. Suzanne Scherf

2021 ◽  
Vol 11 (7) ◽  
pp. 942
Author(s):  
Antonio Maffei ◽  
Jennifer Goertzen ◽  
Fern Jaspers-Fayer ◽  
Killian Kleffner ◽  
Paola Sessa ◽  
...  

Behavioral and electrophysiological correlates of the influence of task demands on the processing of happy, sad, and fearful expressions were investigated in a within-subjects study that compared a perceptual distraction condition with task-irrelevant faces (i.e., a covert emotion task) to an emotion-relevant categorization condition (i.e., an overt emotion task). A state-of-the-art non-parametric mass univariate analysis method was used to address the limitations of previous studies. Behaviorally, participants categorized happy faces faster in the overt task, and were slower and less accurate when categorizing sad and fearful faces; there were no behavioral differences in the covert task. Event-related potential (ERP) responses to the emotional expressions included the N170 (140–180 ms), which was enhanced by emotion irrespective of task, with happy and sad expressions eliciting greater amplitudes than neutral expressions. EPN (200–400 ms) amplitude was modulated by task, with greater voltages in the overt condition, and by emotion; however, there was no interaction of emotion and task. ERP activity was modulated by emotion as a function of task only at a late processing stage, which included the LPP (500–800 ms), with fearful and sad faces showing greater amplitude enhancements than happy faces. This study reveals that affective content does not necessarily require attention in the early stages of face processing, supporting recent evidence that the core and extended parts of the face processing system act in parallel rather than serially. The role of voluntary attention starts at an intermediate stage and fully modulates the response to emotional content in the final stage of processing.
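
For readers unfamiliar with the analysis, a minimal sketch of a non-parametric mass univariate ERP test of the general kind described (a sign-flip permutation test with a max-statistic correction over all channel-by-time points) follows. The data shapes, permutation count, and function name are illustrative assumptions; the study's actual method may differ in detail.

# Illustrative sign-flip permutation test with max-statistic correction;
# shapes and permutation count are assumptions, not the study's settings.
import numpy as np

def max_stat_permutation_test(diff, n_perm=2000, seed=0):
    # diff: (subjects, channels, times) condition-difference waveforms.
    rng = np.random.default_rng(seed)
    n = diff.shape[0]
    def tmap(d):
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))  # one-sample t
    t_obs = tmap(diff)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=n)[:, None, None]
        max_null[i] = np.abs(tmap(diff * signs)).max()  # max over whole map
    # Familywise-corrected p: fraction of null maxima beating each |t|.
    p = (max_null[None, None, :] >= np.abs(t_obs)[..., None]).mean(-1)
    return t_obs, p

d = np.random.default_rng(1).standard_normal((20, 32, 250))  # toy ERP data
t, p = max_stat_permutation_test(d, n_perm=500)
print((p < 0.05).sum())        # points surviving correction (near 0 for noise)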


1991 ◽  
Vol 3 (1) ◽  
pp. 42-58 ◽  
Author(s):  
Lynn A. Hillger ◽  
Olivier Koenig

This article addresses three issues in face processing: First, is face processing primarily accomplished by the right hemisphere, or do both left- and right-hemisphere mechanisms play important roles? Second, are the mechanisms the same as those involved in general visual processing, or are they dedicated to face processing? Third, how can the mechanisms be characterized more precisely in terms of processes such as visual parsing? We explored these issues using the divided visual field methodology in four experiments. Experiments 1 and 2 provided evidence that both left- and right-hemisphere mechanisms are involved in face processing. In Experiment 1, a right-hemisphere advantage was found for both Same and Different trials when Same faces were identical and Different faces differed on all three internal facial features. Experiment 2 replicated the right-hemisphere advantage for Same trials but showed a left-hemisphere advantage for Different trials when one of three facial features differed between the target and the probe faces. Experiment 3 showed that the right-hemisphere advantage obtained with upright faces in Experiment 2 disappeared when the faces were inverted. This result suggests that there are right-hemisphere mechanisms specialized for processing upright faces, although it could not be determined whether these mechanisms are completely face-specific. Experiment 3 also provided evidence that the left-hemisphere mechanisms utilized in face processing tasks are general-purpose visual mechanisms not restricted to particular classes of visual stimuli. In Experiment 4, a left-hemisphere advantage was obtained when the task was to find one facial feature that was the same between the target and the probe faces. We suggest that left-hemisphere advantages shown in face processing are due to the parsing and analysis of the local elements of a face.


2003 ◽  
Vol 15 (3) ◽  
pp. 462-474 ◽  
Author(s):  
Galit Yovel ◽  
Jerre Levy ◽  
Marcia Grabowecky ◽  
Ken A. Paller

Studies in healthy individuals and split-brain patients have shown that the representation of facial information from the left visual field (LVF) is better than the representation of facial information from the right visual field (RVF). To investigate the neurophysiological basis of this LVF superiority in face perception, we recorded event-related potentials (ERPs) to centrally presented face stimuli in which relevant facial information is present bilaterally (B faces) or only in the left (L faces) or the right (R faces) visual field. Behavioral findings showed best performance for B faces and, in line with the LVF superiority, better performance for L than R faces. Evoked potentials to B, L, and R faces at 100 to 150 msec poststimulus showed no evidence of asymmetric transfer of information between the hemispheres at early stages of visual processing, suggesting that this factor is not responsible for the LVF superiority. Neural correlates of the LVF superiority, however, were manifested in a shorter latency of the face-specific N170 component to L than R faces and in a larger amplitude to L than R faces at 220–280 and 400–600 msec over both hemispheres. These ERP amplitude differences between L and R faces covaried across subjects with the extent to which the face-specific N170 component was larger over the right than the left hemisphere. We conclude that the two hemispheres exchange information symmetrically at early stages of face processing and together generate a shared facial representation, which is better when facial information is directly presented to the right hemisphere (RH; L faces) than to the left hemisphere (LH; R faces) and best when both hemispheres receive facial information (B faces).

