N250 ERP Correlates of the Acquisition of Face Representations across Different Images

2009 ◽  
Vol 21 (4) ◽  
pp. 625-641 ◽  
Author(s):  
Jürgen M. Kaufmann ◽  
Stefan R. Schweinberger ◽  
A. Mike Burton

We used ERPs to investigate neural correlates of face learning. At learning, participants viewed video clips of unfamiliar people, which were presented either with or without voices providing semantic information. In a subsequent face-recognition task (four trial blocks), learned faces were repeated once per block and presented interspersed with novel faces. To disentangle face from image learning, we used different images for face repetitions. Block effects demonstrated that engaging in the face-recognition task modulated ERPs between 170 and 900 msec poststimulus onset for learned and novel faces. In addition, multiple repetitions of different exemplars of learned faces elicited an increased bilateral N250. Source localizations of this N250 for learned faces suggested activity in fusiform gyrus, similar to that found previously for N250r in repetition priming paradigms [Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M., & Kaufmann, J. M. Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409, 2002]. Multiple repetitions of learned faces also elicited increased central–parietal positivity between 400 and 600 msec and caused a bilateral increase of inferior–temporal negativity (>300 msec) compared with novel faces. Semantic information at learning enhanced recognition rates. Faces that had been learned with semantic information elicited somewhat less negative amplitudes between 700 and 900 msec over left inferior–temporal sites. Overall, the findings demonstrate a role of the temporal N250 ERP in the acquisition of new face representations across different images. They also suggest that, compared with visual presentation alone, additional semantic information at learning facilitates postperceptual processing in recognition but does not facilitate perceptual analysis of learned faces.

CNS Spectrums ◽  
2001 ◽  
Vol 6 (1) ◽  
pp. 36-44,57-59 ◽  
Author(s):  
David J. Marcus ◽  
Charles A. Nelson

Abstract This paper critically examines the literature on face recognition in autism, including a discussion of the neural correlates of this ability. The authors begin by selectively reviewing the behavioral and cognitive neuroscience research on whether faces are represented by a “special” behavioral and neural system—one distinct from object processing. The authors then offer a neuroconstructivist model that attempts to account for the robust finding that certain regions in the inferior temporal cortex are recruited in the service of face recognition. This is followed by a review of the evidence supporting the view that face recognition is atypical in individuals with autism. This face-recognition deficit may indicate a continued risk for the further development of social impairments. The authors conclude by speculating on the role of experience in contributing to this atypical developmental pattern and its implications for normal development of face processing.


2015 ◽  
Vol 112 (35) ◽  
pp. E4835-E4844 ◽  
Author(s):  
Meike Ramon ◽  
Luca Vizioli ◽  
Joan Liu-Shuang ◽  
Bruno Rossion

Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network.
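The “progressive revelation” manipulation described above can be illustrated with a short Python sketch: a face image is low-pass filtered with a shrinking Gaussian kernel, so that successive frames add increasingly fine spatial-frequency content. This is only an illustration of the general idea, not the authors' stimulus-generation code; the image variable and parameter values are placeholders.

```python
# Illustrative sketch (not the authors' stimulation code): progressively
# reveal a face image by adding higher spatial frequencies, i.e. low-pass
# filtering with a decreasing Gaussian blur. Image loading is assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def reveal_sequence(image: np.ndarray, n_steps: int = 10,
                    sigma_max: float = 20.0, sigma_min: float = 0.5):
    """Return a list of increasingly detailed versions of `image`.

    Early frames contain only coarse (low-spatial-frequency) information;
    later frames add progressively finer detail, ending near the original.
    """
    # Log-spaced blur widths give a roughly even progression of detail.
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    return [gaussian_filter(image.astype(float), sigma=s) for s in sigmas]

# Example (placeholder): face = a 2-D grayscale array loaded elsewhere.
# frames = reveal_sequence(face)  # present frames[0], frames[1], ... in order
```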


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is compounded when makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different makeup on different days, depending on interpersonal context and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and versions with synthetic makeup variations, allows the dCNN to learn face features under a variety of makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach competes with the state of the art.
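As a rough illustration of the augmentation-plus-dCNN pipeline described above, the PyTorch sketch below fine-tunes a pretrained backbone on a face dataset, with generic color/shading jitter standing in for the paper's synthetic makeup variations. The dataset path, backbone choice, and hyperparameters are assumptions, not details taken from the paper.

```python
# Hedged sketch: augmentation + CNN feature learning in PyTorch. The paper
# synthesizes makeup variations; here ColorJitter stands in for that step,
# and a pretrained ResNet-18 stands in for the paper's dCNN.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    # Stand-in for synthetic makeup variation: perturb color, contrast, hue.
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.4, hue=0.1),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Assumed directory layout: augmented_faces/<identity>/<image>.jpg
train_set = datasets.ImageFolder("augmented_faces", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")   # recent torchvision API
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # identities

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:          # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```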


Author(s):  
Taha H. Rassem ◽  
Nasrin M. Makbol ◽  
Sam Yin Yee

Nowadays, face recognition has become one of the important topics in computer vision and image processing, owing to its use in many applications. The key challenge in face recognition is extracting distinguishable features from the image so as to achieve high recognition accuracy. Local binary pattern (LBP) and many of its variants are used as texture features in many face recognition systems. Although LBP has performed well in many fields, it is sensitive to noise, and different LBP patterns may be classified into the same class, which reduces its discriminative power. Completed Local Ternary Pattern (CLTP) is one of the texture descriptors recently proposed to overcome these drawbacks of LBP, and it has outperformed LBP and several of its variants in fields such as texture, scene, and event image classification. In this study, we investigate the performance of the CLTP operator for the face recognition task. The Japanese Female Facial Expression (JAFFE) and FEI face databases are used in the experiments. In the experimental results, CLTP outperformed several previous texture descriptors, achieving classification rates of up to 99.38% and 85.22% on JAFFE and FEI, respectively.
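For readers unfamiliar with these descriptors, the sketch below computes the basic 3x3 LBP code image and its 256-bin histogram, the building block that CLTP extends (CLTP adds a ternary threshold and additional sign/magnitude codes, which are not reproduced here). This is a generic NumPy illustration, not the authors' implementation.

```python
# Minimal sketch of the basic 3x3 LBP operator that CLTP builds on.
# Pure NumPy, for illustration only.
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes for every interior pixel of a grayscale image."""
    g = gray.astype(float)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        # Set this bit where the neighbour is at least as bright as the center.
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """256-bin normalized LBP histogram used as a face texture descriptor."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```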


2016 ◽  
Author(s):  
Anya Chakraborty ◽  
Bhismadev Chakrabarti

Abstract We live in an age of ‘selfies’. Yet how we look at our own faces has seldom been systematically investigated. In this study we test whether visual processing of self-faces differs from that of other faces, using psychophysics and eye-tracking. Specifically, we tested the association between the psychophysical properties of the self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants looked at the lower part of the face for longer for self-faces than for other faces. Participants with a reduced overlap between self- and other-face representations, as indexed by a steeper slope of the psychometric response curve for self-face recognition, spent a greater proportion of time looking at the upper regions of faces identified as self. Additionally, we tested the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing, particularly in the psychological domain. Autistic traits were associated with reduced looking time to both self and other faces. However, no self-face-specific association with autistic traits was noted, suggesting that autism-related features may be related to self-processing in a domain-specific manner.
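The slope measure mentioned above can be illustrated with a small fitting sketch: a logistic psychometric function is fit to the proportion of “self” responses across self-other morph levels, and its slope parameter indexes how sharply self is distinguished from other. The data and parameter values below are hypothetical, not taken from the study.

```python
# Illustrative sketch (not the authors' analysis code): estimate the slope of
# a psychometric curve for self-face recognition by fitting a logistic
# function to the proportion of "self" responses at each morph level.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'self' responses; x0 = threshold, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data: morph level 0 = other face, 1 = self face.
morph_levels = np.linspace(0, 1, 9)
p_self = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

(x0, k), _ = curve_fit(logistic, morph_levels, p_self, p0=[0.5, 10.0])
print(f"threshold = {x0:.2f}, slope = {k:.1f}")
# A steeper slope (larger k) indexes a sharper self/other boundary,
# i.e. less overlap between self- and other-face representations.
```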


2021 ◽  
Vol 15 ◽  
Author(s):  
Takahiro Sanada ◽  
Christoph Kapeller ◽  
Michael Jordan ◽  
Johannes Grünwald ◽  
Takumi Mitsuhashi ◽  
...  

Face recognition is impaired in patients with prosopagnosia, which may occur as a side effect of neurosurgical procedures. Face-selective regions on the ventral temporal cortex have been localized with electrical cortical stimulation (ECS), electrocorticography (ECoG), and functional magnetic resonance imaging (fMRI). This is the first group study using within-patient comparisons to validate the mapping of face-selective regions with the aforementioned modalities. Five patients underwent surgical treatment of intractable epilepsy and participated in the study. Subdural grid electrodes were implanted on their ventral temporal cortices to localize seizure foci and face-selective regions as part of the functional mapping protocol. Face-selective regions were identified in all patients with fMRI, in four patients with ECoG, and in two patients with ECS. From 177 tested electrode locations in the region of interest (ROI), defined by the fusiform gyrus and the inferior temporal gyrus, 54 face locations were identified by at least one modality across all patients. fMRI mapping showed the highest detection rate, identifying 70.4% of the face-selective locations, whereas ECoG and ECS identified 64.8% and 31.5%, respectively. In total, 28 face locations were co-localized by at least two modalities, with detection rates of 89.3% for fMRI, 85.7% for ECoG, and 53.6% for ECS. None of the five patients had face recognition deficits after surgery, even though five of the face-selective locations, one obtained by ECoG and the other four by fMRI, were within 10 mm of the resected volumes. Moreover, the fMRI data contained a sizable artifact over the ventral temporal cortex in the ROI, caused by anatomical structures at the temporal base. In conclusion, ECS was not sensitive in several patients, whereas ECoG and fMRI even showed activation within 10 mm of the resected volumes. Considering the potential signal drop-out in fMRI, ECoG was the most reliable tool for identifying face-selective locations in this study. A multimodal approach can improve the specificity of ECoG and fMRI while minimizing the number of required ECS sessions. Hence, all modalities should be considered in a clinical mapping protocol that combines results of co-localized face-selective locations.
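The detection-rate bookkeeping described above amounts to simple counting over per-location indicator flags; the sketch below shows one way to compute such rates, using randomly generated placeholder data rather than the study's actual electrode results.

```python
# Hedged sketch of the detection-rate computation: for each tested electrode
# location, record whether each modality flagged it as face selective, then
# compute per-modality rates over locations flagged by at least one (or at
# least two) modalities. Data here are placeholders, not the study's results.
import numpy as np

# rows = electrode locations, columns = [fMRI, ECoG, ECS]; 1 = face selective
flags = np.random.default_rng(0).integers(0, 2, size=(177, 3))

any_modality = flags.any(axis=1)         # "face locations" (>= 1 modality)
two_or_more = flags.sum(axis=1) >= 2     # co-localized locations

for name, col in zip(["fMRI", "ECoG", "ECS"], flags.T):
    rate = col[any_modality].mean()
    co_rate = col[two_or_more].mean()
    print(f"{name}: {rate:.1%} of face locations, {co_rate:.1%} of co-localized")
```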


2001 ◽  
Vol 15 (4) ◽  
pp. 275-285 ◽  
Author(s):  
Melissa S. James ◽  
Stuart J. Johnstone ◽  
William G. Hayward

Abstract The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6), while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces were presented in a study phase, followed by a test phase where subjects indicated whether inverted and upright faces were studied or novel via a button press response. An inversion effect, illustrating the disruption of upright configural information, was reflected in accuracy measures and in greater lateral N2 amplitude to inverted faces, suggesting that structural encoding is harder for inverted faces. An own-race advantage was found, which may reflect the use of configural encoding for the more frequently experienced own-race faces, and feature-based encoding for the less familiar other-race faces, and was reflected in accuracy measures and ERP effects. The midline N2 was larger to configurally encoded faces (i.e., own-race and upright), possibly suggesting configural encoding involves more complex processing than feature-based encoding. An N400-like component was sensitive to feature manipulations, with greater amplitude to other-race than own-race faces and to inverted than upright faces. This effect was interpreted as reflecting increased activation of incompatible representations activated by a feature-based strategy used in processing of other-race and inverted faces. The late positive complex was sensitive to configural manipulation with larger amplitude to other-race than own-race faces, and was interpreted as reflecting the updating of an own-race norm used in face recognition, to incorporate other-race information.


1979 ◽  
Vol 167 (2) ◽  
pp. 259-272 ◽  
Author(s):  
Lynne Seacord ◽  
Charles G. Gross ◽  
Mortimer Mishkin

2015 ◽  
Vol 112 (24) ◽  
pp. E3123-E3130 ◽  
Author(s):  
Ning Liu ◽  
Fadila Hadj-Bouziane ◽  
Katherine B. Jones ◽  
Janita N. Turchi ◽  
Bruno B. Averbeck ◽  
...  

Increasing evidence has shown that oxytocin (OT), a mammalian hormone, modifies the way social stimuli are perceived and the way they affect behavior. Thus, OT may serve as a treatment for psychiatric disorders, many of which are characterized by dysfunctional social behavior. To explore the neural mechanisms mediating the effects of OT in macaque monkeys, we investigated whether OT would modulate functional magnetic resonance imaging (fMRI) responses in face-responsive regions (faces vs. blank screen) evoked by the perception of various facial expressions (neutral, fearful, aggressive, and appeasing). In the placebo condition, we found significantly increased activation for emotional (mainly fearful and appeasing) faces compared with neutral faces across the face-responsive regions. OT selectively, and differentially, altered fMRI responses to emotional expressions, significantly reducing responses to both fearful and aggressive faces in face-responsive regions while leaving responses to appeasing as well as neutral faces unchanged. We also found that OT administration selectively reduced functional coupling between the amygdala and areas in the occipital and inferior temporal cortex during the viewing of fearful and aggressive faces, but not during the viewing of neutral or appeasing faces. Taken together, our results indicate homologies between monkeys and humans in the neural circuits mediating the effects of OT. Thus, the monkey may be an ideal animal model to explore the development of OT-based pharmacological strategies for treating patients with dysfunctional social behavior.

