Looking beyond the face area: lesion network mapping of prosopagnosia

Brain ◽  
2019 ◽  
Vol 142 (12) ◽  
pp. 3975-3990 ◽  
Author(s):  
Alexander L Cohen ◽  
Louis Soussand ◽  
Sherryse L Corrow ◽  
Olivier Martinaud ◽  
Jason J S Barton ◽  
...  

Face blindness can occur after injury to a variety of brain locations, and yet the regions critical for face recognition remain unclear. Cohen et al. show that lesions that cause face blindness map to a specific brain network, and use this to predict subclinical deficits in an independent lesion cohort.

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5068
Author(s):  
Rita Goel ◽  
Irfan Mehmood ◽  
Hassan Ugail

Accurate identification of siblings through face recognition is a challenging task, predominantly because of the high degree of similarity among the faces of siblings. In this study, we investigate the use of state-of-the-art deep learning face recognition models to evaluate their capacity to discriminate between sibling faces using various similarity indices. The specific models examined for this purpose are FaceNet, VGGFace, VGG16, and VGG19. For each pair of images provided, embeddings are calculated using the chosen deep learning model. Five standard similarity measures, namely cosine similarity, Euclidean distance, structured similarity, Manhattan distance, and Minkowski distance, are used to classify image pairs as same or different identity against a threshold defined for each similarity measure. The accuracy, precision, and misclassification rate of each model are calculated using standard confusion matrices. Four experimental datasets, covering the full frontal face, eyes, nose, and forehead of sibling pairs, are constructed from the publicly available HQf subset of the SiblingDB database. The experimental results show that the accuracy of the chosen deep learning models in distinguishing siblings varies with the face area compared. VGGFace performs best on the full frontal face and the eyes, with classification accuracy above 95% in these cases. However, its accuracy degrades significantly when noses are compared, where FaceNet provides the best results. Similarly, VGG16 and VGG19 are not the best models for classification using the eyes, but they provide favorable results when foreheads are compared.
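
As an illustration of the classification step this abstract describes, the sketch below computes several of the named similarity measures over a pair of embeddings and applies a per-measure threshold. The embeddings are random stand-ins (the study derives them from models such as FaceNet or VGGFace), and the threshold value is hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def manhattan(a, b):
    return float(np.abs(a - b).sum())

def minkowski(a, b, p=3):
    return float((np.abs(a - b) ** p).sum() ** (1.0 / p))

# Random stand-ins for the embeddings of one image pair (the study would
# use e.g. 128-D FaceNet or VGGFace descriptors). Structured similarity
# (SSIM) operates on the images themselves, so it is omitted here.
rng = np.random.default_rng(0)
emb_a, emb_b = rng.normal(size=128), rng.normal(size=128)

# Hypothetical per-measure threshold: "same identity" is declared when the
# similarity exceeds it (for distances, when the distance falls below one).
same = cosine_similarity(emb_a, emb_b) > 0.7
print("cosine:   ", round(cosine_similarity(emb_a, emb_b), 3))
print("euclidean:", round(euclidean(emb_a, emb_b), 3))
print("manhattan:", round(manhattan(emb_a, emb_b), 3))
print("minkowski:", round(minkowski(emb_a, emb_b), 3))
print("same identity" if same else "different identities")
```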


We developed an approach to the detection and identification of human faces and describe a working, near-real-time face recognition system that tracks a subject's face and then recognizes the person by comparing characteristics of the face to those in a database. Our approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and can therefore be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ("face space") that best encodes the variation among the database images. The face space is defined by the "eigenfaces", which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces.
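
A minimal sketch of the eigenface pipeline this abstract describes, assuming toy random data in place of a real face database: PCA of the centered face matrix yields the eigenfaces, images are projected into face space, and a probe is identified by its nearest neighbour there.

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, h, w = 20, 32, 32            # toy database: 20 images of 32x32
faces = rng.random((n_faces, h * w))  # rows = flattened face images

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the leading eigenvectors of the face set's covariance;
# the SVD of the centered data matrix gives them directly (rows of Vt).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 8
eigenfaces = vt[:k]                   # top-k eigenfaces

# Project the database and a probe image into face space.
db_weights = centered @ eigenfaces.T
probe = faces[3] + 0.01 * rng.normal(size=h * w)   # noisy copy of face 3
probe_weights = (probe - mean_face) @ eigenfaces.T

# Recognize by nearest neighbour in face space.
match = np.argmin(np.linalg.norm(db_weights - probe_weights, axis=1))
print(f"probe identified as database face {match}")   # -> 3
```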


2019 ◽  
Vol 8 (3) ◽  
pp. 4123-4128

Face recognition is one of the authoritative biometric methods for identifying individuals, because the face is a distinctive biometric trait of a human being and a superior means of recognition. This paper proposes a novel face recognition method using extended LBP features. Pre-processing is carried out to extract the face area using the Viola-Jones algorithm, and all images are resized to 100x100. The LBP operator is applied to the resized face images by rotating each image, i.e., at 7 degrees to the left, 7 degrees to the right, and at zero degrees, to extract feature vectors; the final features are obtained by applying a histogram technique. An SVM classifier is used to match database images against test images and to measure performance in terms of TSR, FAR, FRR, and EER. The performance parameters are compared with existing algorithms on the YALE and FERET databases.
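
The sketch below illustrates the described pipeline (LBP codes pooled into histograms over small rotations, then an SVM) using scikit-image and scikit-learn. The images are random stand-ins for the resized 100x100 face crops, and the rotation angles follow the abstract's description.

```python
import numpy as np
from scipy.ndimage import rotate
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP codes pooled into a normalized histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

def extract_features(img, angles=(-7, 0, 7)):
    """Concatenate LBP histograms of the image at several rotations."""
    return np.concatenate(
        [lbp_histogram(rotate(img, a, reshape=False)) for a in angles]
    )

# Toy data: two "subjects", five random 100x100 images each.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(10, 100, 100), dtype=np.uint8)
X = np.array([extract_features(im) for im in imgs])
y = np.array([0] * 5 + [1] * 5)

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:2]))
```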


2010 ◽  
Vol 40-41 ◽  
pp. 523-530 ◽  
Author(s):  
Dong Cheng Shi ◽  
Qing Qing Wang

As the most successful linear discriminant method, principal component analysis (PCA) is widely used in identification tasks such as face recognition. However, traditional PCA is influenced by lighting conditions and facial expression, and it extracts only the global features of the image, so the recognition rate is not very high. To identify facial features more accurately and to extract local features that contribute more to identification, this paper proposes a block face recognition method based on the wavelet transform (WT-BPCA). In the algorithm, face images undergo two-dimensional wavelet decomposition, from which the low-frequency sub-images are extracted. Because different face areas contribute differently to recognition, we use a sub-block PCA method: weighting factors are generated according to each block's contribution to the recognition results, which effectively improves the PCA-based face recognition rate. Finally, we construct a classifier for recognition. Experiments on the ORL face database show that this method is superior to traditional PCA.
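
A minimal sketch of the WT-BPCA idea on toy data: a one-level 2-D wavelet decomposition keeps the low-frequency sub-image, PCA is run per sub-block, and per-block distances are combined with weighting factors. The block weights here are hypothetical illustrations, not the paper's values.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def low_freq(img):
    """One-level 2-D DWT; keep the approximation (LL) sub-image."""
    cA, _ = pywt.dwt2(img, "haar")
    return cA

def blocks(img, n=2):
    """Split an image into an n x n grid of flattened sub-blocks."""
    h, w = img.shape
    return [img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n].ravel()
            for i in range(n) for j in range(n)]

rng = np.random.default_rng(0)
train = [rng.random((64, 64)) for _ in range(10)]    # toy face images
probe = train[4] + 0.01 * rng.normal(size=(64, 64))  # noisy copy of face 4

train_blocks = [np.array([blocks(low_freq(im))[b] for im in train])
                for b in range(4)]
probe_blocks = blocks(low_freq(probe))

weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical per-block contributions
scores = np.zeros(len(train))
for b, w in enumerate(weights):
    pca = PCA(n_components=5).fit(train_blocks[b])
    tb = pca.transform(train_blocks[b])
    pb = pca.transform(probe_blocks[b][None, :])
    scores += w * np.linalg.norm(tb - pb, axis=1)    # weighted distance

print(f"probe identified as face {scores.argmin()}")  # -> 4
```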


2014 ◽  
Vol 543-547 ◽  
pp. 2531-2534
Author(s):  
Hong Wei Di ◽  
Cai Yun Wang

To address the problem that traditional automatic image clipping methods rely on simple rules, such as fixed size and fixed location, an improved algorithm based on face recognition is proposed. First, the face region is located by face detection. Then, according to the proportion of the face area within the selected region of the template image, the size of the clipping region in the image to be cut is matched. Finally, the cutting position is obtained from the relative position of the face center in the template image. Experimental results show that this algorithm achieves a better clipping effect.
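
The sketch below illustrates the described cropping scheme with OpenCV's stock Haar-cascade detector. The template ratios (face-to-crop area proportion and relative face-center position) are hypothetical stand-ins for values that would be measured from the template image.

```python
import math
import cv2

FACE_TO_CROP_RATIO = 0.25     # assumed: face box covers 25% of crop area
FACE_CENTER_REL = (0.5, 0.4)  # assumed: face center at (50%, 40%) of crop

def face_aware_crop(img):
    """Detect the face, then size and place the crop from template ratios."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return img                      # no face: fall back to full image
    x, y, w, h = faces[0]

    # Crop size: keep the template's face-area-to-crop-area proportion.
    s = 1.0 / math.sqrt(FACE_TO_CROP_RATIO)
    cw, ch = int(w * s), int(h * s)

    # Crop position: put the face center where the template puts it.
    cx, cy = x + w // 2, y + h // 2
    x0 = max(0, cx - int(FACE_CENTER_REL[0] * cw))
    y0 = max(0, cy - int(FACE_CENTER_REL[1] * ch))
    return img[y0:y0 + ch, x0:x0 + cw]

# Usage: cropped = face_aware_crop(cv2.imread("photo.jpg"))
```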


2018 ◽  
Vol 29 (9) ◽  
pp. 3590-3605 ◽  
Author(s):  
Jodie Davies-Thompson ◽  
Giulia V Elli ◽  
Mohamed Rezk ◽  
Stefania Benetti ◽  
Markus van Ackeren ◽  
...  

Abstract The brain has separate specialized computational units to process faces and voices located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face–voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study promotes a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.


2017 ◽  
Author(s):  
Gideon Rosenthal ◽  
Michal Tanzer ◽  
Erez Simony ◽  
Uri Hasson ◽  
Marlene Behrmann ◽  
...  

Abstract
Using a novel fMRI-based inter-subject functional correlation (ISFC) approach, which isolates stimulus-locked inter-regional correlation patterns, we compared the cortical topology of the neural circuit for face processing in participants with congenital prosopagnosia (CP) and matched controls. Whereas the anterior temporal lobe served as the major network hub for face processing in controls, this was not the case for the CPs. Instead, this group evinced hyper-connectivity in posterior regions of the visual cortex, mostly associated with the lateral occipital and the inferior temporal cortices. Moreover, the extent to which the network organization was atypical differed as a function of the severity of the face recognition deficit. These results offer new insights into the perturbed cortical topology in CP, which may serve as the underlying neural basis of the behavioral deficits typical of this disorder. The approach adopted here has the potential to uncover altered topologies in other neurodevelopmental disorders, as well.

Significance Statement
Congenital prosopagnosia (CP; 'face blindness'), a developmental deficit in face recognition, is thought to affect up to 3% of the population. Understanding its neural basis is challenging as there is no obvious deficit on conventional structural or functional MRI scans. Using an innovative, fMRI-based inter-subject correlation approach geared towards tracking inter-regional stimulus-locked brain activation, the present study uncovers marked topological differences in a distributed brain network of higher-order visual regions in CP relative to controls. Alteration in topology also differs as a function of the severity of the deficit. These findings shed new light on the neural perturbations underlying CP, and the analytic approach we have adopted may have utility in elucidating the neural basis of other neurodevelopmental disorders such as dyslexia or amusia.
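
For readers unfamiliar with ISFC, the sketch below shows the core leave-one-out computation on toy data: each subject's regional time courses are correlated with the average time courses of all other subjects, so that only stimulus-locked covariation survives. The array shapes and averaging choices are illustrative, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_reg, n_time = 10, 5, 200
# Toy stand-in for (subjects x regions x timepoints) fMRI time courses.
data = rng.normal(size=(n_subj, n_reg, n_time))

def isfc(data):
    n_subj, n_reg, _ = data.shape
    mats = np.zeros((n_subj, n_reg, n_reg))
    for s in range(n_subj):
        # Leave-one-out average over all *other* subjects.
        others = data[np.arange(n_subj) != s].mean(axis=0)
        # Correlate subject s's regions with the group-average regions;
        # the off-diagonal block of the joint matrix is the ISFC matrix.
        full = np.corrcoef(np.vstack([data[s], others]))
        mats[s] = full[:n_reg, n_reg:]
    # Average over subjects (often symmetrized first in practice).
    return mats.mean(axis=0)

print(np.round(isfc(data), 2))
```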


2017 ◽  
Author(s):  
Jodie Davies-Thompson ◽  
Giulia V. Elli ◽  
Mohamed Rezk ◽  
Stefania Benetti ◽  
Markus van Ackeren ◽  
...  

ABSTRACT
The brain has separate specialized computational units to process faces and voices located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest extracted from independent functional localizers similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic Causal Modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area (FFA) and the voice-selective temporal voice area (TVA), with emotional expression affecting the connection strength. Our study promotes a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have only obtained the FIE with one specific ratio of configural alteration. It remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces were entirely identical or different. The paired faces that were to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has been frequently adopted as an index to examine the underlying mechanism of face processing, the emergence of the FIE is not robust with any configural alteration but dependent on the ratio of configural alteration.


Author(s):  
Reshma P ◽  
Muneer VK ◽  
Muhammed Ilyas P

Face recognition is a challenging task for researchers. It is very useful for personal verification and recognition, yet difficult to implement because of the many different conditions under which a human face may be found. This system uses face recognition for computerized attendance marking of students or employees in a room environment, without intervention from the lecturer or employee. The system is very efficient and requires far less maintenance than traditional methods. Among existing methods, PCA is the most efficient technique. In this project a holistic approach is adopted. The system is implemented using MATLAB and provides high accuracy.

