EEG Analysis to Decode Human Memory Responses in Face Recognition Task Using Deep LSTM Network

2016 ◽  
Vol 33 (S1) ◽  
pp. S367-S368
Author(s):  
N. Deltort ◽  
J.R. Cazalets ◽  
A. Amestoy ◽  
M. Bouvard

Studies on individuals without developmental disorder show that the mental representation of the self-face is subject to a multimodal process, in the same way that the representation of the self-body is. People with autistic spectrum disorder (ASD) have a particular pattern of face processing and a multimodal integration deficit. The objectives of our study were to evaluate self-face recognition and the effect of interpersonal multisensory stimulation (IMS) in individuals with ASD. We aimed to show a self-face recognition deficit and a lack of multimodal integration in this population. IMS consisted of the presentation of a movie displaying an unfamiliar face being touched intermittently, while the examiner applied the same stimulation synchronously or asynchronously to the participant. The effect of IMS was measured in two groups, with and without ASD, by a self-face recognition task on morphing movies made from self-face and unfamiliar-face pictures. There was a significant difference between groups on self-recognition before stimulation, showing a self-face recognition deficit in individuals with ASD. Results for the control group showed a significant effect of IMS on self-face recognition in the synchronous condition, suggesting that the mental representation of the self-face is updated by a multimodal process. In contrast, no significant effect of IMS was found in the ASD group, suggesting a multimodal integration deficit in the constitution of self-representation in this population. Our results show the existence of a self-face recognition deficit in individuals with ASD, which may be linked to a lack of multimodal integration in the development of the self-face representation.
Disclosure of interest: The authors have not supplied their declaration of competing interest.


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further confounded because makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different facial makeup on different days, owing to interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing the original face images and versions with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images in improving recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
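As a rough illustration of the pipeline described above, the sketch below augments face images with simple photometric jitter (only a crude stand-in for true synthetic-makeup rendering) and extracts dCNN embeddings for matching; the ResNet-18 backbone, transform parameters, and helper names are assumptions rather than the authors' setup.

```python
# Minimal sketch (not the authors' code): augment a face image with photometric
# jitter as a crude stand-in for synthetic makeup, then extract a dCNN embedding
# and match by cosine similarity. ResNet-18 and all parameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.4, hue=0.05),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # keep the 512-d feature vector
backbone.eval()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Return an L2-normalised feature embedding for one face image."""
    x = augment(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()

def match(probe: torch.Tensor, gallery: dict) -> str:
    """Return the gallery identity with the highest cosine similarity."""
    return max(gallery, key=lambda name: float(probe @ gallery[name]))
```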


2020 ◽  
Author(s):  
Anna Stumps ◽  
Elyana Saad ◽  
David Rothlein ◽  
Mieke Verfaellie ◽  
Joseph DeGutis

Converging lines of research suggest that many developmental prosopagnosics (DPs) have impairments beyond face perception, but currently no framework exists to characterize these impaired mechanisms. One potential extra-perceptual deficit is that DPs encode and retrieve faces in a manner distinct from controls that does not sufficiently support individuation. To test this possibility, 30 DPs and 30 matched controls performed an old/new face recognition task while providing confidence ratings, to which a model-based ROC analysis was applied. DPs had significantly reduced recollection compared to controls, driven by fewer ‘high-confidence target’ responses, but intact familiarity. Recollection and face perception ability uniquely predicted objective and subjective prosopagnosia symptoms, together explaining 51% and 56% of the variance, respectively. These results suggest that a specific deficit in face recollection in DP may represent a core aspect of the difficulty in confidently identifying an individual by their face.
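For readers unfamiliar with model-based ROC analysis of confidence ratings, the sketch below fits a simple dual-process signal-detection model in which recollection is a threshold parameter R and familiarity is a Gaussian strength d'; the least-squares fit, parameter bounds, and example numbers are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch (assumed, not the study's analysis): fit a dual-process
# signal-detection model to cumulative confidence-rating ROC points.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpsd_roc(R, dprime, criteria):
    """Predicted hit and false-alarm rates at each confidence criterion."""
    hits = R + (1 - R) * norm.cdf(dprime - criteria)   # recollection + familiarity
    fas = norm.cdf(-criteria)                           # lures: familiarity only
    return hits, fas

def fit_dpsd(hit_rates, fa_rates):
    """Estimate recollection R and familiarity d' from observed ROC points."""
    criteria0 = norm.ppf(1 - np.asarray(fa_rates))      # start criteria from FAs
    def loss(theta):
        R, dprime, criteria = theta[0], theta[1], theta[2:]
        h, f = dpsd_roc(R, dprime, criteria)
        return np.sum((h - hit_rates) ** 2 + (f - fa_rates) ** 2)
    theta0 = np.concatenate(([0.3, 1.0], criteria0))
    bounds = [(0, 1), (0, 5)] + [(-3, 3)] * len(criteria0)
    res = minimize(loss, theta0, bounds=bounds)
    return {"recollection": res.x[0], "familiarity_dprime": res.x[1]}

# Example with made-up cumulative rates across 5 confidence levels.
hits = np.array([0.30, 0.55, 0.70, 0.82, 0.93])
fas = np.array([0.05, 0.15, 0.30, 0.50, 0.75])
print(fit_dpsd(hits, fas))
```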


Author(s):  
Taha H. Rassem ◽  
Nasrin M. Makbol ◽  
Sam Yin Yee

Nowadays, face recognition has become one of the important topics in the computer vision and image processing area, owing to its use in many applications. The key to face recognition is extracting distinguishable features from the image to achieve high recognition accuracy. The local binary pattern (LBP) and many of its variants have been used as texture features in many face recognition systems. Although LBP has performed well in many fields, it is sensitive to noise, and different patterns may be classified into the same class, which reduces its discriminative power. The completed local ternary pattern (CLTP) is one of the texture descriptors proposed to overcome these drawbacks of LBP; it has outperformed LBP and some of its variants in fields such as texture, scene, and event image classification. In this study, we investigate the performance of the CLTP operator on the face recognition task. The Japanese Female Facial Expression (JAFFE) and FEI face databases are used in the experiments. In the experimental results, CLTP outperformed some previous texture descriptors, achieving classification rates of up to 99.38% and 85.22% on JAFFE and FEI, respectively.
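To make the descriptor family concrete, the sketch below computes the classic 3x3 LBP code together with a simplified ternary split in the spirit of CLTP; it omits the magnitude component and the completed-modeling scheme of the full CLTP, so it should be read as an assumed illustration rather than the operator used in the paper.

```python
# Minimal sketch (an assumption, not the paper's implementation): classic LBP
# plus upper/lower ternary planes obtained by thresholding neighbour differences
# with a tolerance t, as in local-ternary-style descriptors.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbours, clockwise

def lbp_and_ternary_codes(img: np.ndarray, t: float = 5.0):
    """Return per-pixel LBP code and the upper/lower ternary codes."""
    img = img.astype(np.float64)
    h, w = img.shape
    lbp = np.zeros((h - 2, w - 2), dtype=np.uint8)
    upper = np.zeros_like(lbp)
    lower = np.zeros_like(lbp)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(OFFSETS):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        diff = neigh - center
        lbp |= ((diff >= 0).astype(np.uint8) << bit)     # classic LBP sign bit
        upper |= ((diff > t).astype(np.uint8) << bit)    # ternary +1 plane
        lower |= ((diff < -t).astype(np.uint8) << bit)   # ternary -1 plane
    return lbp, upper, lower

def histogram(codes: np.ndarray) -> np.ndarray:
    """256-bin normalised histogram used as the texture feature vector."""
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```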


1974 ◽  
Vol 39 (1) ◽  
pp. 109-110 ◽  
Author(s):  
Merideth Going ◽  
J. D. Read

In a face-recognition task, highly unique faces were correctly identified more often than low-unique faces and female faces more frequently than male faces. At both levels of uniqueness, female Ss recognized female faces more frequently than male faces whereas male Ss recognized male and female faces with equal facility.


Perception ◽  
10.1068/p5637 ◽  
2007 ◽  
Vol 36 (9) ◽  
pp. 1334-1352 ◽  
Author(s):  
Simone K Favelle ◽  
Stephen Palmisano ◽  
Ryan T Maloney

Previous research into the effects of viewpoint change on face recognition has typically dealt with rotations around the head's vertical axis (yaw). Another common, although less studied, source of viewpoint variation in faces is rotation around the head's horizontal axis (pitch). In the current study we used both a sequential matching task and an old/new recognition task to examine the effect of viewpoint change following rotation about the pitch and yaw axes on human face recognition. The results of both tasks showed that recognition performance was better for faces rotated about yaw than about pitch. Further, recognition performance for faces rotated upwards on the pitch axis was better than for faces rotated downwards. Thus, equivalent angular rotations about pitch and yaw do not produce equivalent viewpoint-dependent declines in recognition performance.


2013 ◽  
Vol 760-762 ◽  
pp. 1615-1620 ◽  
Author(s):  
Xiao Yuan Jing ◽  
Wen Qian Li ◽  
Hao Gao ◽  
Yong Fang Yao ◽  
Jiang Yue Man

As one of the most popular research topics, the sparse representation (SR) technique has been successfully employed to solve the face recognition task. Although current SR-based methods achieve high classification accuracy, they implicitly assume that the losses of all misclassifications are the same. However, in many real-world face recognition applications this assumption may not hold, as different misclassifications can lead to different losses. Driven by this concern, in this paper we propose cost-sensitive sparsity preserving projections (CSSPP) for face recognition. CSSPP incorporates the cost information of sparse representation while calculating the sparse structure of the training set. It then employs the sparsity preserving projection method to obtain the projection transform and retain the sparse structure in the low-dimensional space. Experimental results on the public AR and FRGC face databases demonstrate that the proposed approach achieves a high recognition rate and low misclassification loss, which validates its efficacy.
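The sketch below illustrates plain sparsity preserving projections, the building block that CSSPP extends: each training sample is sparsely coded over the remaining samples, and the projection is obtained from a generalized eigenproblem. The lasso coder, regularization constants, and function names are assumptions, and the cost-sensitive weighting described in the paper is not reproduced.

```python
# Minimal sketch (an assumption, not the authors' CSSPP code) of plain
# sparsity preserving projections without the cost-sensitive weighting.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def spp(X: np.ndarray, n_components: int = 20, alpha: float = 0.01) -> np.ndarray:
    """X: (d, n) matrix of column-wise samples. Returns a (d, k) projection W."""
    d, n = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=1)                 # dictionary without x_i
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(others, X[:, i])                       # sparse code of x_i
        S[i, :] = np.insert(coder.coef_, i, 0.0)         # zero self-weight
    S_beta = S + S.T - S.T @ S                           # symmetric sparse-structure matrix
    M1 = X @ S_beta @ X.T
    M2 = X @ X.T + 1e-6 * np.eye(d)                      # regularise for stability
    vals, vecs = eigh(M1, M2)                            # generalised eigenproblem
    k = min(n_components, d)
    idx = np.argsort(vals)[::-1][:k]                     # keep the top-k directions
    return vecs[:, idx]

# Usage idea: project gallery and probe features with W = spp(X), then
# classify projected probes with a nearest-neighbour rule.
```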

