Medial Prefrontal and Occipito-Temporal Activity At Encoding Determine Enhanced Recognition of Threatening Faces After 1.5 Years

Author(s):
Xiqin Liu
Xinqi Zhou
Yixu Zeng
Jialin Li
Weihua Zhao
...

Abstract Studies have demonstrated that faces with threatening emotional expressions are better remembered than non-threatening faces. However, whether this memory advantage persists over years, and which neural systems underlie it, remains unknown. Here, we employed an individual-difference approach to examine whether neural activity during incidental encoding was associated with differential recognition of faces with emotional expressions (angry, fearful, happy, sad, and neutral) after a retention interval of >1.5 years (N = 89). Behaviorally, we found better recognition of threatening (angry, fearful) than non-threatening (happy, neutral) faces after the >1.5-year delay, an advantage driven by forgetting of non-threatening faces relative to immediate recognition after encoding. Multivariate principal component analysis (PCA) of the behavioral responses further confirmed the discriminative recognition performance for threatening versus non-threatening faces. A voxel-wise whole-brain analysis of the functional magnetic resonance imaging (fMRI) data acquired during incidental encoding revealed that neural activity in the bilateral inferior occipital gyrus (IOG) and ventromedial prefrontal/orbitofrontal cortex (vmPFC/OFC) was associated with individual differences in discriminative emotional face recognition, as measured by an innovative behavioral pattern similarity analysis (BPSA) based on inter-subject correlation (ISC). A regionally focused analysis additionally identified the left fusiform face area (FFA). Overall, the present study provides evidence that threatening facial expressions lead to persistent face recognition over periods of >1.5 years, and that differential encoding-related activity in the medial prefrontal cortex and occipito-temporal cortex may underlie this effect.
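The abstract does not specify the BPSA computation; a minimal numpy sketch of an ISC-style summary over behavioral recognition profiles (the array shapes, toy data, and per-subject averaging scheme are assumptions, not the paper's actual method) might look like:

```python
import numpy as np

def subject_isc(responses):
    """Mean correlation of each subject's response profile with all others.

    responses: (n_subjects, n_conditions) array of behavioral scores,
    e.g. recognition accuracy per experimental condition.
    Returns an (n_subjects,) vector of inter-subject correlations.
    """
    corr = np.corrcoef(responses)             # subject-by-subject matrix
    np.fill_diagonal(corr, np.nan)            # drop self-correlations
    return np.nanmean(corr, axis=1)

# Toy data: a shared response pattern plus subject-specific noise
rng = np.random.default_rng(0)
shared = rng.normal(size=12)                  # 12 hypothetical conditions
data = shared + 0.3 * rng.normal(size=(10, 12))   # 10 subjects
isc = subject_isc(data)
```

Subjects whose profiles track the group-typical pattern get high ISC values; relating those values to encoding activity is the kind of individual-difference link the abstract describes.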

2014
Vol. 971-973
pp. 1838-1842
Author(s):
Qi Rong Zhang

In this paper, we propose a new face recognition approach for image feature extraction, named two-dimensional parameter principal component analysis (2DPPCA). Two-dimensional principal component analysis (2DPCA) is widely used in face recognition, and our method builds on it: the proposed method adds a parameter to the image sample matrices when forming the image covariance matrix. Extensive experiments are performed on the FERET and CMU PIE face databases. The 2DPPCA method achieves better face recognition performance than PCA and 2DPCA, especially on the CMU PIE face database.
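For reference, standard 2DPCA (the baseline this abstract extends) can be sketched as follows; the extra parameter that defines 2DPPCA is not specified in the abstract and is not reproduced here, and the toy images are illustrative:

```python
import numpy as np

def two_dpca(images, k):
    """Standard 2DPCA: project each image matrix onto the top-k
    eigenvectors of the image covariance matrix G. (2DPPCA additionally
    inserts a parameter into G; that detail is not reproduced here.)"""
    A = np.asarray(images, dtype=float)       # (n, h, w)
    centered = A - A.mean(axis=0)
    # G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w)
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    _, vecs = np.linalg.eigh(G)               # eigenvalues in ascending order
    X = vecs[:, -k:]                          # top-k projection axes
    return centered @ X                       # features: (n, h, k)

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 16, 16))         # toy "face" images
features = two_dpca(faces, k=4)
```

Unlike classical PCA, the images are never flattened: each face keeps its matrix form and is projected column-wise, which keeps the eigenproblem small (w x w rather than hw x hw).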


Author(s):
Alexander Agung Santoso Gunawan
Reza A Prasetyo

Many real-world applications of face recognition, such as social networking and environmental surveillance, require good performance in uncontrolled environments, yet much face recognition research is done in controlled situations. Compared with controlled environments, face recognition in uncontrolled environments involves more variation, for example in pose, light intensity, and expression, and is therefore more challenging. In this research, we discuss handling pose variations in face recognition. We address the representation issue using multi-pose face detection based on yaw-angle movement of the head, as an extension of existing frontal face recognition using Principal Component Analysis (PCA). The matching issue is then solved using Euclidean distance. This combination is known as the Eigenfaces method. The experiment is run with different yaw angles and different threshold values to obtain optimal results. The experimental results show that: (i) the more pose variations of face images used as training data, the better the recognition results, at the cost of increased processing time; and (ii) the lower the threshold value, the harder it is to recognize a face image, but the higher the accuracy.
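The PCA-plus-Euclidean-distance pipeline (Eigenfaces) described above can be sketched as follows; the toy gallery, component count, and threshold value are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train_eigenfaces(train_imgs, n_components):
    """PCA on flattened face images; returns mean, eigenfaces, projections."""
    X = train_imgs.reshape(len(train_imgs), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal axes (eigenfaces)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components]
    return mean, W, (X - mean) @ W.T

def match(img, mean, W, train_proj, labels, threshold):
    """Nearest neighbor in eigenface space; reject as unknown above threshold."""
    p = (img.ravel().astype(float) - mean) @ W.T
    d = np.linalg.norm(train_proj - p, axis=1)
    i = int(d.argmin())
    return labels[i] if d[i] <= threshold else None

# Toy gallery: two identities, three noisy 8x8 samples each
rng = np.random.default_rng(2)
base = rng.normal(size=(2, 8, 8))
train = np.stack([base[i % 2] + 0.05 * rng.normal(size=(8, 8)) for i in range(6)])
labels = [i % 2 for i in range(6)]
mean, W, proj = train_eigenfaces(train, n_components=3)
probe = base[0] + 0.05 * rng.normal(size=(8, 8))
```

The threshold trade-off reported in the abstract is visible here: a very low threshold rejects even correct matches, while a generous one accepts the nearest gallery identity.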


2018
Author(s):
Daisuke Matsuyoshi
Katsumi Watanabe

Abstract The 20-item prosopagnosia index (PI20) is a self-report measure of face recognition ability intended to assess the risk of developmental prosopagnosia (DP), developed by Shah, Gaule, Sowden, Bird, and Cook (2015). Although they validated the PI20 in several ways, and it may serve as a quick and cost-effective measure for estimating DP risk (Livingston & Shah, in press; Shah et al., 2015), they did not formally evaluate its validity against a pre-existing alternative questionnaire (Kennerknecht et al., 2008), even though they criticized the weak relationship of that questionnaire to actual behavioral face recognition performance. We therefore administered both questionnaires to a large population (N = 855) and found a very strong correlation (r = 0.82 [95% confidence interval: 0.80, 0.84]), a principal component that accounted for more than 90% of the variance, and comparable reliability between the questionnaires. These results indicate unidimensionality and equivalence between the two questionnaires, or at least a very strong common latent factor underlying them. The PI20 is not superior to the pre-existing questionnaire; the two questionnaires measure essentially the same trait. This intrinsic equivalence necessitates a revision of the view that the PI20 overcomes the weaknesses of the pre-existing questionnaire. Because both questionnaires contained unreliable items, we suggest that, instead of using either questionnaire alone, selecting a set of highly reliable items may offer a more robust way to capture face recognition ability.
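The two reported statistics are tightly linked: for two standardized scores, the first principal component's variance share equals (1 + r)/2, so r = 0.82 implies about 91%, consistent with the abstract's ">90%". A sketch on synthetic respondents (not the study's data; the noise level is an assumption chosen to yield a similar r):

```python
import numpy as np

def questionnaire_overlap(a, b):
    """Pearson r between two questionnaire totals and the variance share
    of the first principal component of the standardized score pair."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    r = float(np.corrcoef(a, b)[0, 1])
    # PCA of the 2x2 covariance of the standardized pair
    vals = np.linalg.eigvalsh(np.cov(np.stack([a, b])))
    return r, float(vals[-1] / vals.sum())

# Synthetic respondents: one latent trait drives both questionnaires
rng = np.random.default_rng(3)
trait = rng.normal(size=855)
score_a = trait + 0.45 * rng.normal(size=855)   # hypothetical PI20 totals
score_b = trait + 0.45 * rng.normal(size=855)   # hypothetical alternative
r, pc1_share = questionnaire_overlap(score_a, score_b)
```

With equal-variance standardized scores the 2x2 covariance has eigenvalues proportional to 1 ± r, which is why the PC1 share is exactly (1 + r)/2.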


2020
pp. 1-11
Author(s):
Mayamin Hamid Raha
Tonmoay Deb
Mahieyin Rahmun
Tim Chen

Face recognition is a widely used image analysis application in which dimensionality reduction is an essential requirement. The curse of dimensionality arises because, as dimensionality increases, sample density decreases exponentially. Dimensionality reduction lowers the dimensionality of the feature space by extracting a set of principal features. The purpose of this manuscript is to present a comparative study of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), two highly popular appearance-based face recognition projection methods. PCA creates a low-dimensional data representation that captures as much of the data variance as possible, while LDA finds the vectors that best discriminate between classes in the underlying space. The main idea of PCA is to transform the high-dimensional input space into a feature space that preserves the maximum variance. Traditional LDA obtains its features by maximizing between-class separation while minimizing within-class scatter.
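The PCA/LDA contrast described above can be demonstrated on toy two-class data in which the most variable direction is not the most discriminative one (the data and dimensions are illustrative):

```python
import numpy as np

def pca_direction(X):
    """First principal axis: the direction of maximum variance."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[0]

def fisher_direction(X, y):
    """Two-class Fisher/LDA direction: Sw^{-1} (mu1 - mu0)."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Classes separated along axis 0, but most variance lies along axis 1
rng = np.random.default_rng(4)
c0 = np.column_stack([rng.normal(0.0, 0.2, 200), rng.normal(0.0, 3.0, 200)])
c1 = np.column_stack([rng.normal(1.0, 0.2, 200), rng.normal(0.0, 3.0, 200)])
X, y = np.vstack([c0, c1]), np.array([0] * 200 + [1] * 200)
v_pca = pca_direction(X)        # dominated by the high-variance axis 1
v_lda = fisher_direction(X, y)  # picks the discriminative axis 0
```

PCA, being unsupervised, latches onto the high-variance but class-irrelevant axis; LDA, using the labels, recovers the separating axis. This is precisely the difference the comparative study examines.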


2019
Vol. 35 (05)
pp. 525-533
Author(s):
Evrim Gülbetekin
Seda Bayraktar
Özlenen Özkan
Hilmi Uysal
Ömer Özkan

Abstract The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery for a recent wound, and two control subjects. In Experiment 1, the authors showed participants original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether the participants recognized them. In Experiment 3, they showed original objects and morphed forms of those objects and asked for similarity ratings. In Experiment 4, they showed old, new, and implicit objects and asked whether the participants recognized them. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, FTP2's face discrimination performance and FTP1's face recognition performance were poorer than those of the controls. The authors therefore concluded that the structure of the face might affect face processing.


2021
Vol. 11 (14)
pp. 6387
Author(s):
Li Xu
Jianzhong Hu

Active infrared thermography (AIRT) is an important defect detection and evaluation method in non-destructive testing, because it promptly provides visual information and its results can be used for quantitative defect research. At present, the quantitative evaluation of defects is an urgent problem in this field. In this work, a defect depth recognition method based on gated recurrent unit (GRU) networks is proposed to address insufficient accuracy in defect depth recognition. AIRT is applied to obtain raw thermal sequences of the surface temperature field of the defect specimen. Before training the GRU model, principal component analysis (PCA) is used to reduce the dimensionality and eliminate correlations in the raw datasets. The GRU model is then employed to automatically recognize the depth of the defect. The defect depth recognition performance of the proposed method is evaluated in an experiment on polymethyl methacrylate (PMMA) with flat-bottom holes. The results indicate that the PCA-processed datasets outperform the raw temperature datasets in model learning when assessing defect depth characteristics. A comparison with a back-propagation (BP) network shows that the proposed method performs better in defect depth recognition.
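The PCA preprocessing step described above might be sketched as follows; the thermal sequence is synthetic, and the reduced features would then feed a recurrent model such as `torch.nn.GRU`, which is not instantiated here:

```python
import numpy as np

def pca_reduce_sequence(frames, n_components):
    """Reduce each thermal frame to a few decorrelated PCA features.

    frames: (t, h, w) raw surface-temperature maps over time.
    Returns a (t, n_components) sequence, one feature vector per frame,
    suitable as the input sequence of a GRU.
    """
    X = frames.reshape(len(frames), -1).astype(float)   # (t, h*w)
    Xc = X - X.mean(axis=0)
    # Principal axes from the SVD of the centered frame matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic stand-in for an AIRT sequence: 50 frames of 8x8 temperatures
rng = np.random.default_rng(5)
seq = rng.normal(size=(50, 8, 8))
Z = pca_reduce_sequence(seq, n_components=5)
```

The projection both shrinks the per-frame dimensionality and yields mutually uncorrelated features, matching the abstract's stated purpose of PCA before GRU training.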

