Encoding of facial features by single neurons in the human amygdala and hippocampus

2020 ◽  
Author(s):  
Runnan Cao ◽  
Xin Li ◽  
Nicholas J. Brandmeir ◽  
Shuo Wang

Abstract
The human amygdala and hippocampus play a key role in face processing. However, it remains unknown how neurons in the human amygdala and hippocampus encode facial feature information and direct eye movements to salient facial features such as the eyes and mouth. In this study, we identified a population of neurons that differentiated fixations on the eyes vs. the mouth. The response of these feature-selective neurons did not depend on fixation order, and eye-preferring and mouth-preferring neurons were not of different neuronal types. We found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of facial feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: they encoded the salient region for face recognition, and they encoded perceived social trait judgments. Together, we revealed and characterized a new class of neurons that encode facial features. These neurons may play an important role in social perception and recognition of faces.
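
As an illustration of how such feature-selective neurons might be screened, the hedged sketch below compares per-fixation firing rates on the eyes versus the mouth with a nonparametric test. The test choice, response window, and simulated rates are assumptions made for illustration, not the recording or analysis pipeline reported in the abstract.

# Hypothetical screening sketch (illustrative assumptions, not the authors' pipeline):
# compare per-fixation firing rates on the eyes vs. the mouth for one neuron.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Simulated firing rates (Hz) in a window around each fixation, labelled by the
# fixated facial feature.
rates_eyes = rng.normal(8.0, 2.0, size=120)   # fixations landing on the eyes
rates_mouth = rng.normal(5.0, 2.0, size=90)   # fixations landing on the mouth

def is_feature_selective(rates_a, rates_b, alpha=0.05):
    """Return True if the two fixation conditions differ significantly."""
    _, p = mannwhitneyu(rates_a, rates_b, alternative="two-sided")
    return p < alpha

if is_feature_selective(rates_eyes, rates_mouth):
    preference = "eye-preferring" if rates_eyes.mean() > rates_mouth.mean() else "mouth-preferring"
    print(f"feature-selective neuron ({preference})")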

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Runnan Cao ◽  
Xin Li ◽  
Nicholas J. Brandmeir ◽  
Shuo Wang

Abstract
Faces are salient social stimuli that attract a stereotypical pattern of eye movement. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking when participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of face feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: 1) they encoded the salient region for face recognition, and 2) they were related to perceived social trait judgments. Together, our results link eye movement with neural face processing and provide important mechanistic insights for human face perception.
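
The population decoding mentioned above can be illustrated with a minimal cross-validated linear decoder over a pseudo-population response matrix. The simulated data, classifier choice, and cross-validation scheme below are assumptions for illustration, not the paper's analysis.

# Minimal population-decoding sketch: decode the fixated feature (eyes vs. mouth)
# from the firing rates of a recorded population with a cross-validated classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_fixations, n_neurons = 200, 40

# Simulated population response matrix: one row per fixation, one column per neuron.
X = rng.normal(size=(n_fixations, n_neurons))
y = rng.integers(0, 2, size=n_fixations)   # 0 = eyes, 1 = mouth
X[y == 1, :10] += 0.8                      # give some neurons a mouth preference

decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")  # chance level is ~0.5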


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 486
Author(s):  
Chunxue Wu ◽  
Bobo Ju ◽  
Yan Wu ◽  
Neal N. Xiong ◽  
Sheng Zhang

Artificial intelligence technology plays an increasingly important role in human life. For example, distinguishing different people is an essential capability of many intelligent systems. One way to achieve this is to perceive and recognize people through optical imaging of the face, i.e., face recognition technology. After decades of research and development, and especially with the emergence of deep learning in recent years, face recognition has made great progress and found more and more applications in security, finance, education, social security, and other fields, becoming one of the most successful branches of computer vision. With the wide application of biometrics, bio-encryption technology has emerged. To address the problems of classical hash algorithms and of face hashing algorithms based on Multiscale Block Local Binary Pattern (MB-LBP) features, this paper proposes a method based on Generative Adversarial Networks (GAN) to encrypt face features. The work uses Wasserstein Generative Adversarial Network Encryption (WGAN-E) to encrypt facial features. Because the encryption process is an irreversible one-way mapping, it protects facial features well. Experimental results show that, compared with traditional face hashing algorithms, the proposed face feature encryption algorithm offers better confidentiality.
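
The WGAN-E details are not given in the abstract. As a rough illustration of the one-way, key-dependent mapping such a scheme aims at, the sketch below uses a key-seeded random projection with sign quantization as a simple stand-in for the trained generator; the feature dimensionality and function names are placeholders.

# Hedged sketch of an irreversible, key-conditioned mapping of face features.
# The paper trains a WGAN generator (WGAN-E) for this step; a key-seeded random
# projection with binarization is used here only as a simple stand-in.
import numpy as np

def encrypt_features(features: np.ndarray, key: int, out_bits: int = 256) -> np.ndarray:
    """Map a real-valued face feature vector (e.g., an MB-LBP histogram) to a binary code.

    The projection matrix is derived from `key`; without the key the mapping is hard
    to invert, and the sign quantization makes it many-to-one (one-way).
    """
    rng = np.random.default_rng(key)
    projection = rng.standard_normal((out_bits, features.size))
    return (projection @ features > 0).astype(np.uint8)

mb_lbp_histogram = np.random.rand(944)   # placeholder MB-LBP feature vector
code = encrypt_features(mb_lbp_histogram, key=42)
print(code[:16])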


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divided the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face-contour landmarks, and the region of the outer face. While keeping the facial-feature region unchanged, we used two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combined deformable strokes with optical-flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrated that our method could not only effectively preserve the small and distinct facial features, but also follow the underlying motion coherently.
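
Assuming the 36 face-contour landmarks and the facial-feature landmarks are already predicted by the extended Mask R-CNN branch, the three-region split might be sketched as follows; the function and array names are illustrative placeholders, not the authors' code.

# Hypothetical sketch of the three-region split used for stylization.
import numpy as np
import cv2

def split_regions(frame: np.ndarray, contour_pts: np.ndarray, feature_pts: np.ndarray):
    """Return boolean masks for (facial features, inner face, outer face).

    contour_pts: (36, 2) face-contour landmarks; feature_pts: polygon around eyes/nose/mouth.
    """
    h, w = frame.shape[:2]

    inner = np.zeros((h, w), np.uint8)
    cv2.fillPoly(inner, [contour_pts.astype(np.int32)], 255)     # inside the 36-point contour

    features = np.zeros((h, w), np.uint8)
    cv2.fillPoly(features, [feature_pts.astype(np.int32)], 255)  # facial-feature polygon

    feature_mask = features > 0
    inner_mask = (inner > 0) & ~feature_mask   # inner face minus the feature region
    outer_mask = ~(inner > 0)                  # everything outside the face contour
    return feature_mask, inner_mask, outer_mask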


Author(s):  
CHING-WEN CHEN ◽  
CHUNG-LIN HUANG

This paper presents a face recognition system that can effectively identify an unknown individual from front-view facial features. In front-view facial feature extraction, we capture the contours of the eyes and mouth with a deformable template model, because their shapes can be described analytically. However, the shapes of the eyebrows, nostrils, and face outline are difficult to model with a deformable template, so we extract them using the active contour model (snake). After the contours of all facial features have been captured, we calculate effective feature values from the extracted contours and construct a database for classifying unknown identities. In the database generation phase, 12 models are photographed and a feature vector is calculated for each portrait. In the identification phase, if any one of these 12 persons has his picture taken again, the system can recognize his identity.
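
A hedged sketch of the two ingredients described above, snake-based contour refinement and nearest-neighbour identification against the 12 stored feature vectors, is given below using scikit-image's active contour implementation as a stand-in; parameters and names are illustrative, not the authors' values.

# Illustrative sketch (not the authors' code): refine a facial contour with a snake,
# then identify a portrait by nearest neighbour over the stored feature vectors.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def fit_contour(gray_image: np.ndarray, init_snake: np.ndarray) -> np.ndarray:
    """Refine an initial (row, col) contour toward nearby image edges."""
    smoothed = gaussian(gray_image, sigma=3, preserve_range=False)
    return active_contour(smoothed, init_snake, alpha=0.015, beta=10, gamma=0.001)

def identify(feature_vector: np.ndarray, database: np.ndarray) -> int:
    """Return the index (0..11) of the closest of the 12 stored feature vectors."""
    distances = np.linalg.norm(database - feature_vector, axis=1)
    return int(np.argmin(distances))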


Author(s):  
CHIN-CHEN CHANG ◽  
YUAN-HUI YU

This paper proposes an efficient approach to human face detection and exact facial feature location in a head-and-shoulder image. The method searches for an eye-pair candidate as a baseline by exploiting the high intensity contrast between the iris and the sclera. To locate the other facial features, the algorithm uses geometric knowledge of the human face anchored on the obtained eye-pair candidate. The human face is finally verified with these located facial features. Owing to the Prune-and-Search and simple filtering techniques applied, the proposed method achieves very promising face detection and facial feature location performance.
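
As a rough illustration of the eye-pair search, the sketch below thresholds dark, iris-like blobs and keeps roughly horizontal, well-separated pairs; all thresholds are illustrative assumptions rather than the paper's values, and the geometric verification of the remaining features is omitted.

# Hedged sketch of the eye-pair candidate search based on iris/sclera contrast.
import numpy as np
import cv2

def eye_pair_candidates(gray: np.ndarray):
    dark = (gray < 60).astype(np.uint8)                       # candidate iris pixels
    n, _, stats, centroids = cv2.connectedComponentsWithStats(dark)
    blobs = [centroids[i] for i in range(1, n) if 20 < stats[i, cv2.CC_STAT_AREA] < 500]

    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (x1, y1), (x2, y2) = blobs[i], blobs[j]
            dx, dy = abs(x1 - x2), abs(y1 - y2)
            if dx > 10 and dy < 0.25 * dx:                    # roughly horizontal, well separated
                pairs.append((blobs[i], blobs[j]))            # baseline for locating other features
    return pairs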


2004 ◽  
Vol 48 (1) ◽  
pp. 89-99
Author(s):  
Frank Eyetsemitan

This study initially set out to explore the facial features (and their descriptions) of the emotion-expressive behaviors of "peace" and "contentment," but ended up with a third one, "annoyed/irritated." The emotion-expressive behaviors of "peace" and "contentment" had been associated with the faces of deceased persons in a previous study. Pictures of two volunteers taken during a class on relaxation technique were given to 93 respondents, made up of volunteer students from a small midwestern college and volunteer residents of a nursing home (see Appendices A and B). Participants were asked to choose, from a list provided to them, the emotion-expressive behavior (e.g., "peace," "content," "hopeful," "other") that most closely described each of the facial pictures presented. They were also asked to identify and describe the facial feature(s) that most closely matched the emotion-expressive behavior they had chosen. Most of the respondents identified Picture 1 as "peaceful" and Picture 2 as "annoyed/irritated." The eyes and the mouth were the most salient features in describing both emotions. This study has implications for those who identify loved ones before viewing, for individuals who prepare deceased persons for viewing, for embalming educators, and for actors portraying these emotions.


2016 ◽  
Vol 36 (4) ◽  
pp. 395-401 ◽  
Author(s):  
L Gao ◽  
Y Liu ◽  
Y Wen ◽  
W Wu

Long noncoding RNAs (lncRNAs) are a new class of transcripts that are pervasively transcribed in the genome and have been found to play important functional roles in many tissues and organs. LncRNAs can interact with target genes to exert their functions. However, the function and mechanism of lncRNAs in cleft palate (CP) development remain elusive. Here, we investigated the roles of lncRNA H19 and its target gene insulin-like growth factor 2 (IGF2) in CP in mice. All-trans retinoic acid (atRA) is a well-known teratogenic agent that induces CP. After establishing the CP mouse model with atRA in vivo, we found that the rate of CP in the mice was 100%. The tail lengths of fetuses from atRA-treated mice were shorter than those of control mice from embryonic day (E)12 to E17. The expression of lncRNA H19 and IGF2 showed embryonic-age-related differences between atRA-treated and control mice. In addition, lncRNA H19 and IGF2 expression were negatively correlated during the critical period of palate development. These findings suggest that lncRNA H19 mediates atRA-induced CP in mice.


Author(s):  
Yuanyuan Liu ◽  
Jingying Chen ◽  
Cunjie Shan ◽  
Zhiming Su ◽  
Pei Cai

Head pose and facial feature detection are important for face analysis. Although many studies have reported good results in constrained environments, performance can degrade under high variations in facial appearance, pose, illumination, occlusion, expression, and make-up. In this paper, we propose a hierarchical regression approach, Dirichlet-tree enhanced random forests (D-RF), for face analysis in unconstrained environments. D-RF introduces a Dirichlet-tree probabilistic model into the regression RF framework in a hierarchical way to achieve efficiency and robustness. To eliminate the noise of unconstrained environments, facial patches extracted from the face area are classified as positive or negative, and only positive facial patches are used for face analysis. The proposed hierarchical D-RF works in two iterative procedures. First, a coarse head pose is estimated to constrain facial feature detection, and the head pose is then updated based on the estimated facial features. Second, the facial feature localization is refined based on the updated head pose. To further improve efficiency and robustness, multiple probabilistic models are learned in the leaves of the D-RF, i.e., the patch's classification, the head pose probabilities, the locations of facial points, and face deformation models (FDMs). Moreover, our algorithm uses a composite weighted voting method, in which each patch extracted from the image can directly cast a vote for the head pose or for each of the facial features. Extensive experiments have been conducted on different publicly available databases. The experimental results demonstrate that the proposed approach is robust and efficient for head pose and facial feature detection.
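
As a simplified stand-in for the coarse-to-fine D-RF pipeline (not the authors' implementation), the sketch below classifies a coarse head-pose bin first and then regresses facial-feature locations with a pose-specific forest; the features, bins, and landmark layout are placeholder assumptions.

# Simplified coarse-to-fine sketch: stage 1 predicts a coarse head-pose bin from patch
# descriptors, stage 2 regresses facial-feature locations with a pose-specific forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))             # patch descriptors (placeholder features)
pose_bins = rng.integers(0, 3, size=500)   # e.g., left / frontal / right
landmarks = rng.normal(size=(500, 10))     # five (x, y) facial-feature points

pose_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, pose_bins)
landmark_reg = {
    b: RandomForestRegressor(n_estimators=50, random_state=0).fit(X[pose_bins == b], landmarks[pose_bins == b])
    for b in np.unique(pose_bins)
}

def predict(x):
    pose = int(pose_clf.predict(x[None])[0])   # coarse pose constrains the next stage
    return pose, landmark_reg[pose].predict(x[None])[0]

print(predict(X[0]))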

