face features
Recently Published Documents


TOTAL DOCUMENTS

212
(FIVE YEARS 82)

H-INDEX

20
(FIVE YEARS 3)

2022 ◽  
Vol 2161 (1) ◽  
pp. 012008
Author(s):  
Roy Ashish ◽  
B G Prasad

Abstract The aging process creates significant changes in the appearance of people’s faces. Compared with other sources of variation in face imaging, aging-related variation has distinct properties: it is unique to each person, occurs gradually, and is significantly influenced by other characteristics including health, gender, and lifestyle. The proposed work uses Generative Adversarial Networks (GANs) to address these concerns. A GAN consists of a generator and a discriminator network: the generator produces images that the discriminator analyses to determine whether they are real or fake. This paper presents a Temporal Face Feature Progressive framework based on Cycle GAN, which preserves the initial appearance and identity in the aged facial structure. Our goal is to transform an image from an initial age category into a targeted age through age progression. We show that our temporal face feature progressive Cycle GAN learns and transfers facial traits from the source group to the target group by training on varied images. The IMDB-WIKI Face dataset was used to obtain the results.
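The adversarial and identity-preserving objectives described above can be sketched numerically. This is a minimal illustration, not the authors' implementation: it assumes a sigmoid-output discriminator trained with binary cross-entropy and a CycleGAN-style reconstruction term; the function names and the weight `lam` are illustrative.

```python
import numpy as np

def bce(p, target):
    # Binary cross-entropy over discriminator scores in (0, 1).
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def gan_losses(d_real, d_fake):
    """Standard non-saturating GAN objectives.
    d_real: discriminator scores on real aged faces;
    d_fake: scores on generator output."""
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    g_loss = bce(d_fake, np.ones_like(d_fake))  # generator tries to fool D
    return d_loss, g_loss

def cycle_loss(x, x_reconstructed, lam=10.0):
    # Cycle-consistency term: mapping to the target age group and back
    # should reconstruct the source face, preserving identity.
    return lam * np.mean(np.abs(x - x_reconstructed))
```

With a confident discriminator (real ≈ 0.9, fake ≈ 0.1), the discriminator loss is small while the generator loss is large, which is what drives the generator to improve; a perfect reconstruction gives a cycle loss of zero.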


Abstract: In this era of digitalization, everything is interlinked and online, and most applications make use of Machine Learning (ML), Artificial Intelligence (AI), IoT, Data Science, and similar technologies. Building on these, an automated attendance system can be developed, so this project proposes a “Digital Attendance System” based on a face recognition technique. Face recognition is achieved by storing enrolment information in a database and using an algorithm to extract face features; the captured image of a subject is then compared against the database entries, and a match marks that person's attendance. Keywords: Attendance system, Face Recognition Technique, dlib library, High resolution camera.
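The compare-against-database step can be sketched as a nearest-neighbour search over face encodings. This is an illustrative sketch, not the paper's code: it assumes faces have already been encoded as 128-d vectors (as dlib's face encoder produces), that a Euclidean distance under roughly 0.6 indicates the same person, and that the database is a plain dict; the threshold and names are assumptions.

```python
import numpy as np

THRESHOLD = 0.6  # commonly used dlib distance cutoff for "same person"

def mark_attendance(probe, database, threshold=THRESHOLD):
    """Return the name of the closest enrolled student whose encoding
    is within `threshold` of the probe encoding, or None if no match."""
    best_name, best_dist = None, threshold
    for name, enc in database.items():
        dist = np.linalg.norm(probe - enc)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

Returning None for an unrecognized face matters in practice: an attendance system should reject strangers rather than assign them to the nearest enrolled student.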


2021 ◽  
Author(s):  
Sarah McCrackin ◽  
Jelena Ristic ◽  
Florence Mayrand ◽  
Francesca Capozzi

With the widespread adoption of masks, there is a need to understand how facial obstruction affects emotion recognition. We asked 120 participants to identify emotions from faces with and without masks. We also examined whether recognition performance was related to autistic traits and personality. Masks impaired recognition most for expressions with diagnostic lower face features and least for those with diagnostic upper face features. Persons with higher autistic traits were worse at identifying unmasked expressions, while persons with lower extraversion and higher agreeableness were better at recognizing masked expressions. These results show that different features play different roles in emotion recognition and suggest that obscuring features affects social communication differently as a function of autistic traits and personality.


Author(s):  
Yuxiang Long

Face recognition is difficult owing to the high dimensionality of face image features and the scarcity of training samples. First, to improve feature extraction, we construct a two-stage hierarchical convolutional neural network (CNN). The front-end network uses a relatively simple model to perform rough feature extraction on input images and produce multiple candidate face windows. The back-end network uses a relatively complex model to select the best detection window via non-maximum suppression and regress the face size and position. Then, to fully extract the face features within the optimal window, this paper proposes a face recognition algorithm based on connected intermediate layers of a deep CNN. Building on AlexNet, the front, intermediate, and final convolution layers are combined through deep connections, and the feature vector describing the face image is obtained through the pooling and fully connected layers. Finally, an auxiliary-classifier training method is used to ensure the effectiveness of the intermediate-layer features. Experimental results on an open face database show that the recognition accuracy of the proposed algorithm exceeds that of the other face recognition algorithms compared in this paper.
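The non-maximum suppression step used to select the best detection window is a standard algorithm and can be sketched directly (this is the generic procedure, not the authors' code; boxes are assumed to be `(x1, y1, x2, y2)` corner tuples):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedily keep the highest-scoring candidate window and drop
    any remaining window that overlaps it above `iou_thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```

Applied to the front-end network's candidate windows, this collapses clusters of overlapping face detections to the single best-scoring window per face, which the back-end network then refines.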


2021 ◽  
Author(s):  
Alice Gomez ◽  
Guillaume Lio ◽  
Manuela Costa ◽  
Angela Sirigu ◽  
Caroline Demily

Abstract Background: Williams syndrome (WS) and Autism Spectrum Disorders (ASD) are psychiatric conditions associated with atypical but opposite face-to-face interaction patterns: WS patients overly stare at others, while ASD individuals avoid eye contact. Whether these behaviors result from dissociable visual processes within the occipito-temporal pathways is unknown. Using high-density electroencephalography, multivariate signal processing algorithms and a protocol designed to identify and extract evoked activities sensitive to facial cues, we investigated how WS (N=14), ASD (N=14) and neurotypical subjects (N=14) decode the information content of a face stimulus. Results: We found two neural components in neurotypical participants, both strongest when the eye region was projected onto the subject's fovea, simulating a direct eye-contact situation, and weakest over more distant regions, reaching a minimum when the focused region was outside the stimulus face. The first component peaks at 170ms, an early signal known to be implicated in processing low-level face features. The second appears later, 260ms post-stimulus onset, and is implicated in decoding salient social face cues. Remarkably, the two components were differentially impaired and preserved in WS and ASD. In WS, the 170ms signal could only be weakly decoded from our facial-feature regressor, probably reflecting their relatively poor ability to process face morphology, while the late 260ms component was highly significant. The reverse pattern was observed in ASD participants, who showed neurotypical-like early 170ms evoked activity but an impaired late 260ms evoked signal. Conclusions: Our study reveals a dissociation between WS and ASD patients and points to different neural origins for their social impairments.


2021 ◽  
Author(s):  
◽  
Gates Henderson

<p>Face perception depends on a network of brain areas that selectively respond to faces over non-face stimuli. These face-selective areas are involved in different aspects of face perception, but what specific process is implemented in a particular region remains little understood. A candidate process is holistic face processing, namely the integration of visual information across the whole of an upright face. In this thesis, I report two experiments that examine whether the occipital face area (OFA), a face-selective region in the inferior occipital gyrus, performs holistic processing for categorising a stimulus as a face. Both experiments were conducted using online, repetitive transcranial magnetic stimulation (TMS) to disrupt activity in the brain while participants performed face perception tasks. Experiment 1 was a localiser in which participants completed two face identification tasks while receiving TMS at OFA or vertex. Participants’ accuracy decreased for one of the tasks as a result of OFA but not vertex stimulation. This result confirms that OFA could be localised and its activity disrupted. Experiment 2 was a test of holistic processing in which participants categorised ambiguous two-tone images as faces or non-faces while TMS was delivered to OFA or vertex. Participants’ accuracy and response times were unchanged as a result of either stimulation. This result suggests that the OFA is not engaged in holistic processing for categorising a stimulus as a face. Overall, the current results are more consistent with previous studies suggesting that OFA is involved in processing of local face features/details rather than the whole face.</p>




Author(s):  
Lingqiu Zeng ◽  
Yang Wang ◽  
Qingwen Han ◽  
Kun Zhou ◽  
Lei Ye ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2539
Author(s):  
Hongyan Zou ◽  
Xinyan Sun

Face recognition is one of the essential applications in computer vision, yet current face recognition technology is mainly based on 2D images without depth information, which are easily affected by illumination and facial expressions. This paper presents a fast face recognition algorithm that combines 3D point cloud face data with deep learning, focusing on key parts of the face with an attention mechanism and reducing the coding space with a sparse loss function. First, an attention mechanism-based convolutional neural network was constructed to extract facial features while limiting interference from expressions and illumination. Second, a Siamese network was trained with a sparse loss function to minimize the face coding space and enhance the separability of the face features. On the FRGC face dataset, the experimental results show that the proposed method achieves a recognition accuracy of 95.33%.
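A Siamese objective of the kind described, a pairwise term plus a sparsity penalty on the codes, can be sketched as follows. This is a generic contrastive-loss sketch under assumed hyperparameters (`margin`, `lam`), not the paper's actual loss:

```python
import numpy as np

def contrastive_sparse_loss(e1, e2, same, margin=1.0, lam=1e-3):
    """Siamese objective on a pair of face codes e1, e2:
    pull codes of the same face together, push different faces
    at least `margin` apart, and add an L1 penalty that keeps
    the face coding space sparse."""
    d = np.linalg.norm(e1 - e2)
    if same:
        pair = d ** 2                      # same identity: minimize distance
    else:
        pair = max(0.0, margin - d) ** 2   # different: penalize only inside margin
    sparsity = lam * (np.sum(np.abs(e1)) + np.sum(np.abs(e2)))
    return pair + sparsity
```

The L1 term drives many code dimensions toward zero, which is one plausible reading of "minimizing the face coding space" while the pairwise term preserves separability between identities.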


2021 ◽  
Vol 21 (9) ◽  
pp. 1951
Author(s):  
Maximilian Davide Broda ◽  
Benjamin de Haas
Keyword(s):  
