Swapping Face Images Based on Augmented Facial Landmarks and Its Detection

Author(s):  
Chiranjeevi Sadu ◽  
Pradip K. Das
Author(s):  
Ulrich Scherhag ◽  
Dhanesh Budhrani ◽  
Marta Gomez-Barrero ◽  
Christoph Busch

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Abstract Methods using salient facial patches (SFPs) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and they do not consider head position variations. We contend that SFP can be an effective approach for recognizing facial expressions under different head rotations. Accordingly, we propose an algorithm, called profile salient facial patches (PSFP), to achieve this objective. First, to detect facial landmarks and estimate head poses from profile face images, a tree-structured part model is used for pose-free landmark localization. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks while avoiding their overlap or the transcending of the actual face range. To analyze the PSFP recognition performance, three classical approaches for local feature extraction, specifically the histogram of oriented gradients (HOG), local binary pattern, and Gabor, were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.
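As a rough illustration of the local-feature step described above, the following sketch computes a local binary pattern (LBP) histogram per facial patch and concatenates the histograms into one expression descriptor. This is a minimal numpy sketch under stated assumptions: the 8-neighbour 256-bin LBP variant, the patch size, and the patch coordinates are illustrative choices, not the authors' exact configuration (in PSFP the patches would come from the detected landmarks).

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour local binary pattern histogram for one facial patch.

    Each interior pixel is compared against its 8 neighbours; the resulting
    8-bit code is accumulated into a 256-bin histogram (a common LBP variant,
    assumed here for illustration).
    """
    p = patch.astype(np.int32)
    center = p[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Offsets of the 8 neighbours, each contributing one bit of the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised histogram as the patch descriptor

def patch_descriptor(image, patch_corners, patch_size=16):
    """Concatenate per-patch LBP histograms into one expression descriptor.

    `patch_corners` stands in for the landmark-derived patch locations;
    here they are plain (y, x) top-left corners supplied by the caller.
    """
    feats = [lbp_histogram(image[y:y + patch_size, x:x + patch_size])
             for (y, x) in patch_corners]
    return np.concatenate(feats)

# Toy usage: two 16x16 patches from a random grayscale "face".
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))
desc = patch_descriptor(face, [(10, 10), (30, 30)])
print(desc.shape)  # (512,)
```

The descriptor would then be fed to any standard classifier; HOG or Gabor features could replace `lbp_histogram` without changing the surrounding pipeline.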


2020 ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Abstract Methods using salient facial patches (SFP) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and do not consider variations in head position. In our view, SFP can also be a good choice for recognizing facial expressions under different head rotations, and thus we propose an algorithm for this purpose, called Profile Salient Facial Patches (PSFP). First, in order to detect the facial landmarks from profile face images, the tree-structured part model is used for pose-free landmark localization; this approach excels at detecting facial landmarks and estimating head poses. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks, while avoiding overlap with each other or going beyond the range of the actual face. To analyze the recognition performance of PSFP, three classical approaches for local feature extraction (histogram of oriented gradients (HOG), local binary pattern (LBP), and Gabor) were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.


Author(s):  
Weidong Yin ◽  
Ziwei Liu ◽  
Chen Change Loy

We address the problem of instance-level facial attribute transfer without paired training data, e.g., faithfully transferring the exact mustache from a source face to a target face. This is a more challenging task than conventional semantic-level attribute transfer, which only preserves the generic attribute style instead of instance-level traits. We propose the use of geometry-aware flow, which serves as a well-suited representation for modeling the transformation between instance-level facial attributes. Specifically, we leverage facial landmarks as geometric guidance to learn the differentiable flows automatically, despite the large pose gap between faces. Geometry-aware flow is able to warp the source face attribute into the target face context and generate a warp-and-blend result. To compensate for the potential appearance gap between source and target faces, we propose a hallucination sub-network that produces an appearance residual to further refine the warp-and-blend result. Finally, a cycle-consistency framework consisting of both an attribute transfer module and an attribute removal module is designed, so that abundant unpaired face images can be used as training data. Extensive evaluations validate the capability of our approach in transferring instance-level facial attributes faithfully across large pose and appearance gaps. Thanks to the flow representation, our approach can readily be applied to generate realistic details on high-resolution images.
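The "warp" half of the warp-and-blend step above amounts to sampling the source image through a dense flow field with bilinear interpolation. The numpy sketch below shows that operation in isolation; in the paper the flow is learned by a network, whereas here the flow field is simply supplied by the caller, and the single-channel image and boundary clipping are illustrative assumptions.

```python
import numpy as np

def warp_with_flow(source, flow):
    """Warp `source` (H x W) by a dense flow field (H x W x 2, dy/dx per pixel).

    Each output pixel samples the source at (y + dy, x + dx) with bilinear
    interpolation -- the differentiable "warp" half of a warp-and-blend step.
    Sample coordinates are clipped to the image border.
    """
    H, W = source.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    top = source[y0, x0] * (1 - wx) + source[y0, x1] * wx
    bot = source[y1, x0] * (1 - wx) + source[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Zero flow is the identity warp.
img = np.arange(16.0).reshape(4, 4)
assert np.allclose(warp_with_flow(img, np.zeros((4, 4, 2))), img)
```

Because every step is a smooth function of the flow values, gradients can pass through the warp to whatever module predicts the flow, which is what makes the flow representation learnable end to end.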


2021 ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang



2019 ◽  
Vol 2019 (5) ◽  
pp. 528-1-528-6
Author(s):  
Xinwei Liu ◽  
Christophe Charrier ◽  
Marius Pedersen ◽  
Patrick Bours

Author(s):  
Yuchun Yan ◽  
Hayan Choi ◽  
Hyeon-Jeong Suk

It is difficult to describe facial skin color with a single solid color, as it varies from region to region. In this article, the authors utilized image analysis to identify the facial region most representative of overall skin color. A total of 1052 female images from the Humanae project were selected; for each image, the photographer had generated a solid color as its representative skin color. Using OpenCV-based libraries, such as the EOS library of the Surrey Face Models and DeepFace, 3448 facial landmarks together with gender and race information were detected. For an illustrative and intuitive analysis, the authors then re-defined 27 visually important sub-regions to cluster the landmarks. The colors of the 27 sub-regions of each image were finally derived and recorded in L∗, a∗, and b∗. By estimating the color difference between the representative color and the 27 sub-regions, they discovered that the sub-regions below the lips (lower Labial) and on the central cheeks (upper Buccal) were the most representative regions across four major ethnicity groups. In future studies, the methodology is expected to be applied to more image sources.
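The color-difference comparison described above can be sketched as a Euclidean distance in L∗a∗b∗ space (the CIE76 ΔE formula, assumed here; the article does not state which ΔE variant it uses). The representative color and sub-region means below are made-up illustrative values, not data from the study.

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

# Hypothetical representative colour and three sub-region means (L*, a*, b*).
representative = np.array([65.0, 18.0, 20.0])
sub_regions = np.array([
    [64.0, 17.5, 19.0],   # e.g. lower Labial
    [66.0, 19.0, 21.0],   # e.g. upper Buccal
    [55.0, 14.0, 12.0],   # e.g. a region further from the representative
])
diffs = delta_e76(sub_regions, representative)
best = int(np.argmin(diffs))  # sub-region closest to the representative colour
print(diffs, best)
```

Averaging such ΔE values per sub-region over all images, and ranking the sub-regions by mean difference, reproduces the kind of analysis used to single out the lower Labial and upper Buccal regions.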


2014 ◽  
Vol 1 (3) ◽  
pp. 23-31
Author(s):  
Basava Raju ◽  
K. Y. Rama Devi ◽  
P. V. Kumar ◽  
...  
