Human face modeling for facial image synthesis using optimization-based adaptation

Author(s): Yu Zhang ◽ Terence Sim ◽ Chew Lim Tan

2014 ◽ Vol 2014 ◽ pp. 1-10
Author(s): Tanasai Sucontphunt

This paper describes a practical technique for 3D artistic face modeling in which a human identity is inserted into a 3D artistic face. The approach automatically extracts the identity from a 3D human face model and transfers it to a 3D artistic face model in a controllable manner. Its core idea is to construct a face geometry space and a face texture space from a pre-collected 3D face dataset; these spaces are then used to extract and blend the face models according to their facial identities and styles. The technique enables a novice user to generate varied artistic faces interactively with a slider control, and it runs in real time on an off-the-shelf computer without GPU acceleration. It can be applied broadly in 3D artistic face modeling, for example to the rapid creation of a cartoon crowd with distinct characters.
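
The blending described above can be pictured with a linear (PCA-style) face space. The following is a minimal sketch under that assumption; all function names are hypothetical, and the paper's exact space construction is not reproduced here.

```python
# Minimal sketch of identity blending in a PCA-style face space.
# Hypothetical names; not the paper's actual construction.
import numpy as np

def build_face_space(meshes: np.ndarray, n_components: int = 50):
    """Build a linear face space from flattened face meshes (one row per face)."""
    mean = meshes.mean(axis=0)
    # Principal directions of the centered dataset via SVD.
    _, _, vt = np.linalg.svd(meshes - mean, full_matrices=False)
    return mean, vt[:n_components]          # mean face, basis

def project(face: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Coefficients of a face in the face space."""
    return basis @ (face - mean)

def blend(human: np.ndarray, artistic: np.ndarray,
          mean: np.ndarray, basis: np.ndarray, slider: float) -> np.ndarray:
    """Interpolate identity coefficients; slider=0 keeps the artistic face,
    slider=1 transfers the full human identity."""
    c = (1.0 - slider) * project(artistic, mean, basis) \
        + slider * project(human, mean, basis)
    return mean + basis.T @ c               # reconstruct blended geometry
```

In a UI, the interactive control would simply map to the slider argument in [0, 1].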


2020 ◽ Vol 11 (1) ◽ pp. 17-26
Author(s): Adel Alti

Existing face emotion recognition methods are limited in both recognition accuracy and execution time, so efficient techniques for improving this performance are highly desirable. In this article, the authors present an automatic facial image retrieval method that combines the advantages of color normalization by texture estimators with gradient-vector features. Starting from a query face image, an efficient hybrid feature-extraction algorithm for human face retrieval yields very promising results.
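
As a concrete illustration of a hybrid color-plus-gradient descriptor for retrieval, here is a minimal numpy sketch; the estimators and parameters below are assumptions for illustration, not the article's actual algorithm.

```python
# Hypothetical hybrid (color + gradient) retrieval sketch, not the
# article's method.
import numpy as np

def normalize_colors(img: np.ndarray) -> np.ndarray:
    """Per-channel color normalization (zero mean, unit variance)."""
    img = img.astype(np.float64)
    return (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-8)

def hybrid_features(img: np.ndarray) -> np.ndarray:
    """Concatenate a color histogram with a gradient-orientation histogram."""
    norm = normalize_colors(img)
    color_hist, _ = np.histogram(norm, bins=32, range=(-3, 3))
    gray = norm.mean(axis=2)
    gy, gx = np.gradient(gray)
    angles = np.arctan2(gy, gx)
    # Orientation histogram weighted by gradient magnitude.
    grad_hist, _ = np.histogram(angles, bins=18, range=(-np.pi, np.pi),
                                weights=np.hypot(gx, gy))
    feats = np.concatenate([color_hist, grad_hist]).astype(np.float64)
    return feats / (np.linalg.norm(feats) + 1e-8)

def retrieve(query: np.ndarray, gallery: list) -> int:
    """Return the index of the gallery face closest to the query."""
    q = hybrid_features(query)
    return int(np.argmin([np.linalg.norm(q - hybrid_features(g))
                          for g in gallery]))
```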


2013 ◽ Vol 273 ◽ pp. 796-799
Author(s): Yong Sheng Wang

This paper presents a novel approach for quickly modeling a 3D human face from multiple-view 2D images. The proposed method comprises three steps: 1) face recognition from 2D images, 2) conversion of 2D images to 3D images, and 3) modeling of the 3D human face. Visual features in 3D are described by Point Signatures, while visual features in 2D are represented by Gabor filter responses. The 3D model is then obtained by combining the multiple-view 2D images through computation of the projection and translation vectors. Experimental results show that the method models 3D human faces with high accuracy and efficiency.
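
For the 2D side, Gabor filter responses can be computed with a small filter bank. The sketch below is illustrative only; the kernel parameters are assumptions rather than the paper's settings.

```python
# Illustrative Gabor filter bank for 2D face features; parameters are
# assumptions, not the paper's settings.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real part of a Gabor kernel oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def gabor_responses(gray: np.ndarray, n_orientations: int = 8) -> np.ndarray:
    """Mean response magnitude per orientation: a compact face descriptor."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        resp = convolve2d(gray, kern, mode="same")
        feats.append(np.abs(resp).mean())
    return np.asarray(feats)
```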


2020 ◽ Vol 10 (1)
Author(s): Osman Boyaci ◽ Erchin Serpedin ◽ Mitchell A. Stotland

What is a normal face? A fundamental task for the facial reconstructive surgeon is to answer that question as it pertains to any given individual. Accordingly, it would be important to be able to place the facial appearance of a patient with congenital or acquired deformity numerically along their own continuum of normality, and to measure any surgical changes against such a personalized benchmark. This has not previously been possible. We have solved this problem by designing a computerized model that produces realistic, normalized versions of any given facial image, and objectively measures the perceptual distance between the raw and normalized facial image pair. The model is able to faithfully predict human scoring of facial normality. We believe this work represents a paradigm shift in the assessment of the human face, holding great promise for development as an objective tool for surgical planning, patient education, and as a means for clinical outcome measurement.
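
A perceptual distance between a raw and a normalized image pair can be approximated by comparing deep feature embeddings. The sketch below uses a pretrained VGG16 from torchvision purely as a stand-in; the authors' actual normalization model and metric are not reproduced here.

```python
# Stand-in sketch of a raw-vs-normalized perceptual distance using
# pretrained VGG16 features; not the authors' model.
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
prep = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                  T.Normalize(mean=[0.485, 0.456, 0.406],
                              std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def perceptual_distance(raw_img, normalized_img) -> float:
    """L2 distance between deep feature maps of a PIL image pair."""
    f_raw = vgg(prep(raw_img).unsqueeze(0))
    f_norm = vgg(prep(normalized_img).unsqueeze(0))
    return torch.norm(f_raw - f_norm).item()
```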


Author(s): Guozhu Peng ◽ Shangfei Wang

Current work on facial action unit (AU) recognition typically requires fully AU-labeled training samples. To reduce the reliance on time-consuming manual AU annotation, we propose a novel semi-supervised AU recognition method that leverages two kinds of readily available auxiliary information. The first is the dependencies between AUs and expressions, as well as the dependencies among AUs; these arise from facial anatomy and are therefore embedded in all facial images, independent of their AU annotation status. The second is facial image synthesis given AUs, the dual task of AU recognition from facial images, which therefore has intrinsic probabilistic connections with AU recognition regardless of AU annotations. Specifically, we propose a dual semi-supervised generative adversarial network for AU recognition from partially AU-labeled and fully expression-labeled facial images. The proposed network consists of an AU classifier C, an image generator G, and a discriminator D. In addition to minimizing the supervised losses of the AU classifier and the face generator on labeled training data, we exploit the probabilistic duality between the tasks using adversarial learning, forcing the distribution of face-AU-expression tuples generated by the AU classifier and the face generator to converge, over all training data, to the ground-truth distribution observed in the labeled data. This joint distribution also captures the inherent AU dependencies. Furthermore, we reconstruct the facial image by feeding the output of the AU classifier into the face generator, and recreate AU labels by feeding the output of the face generator into the AU classifier; minimizing these reconstruction losses for all training data exploits the informative feedback provided by the dual tasks. Within-database and cross-database experiments on three benchmark databases demonstrate the superiority of our method in both AU recognition and face synthesis compared with state-of-the-art works.
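
To make the loss composition concrete, here is a schematic PyTorch sketch under simplifying assumptions: the expression branch is omitted for brevity, and C, G, D stand for arbitrary classifier, generator, and discriminator modules. This is an illustration of the structure described above, not the paper's implementation.

```python
# Schematic of the dual semi-supervised loss terms (expression branch
# omitted); C, G, D are placeholder modules, not the paper's networks.
import torch
import torch.nn.functional as F

def dual_gan_losses(C, G, D, x_lab, au_lab, x_unlab):
    """Loss terms for one step: labeled faces x_lab with AU labels au_lab,
    plus unlabeled faces x_unlab."""
    # Supervised losses on labeled data: AU classification and
    # AU-conditioned face synthesis.
    loss_cls = F.binary_cross_entropy_with_logits(C(x_lab), au_lab)
    loss_gen = F.l1_loss(G(au_lab), x_lab)

    # Adversarial duality: (face, AU) tuples produced by the classifier on
    # unlabeled faces and by the generator should match the labeled joint
    # distribution that the discriminator treats as "real".
    real = D(x_lab, au_lab)
    fake_c = D(x_unlab, torch.sigmoid(C(x_unlab)))
    fake_g = D(G(au_lab), au_lab)
    loss_adv = (
        F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
        + F.binary_cross_entropy_with_logits(fake_c, torch.zeros_like(fake_c))
        + F.binary_cross_entropy_with_logits(fake_g, torch.zeros_like(fake_g)))

    # Reconstruction duality on all data: face -> AUs -> face, and
    # AUs -> face -> AUs.
    loss_rec_face = F.l1_loss(G(torch.sigmoid(C(x_unlab))), x_unlab)
    loss_rec_au = F.binary_cross_entropy_with_logits(C(G(au_lab)), au_lab)

    return loss_cls, loss_gen, loss_adv, loss_rec_face, loss_rec_au
```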

