IMAGE BASED 3D FACE RECONSTRUCTION: A SURVEY

2009 ◽  
Vol 09 (02) ◽  
pp. 217-250 ◽  
Author(s):  
GEORGIOS STYLIANOU ◽  
ANDREAS LANITIS

The use of 3D data in face image processing applications has received considerable attention during the last few years. A major issue for the implementation of 3D face processing systems is the accurate and real-time acquisition of 3D faces using low-cost equipment. In this paper we provide a survey of 3D reconstruction methods used for generating the 3D appearance of a face using either a single or multiple 2D images captured with ordinary equipment such as digital cameras and camcorders. In this context we discuss various issues pertaining to the general problem of 3D face reconstruction, such as the existence of suitable 3D face databases, correspondence of 3D faces, feature detection, deformable 3D models, and typical assumptions used during the reconstruction process. Different approaches to the problem of 3D reconstruction are presented, and for each category the most important advantages and disadvantages are outlined. In particular, we describe example-based methods, stereo methods, video-based methods, and silhouette-based methods. The issue of performance evaluation of 3D face reconstruction algorithms, the state of the art, and future trends are also discussed.
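The example-based methods mentioned above typically express a new face as a linear combination of shapes learned from a 3D face database. The following is a minimal NumPy sketch of that idea under illustrative assumptions: all quantities (mean_shape, shape_basis, the landmark selection matrix) are toy placeholders, not drawn from any particular system surveyed in the paper.

```python
# Example-based reconstruction sketch: fit PCA shape coefficients to a sparse
# set of observed feature points. All data below is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_vertices = 500                 # toy mesh size
n_components = 20                # number of PCA shape components

mean_shape = rng.normal(size=(3 * n_vertices,))           # stacked (x, y, z)
shape_basis = rng.normal(size=(3 * n_vertices, n_components))

# Assume a feature detector gives a sparse set of correspondences; L selects
# the matching mesh coordinates.
landmark_idx = rng.choice(3 * n_vertices, size=60, replace=False)
L = np.zeros((60, 3 * n_vertices))
L[np.arange(60), landmark_idx] = 1.0
observed = L @ (mean_shape + shape_basis @ rng.normal(size=n_components))

# Regularized least-squares fit of the shape coefficients to the observations.
A = L @ shape_basis
reg = 1e-3 * np.eye(n_components)
coeffs = np.linalg.solve(A.T @ A + reg, A.T @ (observed - L @ mean_shape))

reconstructed_shape = mean_shape + shape_basis @ coeffs    # full 3D face shape
print(reconstructed_shape.shape)                           # (1500,)
```

The regularization term keeps the recovered coefficients small so the reconstruction stays within the plausible range spanned by the example faces.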

2020 ◽  
pp. short15-1-short15-9
Author(s):  
Vladimir Kniaz ◽  
Vladimir Knyaz ◽  
Vladimir Mizginov

Reconstruction of 3D face shape and texture is a challenging task in modern anthropology. While a skilled anthropologist can reconstruct the appearance of a prehistoric human from the skull, to date there are no automated methods for anthropological 3D face reconstruction and texturing. We propose a deep learning framework for synthesis and visualization of photorealistic textures for 3D face reconstruction of prehistoric humans. Our framework leverages a joint face-skull model based on generative adversarial networks. Specifically, we train two image-to-image translation models to separate the 3D face reconstruction and texturing stages. The first model translates an input depth map of a human skull to a possible depth map of its face together with a semantic labeling of its parts. The second model performs a multimodal translation of the generated semantic labeling to multiple photorealistic textures. We generate a dataset consisting of 3D models of human faces and skulls to train our 3D reconstruction model. The dataset includes paired samples obtained from computed tomography and unpaired samples representing 3D models of prehistoric human skulls. We train our texture synthesis model on the CelebAMask-HQ dataset. We evaluate our model qualitatively and quantitatively to demonstrate that it provides robust 3D face reconstruction of prehistoric humans with multimodal photorealistic texturing.
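To make the paired image-to-image translation idea concrete, here is a toy PyTorch sketch of one training step that maps a skull depth map to a face depth map with an adversarial plus L1 objective, in the spirit of pix2pix-style conditional GANs. The tiny convolutional networks and random tensors are stand-ins, not the authors' architectures or data.

```python
# One paired image-to-image translation step (skull depth -> face depth).
# Networks and tensors are minimal placeholders for illustration only.
import torch
import torch.nn as nn

generator = nn.Sequential(               # 1-channel skull depth -> 1-channel face depth
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
discriminator = nn.Sequential(            # judges (skull, face) pairs, 2 input channels
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

skull = torch.randn(4, 1, 64, 64)         # batch of skull depth maps (dummy data)
face = torch.randn(4, 1, 64, 64)          # paired ground-truth face depth maps

# Discriminator step: distinguish real pairs from generated pairs.
fake = generator(skull).detach()
d_real = discriminator(torch.cat([skull, face], dim=1))
d_fake = discriminator(torch.cat([skull, fake], dim=1))
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
         adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the paired target.
fake = generator(skull)
d_fake = discriminator(torch.cat([skull, fake], dim=1))
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake, face)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term is what keeps the translated depth map tied to its paired ground truth; the adversarial term pushes the output toward the distribution of plausible faces.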


Author(s):  
Hafiz Muhammad Umair Munir ◽  
Waqar S. Qureshi

3D facial reconstruction is an emerging and interesting application in the fields of computer graphics and computer vision. Reconstructing a 3D facial model from a single photo is difficult and challenging because of arbitrary poses, non-uniform illumination, expressions, and occlusions. Detailed 3D facial models are difficult to reconstruct because every algorithm has limitations related to profile views, fine detail, accuracy, and speed. The major problem is developing textured 3D faces for large poses, in-the-wild faces, and occluded faces, which requires large training data. Most algorithms use convolutional neural networks and deep learning frameworks to create the facial model. 3D face reconstruction algorithms are used for applications such as 3D printing, 3D VR games, and facial recognition. Different issues, problems, and their proposed solutions are discussed. Different facial datasets and facial 3D morphable models (3DMMs) used for 3D face reconstruction from a single photo are explained. The recent state-of-the-art 3D facial reconstruction and 3D face learning methods developed in 2019 are briefly explained.
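Single-photo methods built on a 3DMM usually regress a small set of coefficients and rebuild the dense shape from a linear model. The sketch below shows that parameterization only; the basis sizes and coefficient values are illustrative placeholders rather than those of any specific 3DMM.

```python
# Hedged sketch of the linear 3D Morphable Model parameterization: a face is
# the mean shape plus identity and expression deformations. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_vertices = 1000

mean_shape = rng.normal(size=(3 * n_vertices,))
identity_basis = rng.normal(size=(3 * n_vertices, 80))     # e.g. 80 identity modes
expression_basis = rng.normal(size=(3 * n_vertices, 29))   # e.g. 29 expression modes

# In a learning-based pipeline, a CNN would predict these coefficient vectors
# from the single input photo; here they are random placeholders.
alpha = rng.normal(size=80)    # identity coefficients
beta = rng.normal(size=29)     # expression coefficients

shape = mean_shape + identity_basis @ alpha + expression_basis @ beta
vertices = shape.reshape(n_vertices, 3)    # per-vertex (x, y, z) coordinates
print(vertices.shape)
```

Texture is typically handled the same way, with a separate linear basis over per-vertex colors, so the regressor's output stays low-dimensional even though the mesh is dense.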


Author(s):  
Xingjuan Cai ◽  
Yihao Cao ◽  
Yeqing Ren ◽  
Zhihua Cui ◽  
Wensheng Zhang

2018 ◽  
Vol 37 (2) ◽  
pp. 523-550 ◽  
Author(s):  
M. Zollhöfer ◽  
J. Thies ◽  
P. Garrido ◽  
D. Bradley ◽  
T. Beeler ◽  
...  
