Pose Registration of 3D Face Images

Author(s):  
Koushik Dutta ◽  
Debotosh Bhattacharjee ◽  
Mita Nasipuri
Keyword(s):  
3D Face

2021 ◽  
Vol 7 (3) ◽  
pp. 209-219
Author(s):  
Iris J Holzleitner ◽  
Alex L Jones ◽  
Kieran J O’Shea ◽  
Rachel Cassar ◽  
Vanessa Fasolt ◽  
...  

Abstract
Objectives: A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As the equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare.
Methods: We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height, and weight were rated by 66, 59, and 52 raters, respectively, who viewed both 2D and 3D images.
Results: In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than 2D images.
Conclusion: Our results suggest physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
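As a concrete illustration of how such an accuracy comparison can be operationalised, the short Python sketch below correlates mean ratings with measured values separately for 2D and 3D stimuli. All data and variable names are hypothetical placeholders; this is not the authors' analysis code.

```python
# Minimal sketch: rating accuracy as the correlation between mean perceived
# values and measured values, compared across 2D and 3D stimuli.
# All data below are synthetic stand-ins (hypothetical, not study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_targets = 182  # number of face images in the study

measured = rng.normal(170, 8, n_targets)              # e.g., measured height (cm)
rating_2d = measured + rng.normal(0, 6, n_targets)    # mean ratings from 2D images
rating_3d = measured + rng.normal(0, 6, n_targets)    # mean ratings from 3D images

# Accuracy operationalised as the Pearson correlation between ratings and measurements.
r_2d, p_2d = stats.pearsonr(rating_2d, measured)
r_3d, p_3d = stats.pearsonr(rating_3d, measured)
print(f"2D accuracy r = {r_2d:.2f} (p = {p_2d:.3f}); 3D accuracy r = {r_3d:.2f} (p = {p_3d:.3f})")
```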


Author(s):  
Stefano Berretti ◽  
Alberto Del Bimbo ◽  
Pietro Pala

In this paper, an original hybrid 2D-3D face recognition approach is proposed that uses two orthogonal face images, a frontal and a side view, to reconstruct the complete 3D geometry of the face. The reconstruction relies on a model-based solution in which a 3D template face model is morphed according to the correspondence of a limited set of control points identified on the frontal and side images and on the model itself. Control point identification is driven by an Active Shape Model applied to the frontal image, whereas manual assistance is subsequently required to localize the control points on the side view. The reconstructed 3D model is finally matched against a gallery of 3D face scans using the iso-geodesic regions approach for the purpose of face recognition. Preliminary experimental results on a small database show the viability of the approach.
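The sketch below illustrates the general idea of morphing a template to image-derived control points. It reduces the morphing to a least-squares affine fit, uses hypothetical vertex and control-point data, and omits the Active Shape Model and iso-geodesic matching stages; it is an assumption-laden sketch, not the authors' implementation.

```python
# Minimal sketch: deform a 3D template so its control points match control
# points recovered from the frontal/side views. The frontal view supplies
# (x, y), the side view supplies z; the "morph" is a least-squares affine fit.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine transform mapping src control points to dst."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (4, 3) transform matrix
    return A

def apply_affine(A, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ A

rng = np.random.default_rng(0)

# Hypothetical template vertices and control points (normally from a 3D face model).
template_vertices = rng.random((1000, 3))
template_controls = template_vertices[:12]             # 12 control points

# Control points observed on the images: x, y from the frontal view, z from the side view.
frontal_xy = template_controls[:, :2] * 1.1 + 0.02
side_z = template_controls[:, 2:3] * 0.95
observed_controls = np.hstack([frontal_xy, side_z])

A = fit_affine(template_controls, observed_controls)
morphed_vertices = apply_affine(A, template_vertices)  # reconstructed face geometry
```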


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, acquired face data are often seriously distorted: many collected face images are blurred or even have missing regions. Traditional image inpainting methods were structure-based, whereas currently popular methods rely on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallel vectors to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, so the fuzzy edge inpainting achieves a better visual match. As a result, face recognition performance is dramatically boosted.
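To make the GAN-based inpainting setup concrete, the following PyTorch sketch shows a masked generator/discriminator pair trained with an adversarial plus reconstruction loss. The tiny networks, the mask, and the way the missing-region guidance is fed to the generator are illustrative assumptions, not the architecture proposed in the paper.

```python
# Minimal sketch: GAN-style inpainting where a generator fills the masked
# region and a discriminator judges realism. Edge/mask guidance is
# approximated by concatenating the mask as an extra input channel.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 image channels + 1 mask channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img, mask):
        x = torch.cat([img * (1 - mask), mask], dim=1)   # hide the masked region
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, img):
        return self.net(img).mean(dim=[1, 2, 3])         # one realism score per image

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

img = torch.rand(8, 3, 64, 64)                           # stand-in face images
mask = torch.zeros(8, 1, 64, 64)
mask[..., 24:40, 24:40] = 1.0                            # missing region

# One training step: discriminator first, then generator (adversarial + L1 reconstruction).
fake = G(img, mask)
loss_d = bce(D(img), torch.ones(8)) + bce(D(fake.detach()), torch.zeros(8))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(D(fake), torch.ones(8)) + nn.functional.l1_loss(fake * mask, img * mask)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```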


2019 ◽  
Author(s):  
Vanessa Fasolt ◽  
Iris Jasmin Holzleitner ◽  
Anthony J Lee ◽  
Kieran J. O'Shea ◽  
Lisa Marie DeBruine

Previous research has established that humans are able to detect kinship among strangers from facial images alone. The current study investigated what facial information is used for making those kinship judgments, specifically the contribution of face shape and surface reflectance information (e.g., skin texture, tone, eye and eyebrow colour). Using 3D facial images, 195 participants were asked to judge the relatedness of one hundred child pairs, half of which were related and half of which were unrelated. Participants were randomly assigned to judge one of three stimulus versions: face images with both surface reflectance and shape information present (reflectance and shape version), face images with shape information removed but surface reflectance present (reflectance version) or face images with surface reflectance information removed but shape present (shape version). Using binomial logistic mixed models, we found that participants were able to detect relatedness at levels above chance for all three stimulus versions. Overall, both individual shape and surface reflectance information contribute to kinship detection, and both cues are optimally combined when presented together.
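The above-chance test can be sketched as follows. The study fits binomial logistic mixed models with random effects; for brevity this Python sketch uses a plain logistic regression on hypothetical data, where an intercept significantly above zero on the log-odds scale indicates above-chance accuracy.

```python
# Minimal sketch (hypothetical data and column names, not the study's analysis):
# testing whether relatedness judgments exceed the 50% chance level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_trials = 5000
df = pd.DataFrame({
    "correct": rng.binomial(1, 0.58, n_trials),          # 1 = pair judged correctly
    "version": rng.choice(["shape", "reflectance", "both"], n_trials),
})

# Intercept-only model per stimulus version: an intercept above 0 (log-odds)
# corresponds to accuracy above chance.
for version, sub in df.groupby("version"):
    fit = smf.logit("correct ~ 1", data=sub).fit(disp=0)
    print(version, "intercept log-odds:", round(fit.params["Intercept"], 3),
          "p =", round(fit.pvalues["Intercept"], 4))
```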


Author(s):  
Tang-Tang Yi

To address the low recognition accuracy obtained on 3D face images collected by traditional sensors, a face recognition algorithm for 3D point clouds collected by mixed image sensors is proposed. The algorithm first uses the 3D wheelbase to expand the edges of the face image. Based on the 3D wheelbase, noise in the expanded image is detected and removed by median filtering. Secondly, the priority of the boundary pixels for recognizing the face image in the denoised image is determined, and key features such as the illuminance line are analyzed, completing the recognition of the 3D point-cloud face image. Experiments show that the proposed algorithm improves the recognition accuracy of 3D face images, its recognition time is about four times lower than that of the traditional algorithm, and its recognition efficiency is high.
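A minimal sketch of the denoising stage follows, assuming a depth-image representation of the point cloud and approximating noise detection as deviation from a local median; this is an illustration under those assumptions, not the proposed algorithm.

```python
# Minimal sketch: detect impulsive sensor noise in a synthetic face depth map
# and replace only the flagged pixels with the median-filtered value.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)
depth = rng.normal(1.0, 0.01, (128, 128))                # synthetic depth map (metres)
noisy_idx = rng.integers(0, 128, (200, 2))
depth[noisy_idx[:, 0], noisy_idx[:, 1]] += rng.normal(0, 0.2, 200)   # impulsive noise

local_median = median_filter(depth, size=5)              # local median estimate
noise_mask = np.abs(depth - local_median) > 0.05          # flag strong deviations
denoised = np.where(noise_mask, local_median, depth)      # replace flagged pixels only
print("pixels replaced:", int(noise_mask.sum()))
```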

