Generating full-view face images from a single-view image

Author(s):  
Lei Zhong ◽  
ChangMin Bai ◽  
Jianfeng Li
2014 ◽  
Vol 981 ◽  
pp. 319-322
Author(s):  
Hai Bin Wu ◽  
Liang Tian ◽  
Bei Yi Wang ◽  
Chao Liu ◽  
Yan Wang ◽  
...  

Point cloud registration is necessary to acquire full-view data in coded structured-light (SL) three-dimensional measurement, which is based on single-view measured data. Focusing on the surface features of metal parts and the human body, the principle for constructing matching point pairs and the method for optimizing the transition matrix are analyzed. First, an auxiliary-stereo-target registration principle and device are presented to establish matching point pairs, avoiding the ill-conditioned least-squares solutions and iterative non-convergence caused by coplanar matching points. Second, the ICP method is adopted to acquire the transition matrix, and a mismatched-point-pair rejection method based on the orthogonal Gray code principle is designed to improve iterative convergence. Experimental results show that the registration error is about 0.8 mm, close to that of the global-camera method and higher than that of the surface-matching method. The method does not affect the measured surface and simplifies the measurement device.
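To make the registration pipeline described above more concrete, the sketch below implements a generic point-to-point ICP loop in NumPy/SciPy: nearest-neighbour construction of matching point pairs followed by a least-squares (SVD) estimate of the transition matrix. It is a minimal illustration under assumed inputs; the auxiliary-stereo-target pairing and the Gray-code-based rejection of mismatched pairs from the abstract are not reproduced, and all names and parameters here are illustrative rather than the paper's.

```python
# Minimal point-to-point ICP sketch (NumPy/SciPy). Illustrative only: the
# auxiliary-stereo-target pairing and Gray-code mismatch rejection described
# in the abstract are not modelled here.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iters=50, tol=1e-6):
    """Align source (Nx3) to target (Mx3); return 4x4 transition matrix and mean error."""
    tree = cKDTree(target)
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)         # nearest neighbours as matching point pairs
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                  # apply the incremental transform
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                         # accumulate the overall transition matrix
        err = dists.mean()
        if abs(prev_err - err) < tol:        # stop once the mean error settles
            break
        prev_err = err
    return T, err
```

In a pipeline like the one described, the coarse alignment obtained from the auxiliary-stereo-target point pairs would be applied to the single-view cloud before calling `icp`, and pairs flagged by the Gray-code check would be discarded before each least-squares step to help the iteration converge.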


2019 ◽  
Vol 2019 (5) ◽  
pp. 528-1-528-6
Author(s):  
Xinwei Liu ◽  
Christophe Charrier ◽  
Marius Pedersen ◽  
Patrick Bours

2014 ◽  
Vol 1 (3) ◽  
pp. 23-31
Author(s):  
Basava Raju ◽  
K. Y. Rama Devi ◽  
P. V. Kumar ◽  
...  

2018 ◽  
Vol 30 (6) ◽  
pp. 1046
Author(s):  
Yuliang Sun ◽  
Yongwei Miao ◽  
Lijie Yu ◽  
Renato Pajarola

2021 ◽  
Vol 7 (3) ◽  
pp. 209-219
Author(s):  
Iris J Holzleitner ◽  
Alex L Jones ◽  
Kieran J O’Shea ◽  
Rachel Cassar ◽  
Vanessa Fasolt ◽  
...  

Objectives: A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As the equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare.
Methods: We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height, and weight were rated by 66, 59, and 52 raters, respectively, who viewed both 2D and 3D images.
Results: In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than 2D images.
Conclusion: Our results suggest that physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
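Accuracy in studies of this kind is typically summarized by relating ratings to the measured characteristic. The sketch below is a minimal, hypothetical version of such a check for height, comparing 2D and 3D stimuli via Pearson correlations between mean ratings and measured values; the CSV file and column names are assumptions for illustration and do not reflect the paper's actual data or full analysis.

```python
# Hypothetical comparison of rating accuracy for 2D vs. 3D stimuli.
# Assumed columns: face_id, measured_height, mean_rating_2d, mean_rating_3d.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("height_ratings.csv")   # hypothetical file, one row per face

r_2d, p_2d = pearsonr(df["measured_height"], df["mean_rating_2d"])
r_3d, p_3d = pearsonr(df["measured_height"], df["mean_rating_3d"])

print(f"2D images: r = {r_2d:.2f} (p = {p_2d:.3f})")
print(f"3D images: r = {r_3d:.2f} (p = {p_3d:.3f})")
```

A reliably larger correlation for the 3D column than for the 2D column would be the kind of evidence for a 3D advantage that the study looked for and did not find.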


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods to distinguish natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.
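As one illustration of how such a dataset could be used, the sketch below fine-tunes an off-the-shelf ResNet-18 (PyTorch/torchvision) as a natural-versus-synthetic classifier on print-and-scan face images. The directory layout, model choice, and hyperparameters are assumptions made for the example and do not correspond to any specific baseline benchmarked in the paper.

```python
# Sketch: binary natural-vs-synthetic classifier for print-and-scan face images.
# Folder layout ("train/natural", "train/synthetic") and settings are assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("train", transform=tfm)   # one subfolder per class
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)             # natural vs. synthetic
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

Because printing and scanning removes much of the high-frequency residue that synthetic-image detectors rely on, a model trained only on digital images can be expected to transfer poorly to such scans, consistent with the failure of state-of-the-art detectors reported above.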

