Contribution of shape and surface reflectance information to kinship detection in 3D face images

2019 ◽  
Author(s):  
Vanessa Fasolt ◽  
Iris Jasmin Holzleitner ◽  
Anthony J Lee ◽  
Kieran J. O'Shea ◽  
Lisa Marie DeBruine

Previous research has established that humans are able to detect kinship among strangers from facial images alone. The current study investigated what facial information is used for making those kinship judgments, specifically the contribution of face shape and surface reflectance information (e.g., skin texture, tone, eye and eyebrow colour). Using 3D facial images, 195 participants were asked to judge the relatedness of one hundred child pairs, half of which were related and half of which were unrelated. Participants were randomly assigned to judge one of three stimulus versions: face images with both surface reflectance and shape information present (reflectance and shape version), face images with shape information removed but surface reflectance present (reflectance version), or face images with surface reflectance information removed but shape present (shape version). Using binomial logistic mixed models, we found that participants were able to detect relatedness at levels above chance for all three stimulus versions. Overall, shape and surface reflectance information each contribute to kinship detection, and the two cues are optimally combined when presented together.
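The abstract's key statistical claim is that detection accuracy exceeded chance, tested with binomial logistic mixed models. As a simplified illustration only (not the study's analysis), the sketch below collapses that to an intercept-only test of above-chance accuracy on simulated judgments; the participant count, trial structure, and the 0.58 accuracy rate are hypothetical stand-ins, and the random effects for participants and stimulus pairs that the real model includes are omitted.

```python
import numpy as np

# Simulated stand-in data: each trial is a correct (1) or incorrect (0)
# relatedness judgment. The real study fit binomial logistic mixed models
# with random effects; this sketch only tests the overall accuracy rate.
rng = np.random.default_rng(0)
n_participants, n_pairs = 195, 100
true_accuracy = 0.58                      # hypothetical above-chance rate
responses = rng.binomial(1, true_accuracy,
                         size=n_participants * n_pairs)

n = responses.size
p_hat = responses.mean()
# Normal-approximation z-test against the chance level of 0.5.
se = np.sqrt(0.25 / n)                    # standard error under H0: p = 0.5
z = (p_hat - 0.5) / se
above_chance = z > 1.96

print(f"accuracy = {p_hat:.3f}, z = {z:.1f}, above chance: {above_chance}")
```

Because this sketch treats all trials as independent, it ignores the clustering of responses within participants and pairs; the mixed model exists precisely to account for that non-independence, which otherwise inflates the z statistic.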

2019 ◽  
Vol 19 (12) ◽  
pp. 9 ◽  
Author(s):  
Vanessa Fasolt ◽  
Iris J. Holzleitner ◽  
Anthony J. Lee ◽  
Kieran J. O'Shea ◽  
Lisa M. DeBruine

i-Perception ◽  
2017 ◽  
Vol 8 (5) ◽  
pp. 204166951773111 ◽  
Author(s):  
Annabelle S. Redfern ◽  
Christopher P. Benton

We used highly variable, so-called ‘ambient’ images to test whether expressions affect the identity recognition of real-world facial images. Using movie segments of two actors unknown to our participants, we created image pairs – each image within a pair being captured from the same film segment. This ensured that, within pairs, variables such as lighting were constant whilst expressiveness differed. We created two packs of cards, one containing neutral face images, the other, their expressive counterparts. Participants sorted the card packs into piles, one for each perceived identity. As with previous studies, the perceived number of identities was higher than the veridical number of two. Interestingly, when looking within piles, we found a strong difference between the expressive and neutral sorting tasks. With expressive faces, identity piles were significantly more likely to contain cards of both identities. This finding demonstrates that, over and above other image variables, expressiveness variability can cause identity confusion; evidently, expression is not disregarded or factored out when we classify facial identity in real-world images. Our results provide clear support for a face processing architecture in which both invariant and changeable facial information may be drawn upon to drive our decisions of identity.


2010 ◽  
Vol 8 (6) ◽  
pp. 409-409
Author(s):  
F. Jiang ◽  
L. Dricot ◽  
V. Blanz ◽  
R. Goebel ◽  
B. Rossion

2021 ◽  
Vol 7 (3) ◽  
pp. 209-219
Author(s):  
Iris J Holzleitner ◽  
Alex L Jones ◽  
Kieran J O’Shea ◽  
Rachel Cassar ◽  
Vanessa Fasolt ◽  
...  

Abstract

Objectives: A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare.

Methods: We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height and weight were rated by 66, 59 and 52 raters respectively, who viewed both 2D and 3D images.

Results: In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength.  We found no evidence that physical characteristics could be judged more accurately from 3D than 2D images.

Conclusion: Our results suggest physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
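Rating accuracy in studies like this is commonly quantified as the correlation between perceived and measured values for each image format. The sketch below illustrates that comparison on simulated data; the noise levels and the use of a simple Pearson correlation are assumptions for illustration, not the paper's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_faces = 182   # matches the stimulus count reported in the abstract

# Simulated measured heights (cm) and noisy perceptual ratings from 2D and
# 3D images. The equal noise levels are hypothetical; under them, the two
# formats yield similar accuracy, mirroring the paper's conclusion.
height = rng.normal(170, 8, n_faces)
ratings_2d = height + rng.normal(0, 10, n_faces)
ratings_3d = height + rng.normal(0, 10, n_faces)

def accuracy(measured, rated):
    """Rating accuracy as the Pearson correlation with the measured value."""
    return np.corrcoef(measured, rated)[0, 1]

r2d = accuracy(height, ratings_2d)
r3d = accuracy(height, ratings_3d)
print(f"2D r = {r2d:.2f}, 3D r = {r3d:.2f}")
```

With a signal standard deviation of 8 and rating noise of 10, both correlations land in the moderate range, which is the kind of "somewhat accurate" judgment the abstract describes for weight and height.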


Author(s):  
Cuican Yu ◽  
Zihui Zhang ◽  
Huibin Li ◽  
Jian Sun ◽  
Zongben Xu

Author(s):  
Stefano Berretti ◽  
Alberto Del Bimbo ◽  
Pietro Pala

In this paper, an original hybrid 2D-3D face recognition approach is proposed using two orthogonal face images, frontal and side views of the face, to reconstruct the complete 3D geometry of the face. This is obtained using a model-based solution, in which a 3D template face model is morphed according to the correspondence of a limited set of control points identified on the frontal and side images in addition to the model. Control point identification is driven by an Active Shape Model applied to the frontal image, whereas subsequent manual assistance is required for control point localization on the side view. The reconstructed 3D model is finally matched, using the iso-geodesic regions approach, against a gallery of 3D face scans for the purpose of face recognition. Preliminary experimental results are provided on a small database, showing the viability of the approach.
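The morphing step described above depends on fitting a transform that carries template control points onto landmarks detected in the image. As a minimal illustration of that alignment idea (a least-squares similarity fit, i.e., Procrustes alignment, which is a common simplification and not necessarily the paper's morphing method), the sketch below recovers scale, rotation, and translation from matched 2D point sets; the control points are made-up toy coordinates.

```python
import numpy as np

def similarity_align(template, landmarks):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping template control points onto detected landmarks (Procrustes)."""
    mu_t, mu_l = template.mean(axis=0), landmarks.mean(axis=0)
    T, L = template - mu_t, landmarks - mu_l
    M = T.T @ L
    U, S, Vt = np.linalg.svd(M)
    R = (U @ Vt).T                         # optimal rotation
    if np.linalg.det(R) < 0:               # exclude reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    s = np.trace(R @ M) / (T ** 2).sum()   # optimal isotropic scale
    t = mu_l - s * mu_t @ R.T              # translation of centroids
    return s, R, t

# Toy control points: a template square vs. "landmarks" that are the same
# square scaled by 2 and shifted, standing in for Active Shape Model output.
template = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
landmarks = 2.0 * template + np.array([3.0, -1.0])
s, R, t = similarity_align(template, landmarks)
mapped = s * template @ R.T + t
print(np.allclose(mapped, landmarks))      # exact fit for this toy case
```

A real morphing pipeline would use this kind of rigid alignment only as a first step, then deform the template non-rigidly so every control point correspondence is satisfied; the similarity fit shown here cannot capture shape differences between faces.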

