Within-person variability can improve the identification of unfamiliar faces across changes in viewpoint

2021 ◽  
pp. 174702182110097
Author(s):  
Niamh Hunnisett ◽  
Simone Favelle

Unfamiliar face identification is concerningly error prone, especially across changes in viewing conditions. Within-person variability has been shown to improve matching performance for unfamiliar faces, but this has only been demonstrated using images of a front view. In this study, we test whether the advantage of within-person variability from front views extends to matching target images of a face rotated in view. Participants completed either a simultaneous matching task (Experiment 1) or a sequential matching task (Experiment 2) in which they matched the identity of a face shown in an array of either one or three ambient front-view images to a target image shown in front, three-quarter, or profile view. Although the effect was stronger in Experiment 2, match trials showed a consistent pattern across both experiments: a multiple-image matching benefit for front, three-quarter, and profile-view targets. We found multiple-image effects for match trials only, indicating that providing observers with multiple ambient images confers an advantage for recognising different images of the same identity but not for discriminating between images of different identities. Signal detection measures also indicate a multiple-image advantage despite a more liberal response bias on multiple-image trials. Our results show that within-person variability information for unfamiliar faces can be generalised across views and can provide insights into the initial processes involved in the representation of familiar faces.
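The signal detection measures referred to here are typically sensitivity (d′) and criterion (c), derived from the hit rate on match trials and the false-alarm rate on mismatch trials; a more liberal bias corresponds to a more negative c. The Python sketch below illustrates the standard computation with a log-linear correction for extreme rates; the trial counts are hypothetical and this is not the authors' analysis code.

```python
# Minimal sketch (not the authors' analysis): signal detection measures
# for a face matching task. Hit rate = proportion of match trials answered
# "same"; false-alarm rate = proportion of mismatch trials answered "same".
# d' indexes sensitivity; criterion c indexes response bias
# (negative c = liberal, i.e. a tendency to respond "same").
from statistics import NormalDist

def dprime_and_criterion(hits, n_match, false_alarms, n_mismatch):
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (n_match + 1)
    fa_rate = (false_alarms + 0.5) / (n_mismatch + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for illustration only:
print(dprime_and_criterion(hits=40, n_match=48, false_alarms=12, n_mismatch=48))
```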

Perception ◽  
10.1068/p3335 ◽  
2002 ◽  
Vol 31 (8) ◽  
pp. 985-994 ◽  
Author(s):  
Ruth Clutterbuck ◽  
Robert A Johnston

An experiment is reported in which participants matched complete images of unfamiliar, moderately familiar, and highly familiar faces with simultaneously presented images of internal and external features. Participants had to decide if the two images depicted the same or different individuals. Matches to internal features were made faster to highly familiar faces than both to moderately familiar and to unfamiliar faces, and matches to moderately familiar faces were made faster than to unfamiliar faces. For external feature matches, this advantage was only found for “different” decision matches to highly familiar faces compared to unfamiliar faces. The results indicate that the differences in familiar and unfamiliar face processing are not the result of all-or-none effects, but seem to have a graded impact on matching performance. These findings extend the earlier work of Young et al. (1985, Perception, 14, 737–746), and we discuss the possibility of using the matching task as an indirect measure of face familiarity.


2018 ◽  
Author(s):  
Anna K Bobak ◽  
Viktoria Roumenova Mileva ◽  
Peter Hancock

The role of image colour in face identification has received little attention in research, despite the importance of identifying people from photographs in identity documents (IDs). Here, in two experiments, we investigated whether colour congruency of two photographs shown side by side affects face matching accuracy. Participants were presented with two images from the Models Face Matching Test (Experiment 1) and from a newly devised matching task incorporating female faces (Experiment 2) and asked to decide whether they showed the same person or two different people. The photographs were either both in colour, both in grayscale, or mixed (one in grayscale and one in colour). Participants were more likely to accept a pair of images as a “match”, i.e. the same person, in the mixed condition, regardless of whether the identity of the pair was the same or not. This demonstrates a clear shift in bias between the “congruent” colour conditions and the mixed trials. In addition, there was a small decline in accuracy in the mixed condition relative to when both images were presented in colour. Our study provides the first evidence that the colour format of document photographs matters for face matching performance. This finding has important implications for the design and regulation of photographic ID worldwide.


Perception ◽  
2020 ◽  
Vol 49 (3) ◽  
pp. 298-309
Author(s):  
David J. Robertson ◽  
Jet G. Sanders ◽  
Alice Towler ◽  
Robin S. S. Kramer ◽  
Josh Spowage ◽  
...  

Hyper-realistic face masks have been used as disguises in at least one border crossing and in numerous criminal cases. Experimental tests using these masks have shown that viewers accept them as real faces under a range of conditions. Here, we tested mask detection in a live identity verification task. Fifty-four visitors at the London Science Museum viewed a mask wearer at close range (2 m) as part of a mock passport check. They then answered a series of questions designed to assess mask detection, while the masked traveller was still in view. In the identity matching task, 8% of viewers accepted the mask as matching a real photo of someone else, and 82% accepted the match between masked person and masked photo. When asked if there was any reason to detain the traveller, only 13% of viewers mentioned a mask. A further 11% picked disguise from a list of suggested reasons. Even after reading about mask-related fraud, 10% of viewers judged that the traveller was not wearing a mask. Overall, mask detection was poor and was not predicted by unfamiliar face matching performance. We conclude that hyper-realistic face masks could go undetected during live identity checks.


2021 ◽  
Author(s):  
Taylor Diarmuid Gogan ◽  
Jennifer L Beaudry ◽  
Julian Oldmeadow

This study investigates whether variability in perceived trait judgements disrupts our ability to match unfamiliar faces. In this preregistered study, 174 participants completed a face matching task in which they were asked to indicate whether two face images belonged to the same person or to different people (17,748 total data points). Participants completed 51 match trials consisting of images of the same person that differed substantially on one trait (trustworthiness, dominance, or attractiveness) with minimal differences in the alternate traits. Participants also completed 51 mismatch trials, each containing two photos of similar-looking individuals. We hypothesised that participants would make more errors on match trials in which the images differed in attractiveness ratings than on trials in which they differed in trustworthiness or dominance. Contrary to expectations, images that differed in attractiveness were matched most accurately, and there was no relationship between the extent of attractiveness differences and accuracy. There was some evidence that differences in perceived dominance and, to a lesser extent, trustworthiness were associated with lower face matching performance. However, these relationships were not significant when the alternate traits were accounted for. The findings of our study suggest that face matching performance is largely robust to variation in trait judgements.


2013 ◽  
Vol 2013 ◽  
pp. 1-11
Author(s):  
Lurong Shen ◽  
Xinsheng Huang ◽  
Yuzhuang Yan ◽  
Yongbin Zheng ◽  
Wanying Xu

Mutual information (MI) has been widely used in multisensor image matching, but it can produce mismatches when images contain cluttered backgrounds. Incorporating additional prior information can substantially improve matching performance. In this paper, a robust Bayesian estimated mutual information measure for multisensor image matching, termed BMI, is proposed. The method exploits gradient prior information: the prior is estimated with kernel density estimation (KDE), and the likelihood is modelled from the distance between gradient orientations. To further improve robustness, matching is restricted to regions where the corresponding pixels of the template image are sufficiently salient. Experiments on several groups of multisensor images show that the proposed method outperforms standard MI in robustness and accuracy, performs comparably to Pluim's method, and is considerably cheaper to compute.
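As a point of reference, the Python sketch below shows the standard MI template-matching baseline that BMI extends: MI is computed from the joint intensity histogram of the template and each image window, and the window maximising MI is taken as the match. This is not the authors' code; the gradient-orientation prior, KDE estimation, and saliency restriction described in the abstract are omitted.

```python
# Minimal sketch: standard mutual information (MI) template matching,
# the baseline that the BMI method builds on.
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    """MI of two equally sized grayscale patches via a joint histogram."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of patch_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

def match_by_mi(image, template):
    """Slide the template over the image and return the best-scoring offset."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = np.full((ih - th + 1, iw - tw + 1), -np.inf)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = mutual_information(image[y:y + th, x:x + tw], template)
    return np.unravel_index(np.argmax(scores), scores.shape)
```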


Perception ◽  
2019 ◽  
Vol 48 (2) ◽  
pp. 175-184 ◽  
Author(s):  
Robin S. S. Kramer ◽  
Sophie Mohamed ◽  
Sarah C. Hardy

Matching two different images of an unfamiliar face is difficult, yet we rely on this process every day when proving our identity. Although previous work with laboratory photosets has shown that performance is error-prone, few studies have focussed on how accurately people carry out this matching task using photographs taken from official forms of identification. In Experiment 1, participants matched high-resolution, colour face photos with current UK driving licence photos of the same group of people in a sorting task. With an average of 19 mistaken pairings out of 30, our results showed that this task was both difficult and error-prone. In Experiment 2, high-resolution photographs were paired with either driving licence or passport photographs in a typical pairwise matching paradigm. We found no difference in performance between the two types of ID image, with both producing unacceptable levels of accuracy (around 75%–79% correct). The current work benefits from increased ecological validity and provides a clear demonstration that these forms of official identification are ineffective and that alternatives should be considered.


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Ray-Hon Chang ◽  
Yean-Lu Chang

Background: A systematic approach to treating glabella-radix deficiency is lacking, and the management of brow-tip aesthetic lines remains technically challenging. Objectives: The authors describe implantation of a customized Gore-Tex prosthesis combined with primary augmentation rhinoplasty to address glabella-radix deficiency. Methods: Fifty Asian patients with glabella-radix deficiency who received implantation and primary augmentation rhinoplasty over an 8-year period were retrospectively evaluated. Patients were assigned to categories based on brow-tip contour lines and symmetry patterns, and implant dimensions were ascertained from the contour type and from simulated postoperative results. Results: Eleven men and 39 women were included in the study; the mean patient age was 27.22 years, and mean follow-up was 22.8 months. Seven of the patients were assigned to the type I/Ia category, 24 to type II/IIa, and 19 to type III/IIIa. Forty-five patients were considered to have satisfactory surgical results, with curved, symmetric, and normally spaced brow-tip lines on front view and a smooth frontonasal transition on profile view. Complications occurred in 5 patients and included infection (1 patient), inadequate augmentation (2), and palpable margin folding of the Gore-Tex device (2). Conclusions: Deformities of brow-tip contour lines coincide in severity with glabella-radix deficiencies. Knowledge of the patterns of brow-tip lines, combined with postoperative image simulation, can help the surgeon design an appropriate glabella-radix prosthesis. When placed in conjunction with other augmentation rhinoplasty procedures, the glabella-radix implant yields sufficient, predictable nasal projection and a harmonious facial aesthetic. Level of Evidence: 4


2019 ◽  
Vol 34 (2) ◽  
pp. 237-244
Author(s):  
Alistair J Harvey ◽  
Danny A Tomlinson

Background: According to alcohol myopia theory, alcohol reduces cognitive resources and restricts the drinker’s attention to only the more prominent aspects of a visual scene. As human hairstyles are often salient and serve as important facial recognition cues, we consider whether alcohol restricts attention to this region of faces upon initial viewing. Aims: Participants with higher breath alcohol concentrations just prior to encoding a series of unfamiliar faces were expected to be poorer than more sober counterparts at recognising the internal but not external features of those faces at test. Methods: Drinkers in a nearby bar (n = 76) were breathalysed and then shown a sequence of 21 full face photos. After a filled five-minute retention interval they completed a facial recognition task requiring them to identify the full, internal or external region of each of these among a sequence of 21 previously unseen (part or whole) faces. Results: As predicted, higher breath concentrations were associated with poorer discrimination of internal but not external face regions. Conclusions: Our findings suggest that alcohol restricts unfamiliar face encoding by narrowing the scope of attention to the exterior region of unfamiliar faces. This has important implications for drunk eyewitness accuracy, though further investigation is needed to see if the effect is mediated by gender, hair length and face feature distinctiveness.


Perception ◽  
2018 ◽  
Vol 47 (4) ◽  
pp. 414-431 ◽  
Author(s):  
Robin S. S. Kramer ◽  
Michael G. Reynolds

Research has systematically examined how laboratory participants and real-world practitioners decide, using frontal images, whether two face photographs show the same person. In contrast, research has not examined face matching using profile images. In Experiment 1, we ask whether matching unfamiliar faces is easier with frontal than with profile views. Participants completed the original, frontal version of the Glasgow Face Matching Test, and also an adapted version in which all face pairs were presented in profile. There was no difference in performance across the two tasks, suggesting that both views are similarly useful for face matching. Experiments 2 and 3 examined whether matching unfamiliar faces is improved when both frontal and profile views are provided. We compared face matching accuracy when both a frontal and a profile image of each face were presented with accuracy using each view alone. Surprisingly, we found no benefit when both views were presented together in either experiment. Overall, these results suggest either that frontal and profile views provide substantially overlapping information about identity or that participants are unable to use both sources of information when making decisions. Each of these conclusions has important implications for face matching research and for the development of real-world identification procedures.


2020 ◽  
Vol 74 (3) ◽  
pp. 269-286
Author(s):  
Xiaomin Liu ◽  
Jun-Bao Li ◽  
Jeng-Shyang Pan ◽  
Shuo Wang ◽  
Xudong Lv ◽  
...  