Influence of Motion on Face Recognition

2012 ◽  
Vol 110 (1) ◽  
pp. 133-143
Author(s):  
Natale S. Bonfiglio ◽  
Valentina Manfredi ◽  
Eliano Pessa

The influence of motion information and temporal associations on the recognition of unfamiliar faces was investigated using two groups that performed a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of a face rotating in depth; the other group was presented with random sequences of the same views. In one condition, participants viewed the sequences of views in rapid succession with a negligible interstimulus interval (ISI); this condition was characterized by three different presentation times. In another condition, participants were presented a sequence with a 1-sec ISI between the views. Regular sequences of views with a negligible ISI and a shorter presentation time were hypothesized to give rise to better recognition, owing to a stronger impression of face rotation. Analysis of data from 45 participants showed that a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performances associated with regular and random sequences were not significant.

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hoo Keat Wong ◽  
Alejandro J. Estudillo ◽  
Ian D. Stephen ◽  
David R. T. Keeble

Abstract It is widely accepted that holistic processing is important for face perception. However, it remains unclear whether the other-race effect (ORE) (i.e. superior recognition for own-race faces) arises from reduced holistic processing of other-race faces. To address this issue, we adopted a cross-cultural design in which Malaysian Chinese, African, European Caucasian and Australian Caucasian participants performed four different tasks: (1) yes–no face recognition, (2) composite, (3) whole-part and (4) global–local tasks. Each face task was completed with unfamiliar own- and other-race faces. Results showed a pronounced ORE in the face recognition task. Both composite-face and whole-part effects were found; however, these holistic effects did not appear to be stronger for other-race faces than for own-race faces. In the global–local task, Malaysian Chinese and African participants demonstrated a stronger global processing bias than both European Caucasian and Australian Caucasian participants. Importantly, we found little or no cross-task correlation between any of the holistic processing measures and face recognition ability. Overall, our findings cast doubt on the prevailing account that the ORE in face recognition is due to reduced holistic processing of other-race faces. Further studies should adopt an interactionist approach taking into account cultural, motivational, and socio-cognitive factors.


Author(s):  
Taha H. Rassem ◽  
Nasrin M. Makbol ◽  
Sam Yin Yee

Nowadays, face recognition has become one of the most important topics in computer vision and image processing, owing to its usefulness in many applications. The key question in face recognition is how to extract distinguishable features from the image so as to achieve high recognition accuracy. The local binary pattern (LBP) and many of its variants have been used as texture features in many face recognition systems. Although LBP has performed well in many fields, it is sensitive to noise, and different patterns may be classified by LBP into the same class, which reduces its discriminating property. The Completed Local Ternary Pattern (CLTP) is a texture feature recently proposed to overcome these drawbacks of LBP. CLTP has outperformed LBP and some of its variants in many fields, such as texture, scene, and event image classification. In this study, we investigate the performance of the CLTP operator for the face recognition task. The Japanese Female Facial Expression (JAFFE) and FEI face databases are used in the experiments. In the experimental results, CLTP outperformed several previous texture descriptors, achieving classification rates for the face recognition task of up to 99.38% and 85.22% on JAFFE and FEI, respectively.
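As background for the comparison above, the basic LBP operator that CLTP refines can be sketched in a few lines. This is a minimal 3×3 NumPy version for illustration only; CLTP itself adds a ternary threshold and separate sign/magnitude components, which are not reproduced here.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 local binary pattern: threshold each pixel's eight
    neighbours against the centre pixel and pack the bits into a code."""
    # Neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

# Toy 3x3 patch: a single LBP code is produced for the centre pixel
img = np.array([[5, 3, 1],
                [4, 2, 6],
                [7, 8, 9]], dtype=np.int32)
print(lbp_3x3(img))  # → [[251]]
```

A histogram of these codes over image blocks is what typically serves as the texture feature vector; the noise sensitivity mentioned above comes from the hard `>=` threshold at the centre value.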


2016 ◽  
Author(s):  
Anya Chakraborty ◽  
Bhismadev Chakrabarti

Abstract We live in an age of 'selfies'. Yet how we look at our own faces has seldom been systematically investigated. In this study we test whether visual processing of self-faces differs from that of other faces, using psychophysics and eye-tracking. Specifically, we tested the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look at the lower part of the face for a longer duration for self-faces than for other-faces. Participants with a reduced overlap between self- and other-face representations, as indexed by a steeper slope of the psychometric response curve for self-face recognition, spent a greater proportion of time looking at the upper regions of faces identified as self. Additionally, we tested the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing, particularly in the psychological domain. Autistic traits were associated with reduced looking time to both self and other faces. However, no self-face-specific association with autistic traits was noted, suggesting that autism-related features may be related to self-processing in a domain-specific manner.
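The slope measure used above can be illustrated with a toy psychometric function: a steeper logistic over the self-other morph continuum indexes a sharper (less overlapping) self-face representation. A minimal sketch with synthetic responses, not the study's data:

```python
import numpy as np

def logistic(x, x0, k):
    """Psychometric curve: probability of a 'self' response as a
    function of morph level x (0 = other, 1 = self)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def fit_slope(x, p, x0=0.5):
    """Recover the slope k by a coarse grid search (illustrative only)."""
    ks = np.linspace(0.5, 40, 500)
    errs = [np.sum((logistic(x, x0, k) - p) ** 2) for k in ks]
    return ks[int(np.argmin(errs))]

morph = np.linspace(0, 1, 11)
sharp = logistic(morph, 0.5, 20)  # little self-other overlap: steep curve
fuzzy = logistic(morph, 0.5, 4)   # large overlap: shallow curve
print(fit_slope(morph, sharp) > fit_slope(morph, fuzzy))  # → True
```

In practice the curve would be fitted to binary responses with maximum likelihood rather than a least-squares grid search, but the recovered slope plays the same role as the study's psychometric index.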


Author(s):  
João Baptista Cardia ◽  
Aparecido Nilceu Marana

Many situations in our everyday life require our identification. Biometrics-based methods, besides allowing such identification, can help to prevent fraud. Among the many biometric features, the face is one of the most popular due to its intrinsic and important properties, such as universality, acceptability, low cost, and covert identification. On the other hand, traditional automatic face recognition methods based on 2D features cannot properly deal with some very frequent challenges, such as occlusion and illumination and pose variations. In this paper we propose a new method for face recognition based on the fusion of 3D low-level local features, ACDN+P and 3DLBP, using depth images captured by cheap Kinect V1 sensors. In order to improve the low quality of the point cloud provided by such devices, Symmetric Filling, Iterative Closest Point, and the Savitzky-Golay filter are used in the preprocessing stage of the proposed method. Experimental results obtained on the EURECOM Kinect dataset showed that the proposed method improves face recognition rates.
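The Savitzky-Golay step in the preprocessing stage smooths noisy depth data while preserving surface shape. A minimal NumPy sketch of the idea, with a hypothetical depth profile and window parameters rather than the paper's; a real pipeline would use an optimized implementation such as SciPy's `savgol_filter`:

```python
import numpy as np

def savgol(y, half=5, order=2):
    """Minimal Savitzky-Golay smoother: least-squares fit a low-order
    polynomial in each sliding window and keep its value at the centre."""
    x = np.arange(-half, half + 1)
    out = y.astype(float).copy()
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half:i + half + 1], order)
        out[i] = np.polyval(coeffs, 0)  # value at the window centre
    return out

# Hypothetical noisy depth profile: one row of a depth map, in millimetres
rng = np.random.default_rng(0)
clean = 800 + 50 * np.sin(np.linspace(0, np.pi, 64))
noisy = clean + rng.normal(0, 5, size=64)
smooth = savgol(noisy)

# Smoothing should reduce the error against the clean profile
print(np.abs(smooth[5:-5] - clean[5:-5]).mean()
      < np.abs(noisy[5:-5] - clean[5:-5]).mean())  # → True
```

Unlike a plain moving average, the polynomial fit tracks the curvature of the face surface, which is why it suits depth data from sensors like the Kinect V1.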


2001 ◽  
Vol 15 (4) ◽  
pp. 275-285 ◽  
Author(s):  
Melissa S. James ◽  
Stuart J. Johnstone ◽  
William G. Hayward

Abstract The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6), while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces were presented in a study phase, followed by a test phase where subjects indicated whether inverted and upright faces were studied or novel via a button-press response. An inversion effect, illustrating the disruption of upright configural information, was reflected in accuracy measures and in greater lateral N2 amplitude to inverted faces, suggesting that structural encoding is harder for inverted faces. An own-race advantage was found, which may reflect the use of configural encoding for the more frequently experienced own-race faces, and feature-based encoding for the less familiar other-race faces, and was reflected in accuracy measures and ERP effects. The midline N2 was larger to configurally encoded faces (i.e., own-race and upright), possibly suggesting configural encoding involves more complex processing than feature-based encoding. An N400-like component was sensitive to feature manipulations, with greater amplitude to other-race than own-race faces and to inverted than upright faces. This effect was interpreted as reflecting increased activation of incompatible representations activated by a feature-based strategy used in processing of other-race and inverted faces. The late positive complex was sensitive to configural manipulation with larger amplitude to other-race than own-race faces, and was interpreted as reflecting the updating of an own-race norm used in face recognition, to incorporate other-race information.


Perception ◽  
10.1068/p5779 ◽  
2007 ◽  
Vol 36 (9) ◽  
pp. 1368-1374 ◽  
Author(s):  
Richard Russell ◽  
Pawan Sinha

The face recognition task we perform most often in everyday experience is the identification of people with whom we are familiar. However, because of logistical challenges, most studies focus on unfamiliar-face recognition, wherein subjects are asked to match or remember images of unfamiliar people's faces. Here we explore the importance of two facial attributes—shape and surface reflectance—in the context of a familiar-face recognition task. In our experiment, subjects were asked to recognize color images of the faces of their friends. The images were manipulated such that only reflectance or only shape information was useful for recognizing any particular face. Subjects were actually better at recognizing their friends' faces from reflectance information than from shape information. This provides evidence that reflectance information is important for face recognition in ecologically relevant contexts.


2009 ◽  
Vol 21 (4) ◽  
pp. 625-641 ◽  
Author(s):  
Jürgen M. Kaufmann ◽  
Stefan R. Schweinberger ◽  
A. Mike Burton

We used ERPs to investigate neural correlates of face learning. At learning, participants viewed video clips of unfamiliar people, which were presented either with or without voices providing semantic information. In a subsequent face-recognition task (four trial blocks), learned faces were repeated once per block and presented interspersed with novel faces. To disentangle face from image learning, we used different images for face repetitions. Block effects demonstrated that engaging in the face-recognition task modulated ERPs between 170 and 900 msec poststimulus onset for learned and novel faces. In addition, multiple repetitions of different exemplars of learned faces elicited an increased bilateral N250. Source localizations of this N250 for learned faces suggested activity in fusiform gyrus, similar to that found previously for N250r in repetition priming paradigms [Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M., & Kaufmann, J. M. Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409, 2002]. Multiple repetitions of learned faces also elicited increased central–parietal positivity between 400 and 600 msec and caused a bilateral increase of inferior–temporal negativity (>300 msec) compared with novel faces. Semantic information at learning enhanced recognition rates. Faces that had been learned with semantic information elicited somewhat less negative amplitudes between 700 and 900 msec over left inferior–temporal sites. Overall, the findings demonstrate a role of the temporal N250 ERP in the acquisition of new face representations across different images. They also suggest that, compared with visual presentation alone, additional semantic information at learning facilitates postperceptual processing in recognition but does not facilitate perceptual analysis of learned faces.


2021 ◽  
pp. 1-14
Author(s):  
N Kavitha ◽  
K Ruba Soundar ◽  
T Sathis Kumar

In recent years, face recognition has been an active research area in computer vision and biometrics. Many feature extraction and classification algorithms have been proposed to perform face recognition. However, the former usually suffer from the wide variations in face images, while the latter usually discard the local facial features, which have proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of the Key points Local Binary/Tetra Pattern (KP-LTrP) and Improved Hough Transform (IHT) with the Improved DragonFly Algorithm-Kernel Ensemble Learning Machine (IDFA-KELM) is proposed to address the face recognition problem in unconstrained conditions. Initially, the face images are collected from a publicly available dataset. Noise in the input image is then removed by preprocessing with an Adaptive Kuwahara filter (AKF). After preprocessing, the face is detected in the preprocessed image using the Tree-Structured Part Model (TSPM). Features such as KP-LTrP and IHT are then extracted from the detected face, and the extracted features are reduced using the Information-gain-based Kernel Principal Component Analysis (IG-KPCA) algorithm. Finally, these reduced features are fed to IDFA-KELM to perform face recognition. The outcomes of the proposed method are examined and contrasted with other existing techniques to confirm that the proposed IDFA-KELM detects human faces efficiently in the input images.
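The dimensionality-reduction step in the pipeline above can be sketched with plain RBF-kernel PCA. This is a hedged illustration only: the information-gain weighting that makes it IG-KPCA, and the IDFA-KELM classifier, are the paper's contributions and are not reproduced, and the feature matrix here is random stand-in data rather than KP-LTrP/IHT features:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=0.01):
    """Plain RBF-kernel PCA: build the kernel matrix, double-centre it,
    and project onto the leading eigenvectors."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(vals[idx])  # unit-variance directions
    return Kc @ alphas

rng = np.random.default_rng(1)
feats = rng.normal(size=(20, 50))         # stand-in for extracted features
reduced = kernel_pca(feats, n_components=5)
print(reduced.shape)  # → (20, 5)
```

The reduced vectors (here 20 samples in 5 dimensions) would then be the input to the ensemble classifier.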


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 944
Author(s):  
Stefano Pini ◽  
Guido Borghi ◽  
Roberto Vezzani ◽  
Davide Maltoni ◽  
Rita Cucchiara

Nowadays, we are witnessing the wide diffusion of active depth sensors. However, the generalization capabilities and performance of deep face recognition approaches based on depth data are hindered by the different sensor technologies and by the currently available depth-based datasets, which are limited in size and acquired through the same device. In this paper, we present an analysis of the use of depth maps, as obtained by active depth sensors, and deep neural architectures for the face recognition task. We compare different depth data representations (depth and normal images, voxels, point clouds), deep models (two-dimensional and three-dimensional Convolutional Neural Networks, PointNet-based networks), and pre-processing and normalization techniques in order to determine the configuration that maximizes the recognition accuracy and is capable of generalizing better on unseen data and novel acquisition settings. Extensive intra- and cross-dataset experiments, performed on four public databases, suggest that representations and methods based on normal images and point clouds perform and generalize better than other 2D and 3D alternatives. Moreover, we propose a novel challenging dataset, namely MultiSFace, in order to specifically analyze the influence of the depth map quality and the acquisition distance on the face recognition accuracy.
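Two of the depth representations compared above, point clouds and normal images, can both be derived from a raw depth map given a pinhole camera model. A minimal NumPy sketch with hypothetical camera intrinsics (fx, fy, cx, cy are illustrative values, not those of any sensor in the paper):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) to a point cloud
    using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)       # shape (h, w, 3)

def normals_from_points(points):
    """Estimate per-pixel surface normals as the cross product of the
    local horizontal and vertical gradients of the point cloud."""
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n

# Flat plane at 1 m: every normal should point straight along +z
depth = np.ones((8, 8))
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=4.0, cy=4.0)
normals = normals_from_points(pts)
print(normals[4, 4])  # → [0. 0. 1.]
```

Mapping the three normal components to image channels yields the "normal image" representation, which the experiments above found to generalize well across sensors.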


Author(s):  
C. Ratanaubol ◽  
P. Wannapiroon ◽  
P. Nilsook

Face recognition technology is widely used in many applications. In some activities, however, installing the device and the registration booth is too difficult, as it requires both manpower and time; enrolling students attending university activities is one example. Scanning faces one by one wastes a great deal of time, and alternative methods may be easy to falsify. This work uses digital imagery of student participation to solve the problem by developing a system that detects participants' faces in digital photographs, obtained as still images and videos taken by several photographers throughout the event. These collected pictures and videos serve as digital proof: the system matches the detected faces against the students in the database to verify who participated in the activity. The system compares the detected faces with pre-recorded photographs of students; the algorithm checks for duplicate data and records the activity in the database, where users can specify a category or activity name for later inspection.

