The Role of Motion Models in Super-Resolving Surveillance Video for Face Recognition

Author(s):  
F. Lin ◽  
C. Fookes ◽  
V. Chandran ◽  
S. Sridharan
Perception ◽  
2021 ◽  
Vol 50 (2) ◽  
pp. 174-177
Author(s):  
Sarah Laurence ◽  
Jordyn Eyre ◽  
Ailsa Strathie

Expertise in familiar face recognition has been well-documented in several studies. Here, we examined the role of context using a surprise lecturer recognition test. Across two experiments, we found few students recognised their lecturer when they were unexpected, but accuracy was higher when the lecturer was preceded by a prompt. Our findings suggest that familiar face recognition can be poor in unexpected contexts.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1013
Author(s):  
Sayan Maity ◽  
Mohamed Abdel-Mottaleb ◽  
Shihab S. Asfour

Biometric identification from surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that did not receive appropriate attention in the past. First, it consolidates the model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using low-resolution face information alone. This eliminates the need to obtain high-resolution face images to create the gallery, which is required by the majority of low-resolution face recognition techniques, even though classification accuracy on high-resolution face images is considerably higher. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle; in contrast, we quantify the gait cycle precisely for each subject using only the frontal gait information. Approaches available in the literature train the recognition system on high-resolution images obtained in a controlled environment, whereas our proposed system trains the recognition algorithm on low-resolution face images captured in an unconstrained environment. The proposed system has two components, one responsible for frontal gait recognition and one for low-resolution face recognition; score-level fusion is then performed to combine the results of the two modalities.
Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in 93.5% Rank-1 recognition for frontal gait and 82.92% Rank-1 recognition for low-resolution faces. Score-level multimodal fusion raised this to 95.9% Rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
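The abstract reports score-level fusion of the gait and face match scores but does not specify the normalization or combination rule, so the sketch below shows one common scheme, min-max normalization followed by a weighted sum. All function names and the `w_gait` weight are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale match scores to [0, 1] so the two modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(gait_scores, face_scores, w_gait=0.5):
    """Weighted-sum score-level fusion of gait and face match scores."""
    g = min_max_normalize(gait_scores)
    f = min_max_normalize(face_scores)
    return w_gait * g + (1.0 - w_gait) * f

def rank1_identity(fused_scores, gallery_ids):
    """Rank-1 decision: the gallery identity with the highest fused score."""
    return gallery_ids[int(np.argmax(fused_scores))]
```

In practice the weight would be tuned on a validation set; weighted-sum fusion is attractive here because it degrades gracefully when one modality (e.g. the low-resolution face) is unreliable.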


Perception ◽  
10.1068/p5192 ◽  
2005 ◽  
Vol 34 (9) ◽  
pp. 1117-1134 ◽  
Author(s):  
Claus-Christian Carbon ◽  
Helmut Leder

We investigated the early stages of face recognition and the role of featural and holistic face information. We exploited the fact that, on inversion, the alienating disorientation of the eyes and mouth in thatcherised faces is hardly detectable. This effect allows featural and holistic information to be dissociated and was used to test specific face-processing hypotheses. In inverted thatcherised faces, the cardinal features are already correctly oriented, whereas in undistorted faces the whole Gestalt is coherent but all information is disoriented. Experiments 1 and 3 revealed that, for inverted faces, the processing of featural information precedes that of holistic information. Moreover, the processing of contextual information is necessary to process local featural information within a short presentation time (26 ms). Furthermore, for upright faces, holistic information seems to be available faster than for inverted faces (Experiment 2). These differences in processing inverted and upright faces presumably underlie the differential importance of featural and holistic information for inverted and upright faces.


Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 699 ◽  
Author(s):  
Carmen Moret-Tatay ◽  
Inmaculada Baixauli-Fortea ◽  
M. Dolores Grau Sevilla ◽  
Tatiana Quarti Irigaray

Face recognition is supported by the fusiform gyrus, which is also implicated in other tasks such as word recognition. Although these two processes share several similarities, there are remarkable differences, which have been studied with a wide range of approaches across different groups of participants. This research aims to examine how the word-processing system processes faces at different moments, and vice versa. Two experiments were carried out: Experiment 1 examined a classical discrimination task, while Experiment 2 examined very early moments of discrimination. In the first experiment, 20 Spanish university students volunteered to participate; a sample of 60 participants of different nationalities took part in Experiment 2. The roles of sex and place of origin were also considered in Experiment 1. No differences were found in Experiment 1 between men and women, nor between conditions. Experiment 2, however, showed shorter latencies for faces than for word names, as well as a stronger masked repetition priming effect for word identities and word names preceded by faces. Emerging methodologies in the field might help us better understand the relationship between these two processes. For this reason, a network analysis was carried out, revealing sub-communities of nodes related to face or word-name recognition that replicated across different groups of participants. Bootstrap inferences are proposed to account for variability in estimating probabilities in the current samples. This supports the view that both processes are related at early moments of recognition and that, rather than being independent, they might be bilaterally distributed with some expert specializations or preferences.


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrities' photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounded as makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images together with their synthetic makeup variants, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial versus full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
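The augmentation idea described above, pairing each original image with synthetic makeup variants so the network sees one identity under varied appearance, can be sketched roughly. The abstract does not specify how makeup is synthesized, so the per-channel tone shift below is only a crude hypothetical stand-in, and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_tone_shift(img, max_shift=0.15):
    """Random per-channel color shift on an HxWx3 image in [0, 1];
    a crude stand-in for makeup-induced skin-tone variation."""
    shift = rng.uniform(-max_shift, max_shift, size=(1, 1, 3))
    return np.clip(img + shift, 0.0, 1.0)

def augment_dataset(images, copies=3):
    """Keep each original image and add several synthetic variants,
    all sharing the original's identity label during training."""
    out = []
    for img in images:
        out.append(img)
        out.extend(synthetic_tone_shift(img) for _ in range(copies))
    return out
```

A real pipeline would add geometric and contouring variations as well, but the key point from the abstract is the same: the label stays fixed while appearance varies, pushing the network toward makeup-invariant features.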


2018 ◽  
Vol 24 (6) ◽  
pp. 582-608 ◽  
Author(s):  
Fernando M. Ramírez

Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons—including neurons bimodally tuned to mirror-symmetric face-views—followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face-identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance is stressed of explicit models relating neural properties to large-scale signals.
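The caveat about distinguishing angular from Euclidean distances can be made concrete: two response patterns that differ only in overall gain have zero angular distance but a large Euclidean distance, so the two metrics can license different inferences about pattern similarity. A minimal sketch (variable and function names illustrative):

```python
import numpy as np

def euclidean(a, b):
    """Straight-line distance; sensitive to overall response magnitude."""
    return float(np.linalg.norm(a - b))

def angular(a, b):
    """Angle between patterns in radians; ignores overall magnitude."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Two patterns differing only in gain: identical direction, different length.
p = np.array([1.0, 2.0, 3.0])
q = 3.0 * p
```

Here `angular(p, q)` is zero while `euclidean(p, q)` is large, which is why a multivariate analysis built on one metric can suggest mirror-symmetric tuning where the other does not.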


2017 ◽  
Vol 17 (5) ◽  
pp. 995-1009
Author(s):  
Pramesti Festa Perdananti ◽  
Bambang Hidayat ◽  
Suci Aulia

2006 ◽  
Vol 77 (2) ◽  
pp. 297-311 ◽  
Author(s):  
Chiara Turati ◽  
Viola Macchi Cassia ◽  
Francesca Simion ◽  
Irene Leo
