Multimodal Low Resolution Face and Frontal Gait Recognition from Surveillance Video

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1013
Author(s):  
Sayan Maity ◽  
Mohamed Abdel-Mottaleb ◽  
Shihab S. Asfour

Biometric identification using surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that have not received appropriate attention in the past. First, it consolidates the model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using low-resolution face information. This eliminates the need to obtain high-resolution face images to create the gallery, which the majority of low-resolution face recognition techniques require; moreover, such techniques achieve considerably higher classification accuracy on high-resolution face images than on low-resolution ones. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle, whereas we quantify the gait cycle precisely for each subject using only the frontal gait information. The approaches available in the literature use high-resolution images obtained in a controlled environment to train the recognition system; in contrast, our proposed system trains the recognition algorithm using low-resolution face images captured in an unconstrained environment. The proposed system has two components, one responsible for frontal gait recognition and the other for low-resolution face recognition. Finally, score-level fusion combines the results of the frontal gait recognition and the low-resolution face recognition.
Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in a 93.5% Rank-1 recognition rate for frontal gait and an 82.92% Rank-1 rate for low-resolution face recognition. Score-level multimodal fusion achieved 95.9% Rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
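As a rough illustration of the score-level fusion step described above, the following sketch combines per-subject match scores from the two matchers with a weighted-sum rule. The weights and the min-max normalization are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch of score-level fusion (weighted-sum rule); the weights and
# the min-max normalization are assumptions, not the paper's exact values.

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(gait_scores, face_scores, w_gait=0.6, w_face=0.4):
    """Weighted-sum fusion of per-gallery-subject match scores."""
    g = min_max_normalize(gait_scores)
    f = min_max_normalize(face_scores)
    return [w_gait * gs + w_face * fs for gs, fs in zip(g, f)]

def rank1_identity(fused_scores):
    """Rank-1 decision: index of the gallery subject with the best fused score."""
    return max(range(len(fused_scores)), key=lambda i: fused_scores[i])
```

In practice, the fusion weights would be tuned on a validation set so that the stronger modality (here, gait) dominates.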

2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Touqeer Ahmad ◽  
Lindu Zhao

One-shot low-resolution face recognition is a prevalent problem in law enforcement, where low-resolution face images captured by surveillance cameras must generally be recognized against only one high-resolution profile face image per subject in the database. The problem is very tough because the available samples are quite few and the quality of the unknown images is quite low. To effectively address this issue, this paper proposes an Adapted Discriminative Coupled Mappings (AdaDCM) approach, which integrates domain adaptation and discriminative learning. To achieve good domain adaptation performance on small datasets, a new domain adaptation technique called Bidirectional Locality Matching-based Domain Adaptation (BLM-DA) is first developed. The proposed AdaDCM is then formulated by unifying BLM-DA and discriminative coupled mappings into a single framework. AdaDCM is extensively evaluated on the FERET, LFW, and SCface databases, which include LR face images obtained in constrained, unconstrained, and real-world environments. The promising results on these datasets demonstrate the effectiveness of AdaDCM for one-shot LR face recognition.


Author(s):  
ROOPA R ◽  
MRS. VANI.K. S ◽  
MRS. NAGAVENI. V

Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame. The output of image processing may be either an image or a set of characteristics or parameters related to the image. In many facial analysis systems, such as face recognition, the face is used as an important biometric. Facial analysis systems need high-resolution images for their processing, but the video obtained from inexpensive surveillance cameras is of poor quality, and processing poor-quality images leads to unexpected results. To detect face images in video captured by inexpensive surveillance cameras, we use the AdaBoost algorithm. If we feed the detected low-resolution, low-quality face images to face recognition systems, they produce unstable and erroneous results, because these systems have problems working with low-resolution images. Hence we need a method to bridge the gap between low-resolution, low-quality images on one hand and facial analysis systems on the other. Our approach is to use a reconstruction-based super-resolution method, in which we generate a face-log containing images of similar frontal faces of the highest possible quality using a head pose estimation technique. Then, we apply a learning-based super-resolution algorithm to the result of the reconstruction-based part to improve the quality by another factor of two, so the total system quality factor is improved by four.
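The two-stage idea above can be sketched very loosely as follows: fuse the registered face-log frames by averaging (the reconstruction-based stage), then upsample the result (a crude stand-in for the learning-based stage). Real systems would use sub-pixel registration and a trained super-resolution model; this is only an assumption-laden illustration.

```python
import numpy as np

# Illustrative sketch, not the paper's exact pipeline: reconstruction-based
# super-resolution by averaging registered low-resolution face crops from a
# face-log, followed by a 2x nearest-neighbour upsampling as a stand-in for
# the learning-based super-resolution stage.

def average_face_log(frames):
    """Fuse registered LR frames; averaging suppresses per-frame noise."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def upsample2x(img):
    """Nearest-neighbour 2x upsampling (placeholder for a learned SR model)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def super_resolve(frames):
    """Reconstruction-based fusion, then a factor-of-two quality/size boost."""
    return upsample2x(average_face_log(frames))
```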


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
G. Merlin Linda ◽  
N.V.S. Sree Rathna Lakshmi ◽  
N. Senthil Murugan ◽  
Rajendra Prasad Mahapatra ◽  
V. Muthukumaran ◽  
...  

Purpose: The paper aims to introduce an intelligent recognition system for viewpoint variations of gait and speech. It proposes a convolutional neural network-based capsule network (CNN-CapsNet) model and outlines the performance of the system in recognition of gait and speech variations. The proposed intelligent system mainly focuses on the relative spatial hierarchies between gait features in the entities of the image, which are lost to translational invariance in sub-sampling, and on speech variations.
Design/methodology/approach: The proposed CNN-CapsNet automatically learns feature representations based on CNN and uses capsule vectors as neurons to encode all the spatial information of an image, adapting equivariantly to changes in viewpoint. The proposed study resolves the discrepancies caused by cofactors and cross-view gait recognition based on the CNN-CapsNet model.
Findings: This research work provides signal recognition, biometric-based gait recognition and sound/speech analysis. Empirical evaluations are conducted on three scenarios, namely fixed-view, cross-view and multi-view conditions. The main parameters for recognition of gait are walking speed, change of clothes, subjects walking while carrying an object and intensity of light.
Research limitations/implications: The proposed CNN-CapsNet has some limitations when detecting walking targets from surveillance videos with multimodal fusion approaches that use hardware sensor devices.
Practical implications: This research work applies to detecting walking targets from surveillance videos using multimodal fusion approaches with hardware sensor devices. It can also act as a prerequisite tool to analyze, identify, detect and verify malware practices.
Originality/value: The proposed research work performs better for the recognition of gait and speech when compared with other techniques.


Author(s):  
Prasad A. Jagdale ◽  
Sudeep D. Thepade

Nowadays, systems that hold private and confidential data are protected by biometric authentication such as fingerprint recognition, voice recognition, iris recognition and face recognition. Face recognition matches the current user's face against the faces in the security system's database, and it has one major drawback: it never works well without liveness detection. Face recognition systems can be spoofed using various traits. Spoofing means accessing a system, software or data by defeating the biometric recognition security system. These biometric systems can be easily attacked with spoofs such as people's face images, masks and videos, which are readily available from social media. The proposed work mainly focuses on detecting spoofing attacks by training the system; spoofing methods such as photos, masks or video images can be identified this way. This paper proposes a fusion technique in which different features of an image are combined so as to give the best accuracy in distinguishing between spoofed and live faces. A comparative study of machine learning classifiers is also conducted to find out which classifier gives the best accuracy.
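A minimal sketch of the feature-fusion idea follows. The two feature vectors (e.g. a texture descriptor and a colour histogram) are illustrative assumptions, not the paper's actual features, and the tiny nearest-centroid model merely stands in for the machine learning classifiers being compared.

```python
import numpy as np

# Hypothetical sketch of feature-level fusion for spoof detection: two feature
# vectors extracted from the same face image are L2-normalized and
# concatenated, then classified live/spoof by a nearest-centroid rule.

def fuse_features(texture_vec, colour_vec):
    """Normalize each feature vector, then concatenate into one descriptor."""
    t = texture_vec / (np.linalg.norm(texture_vec) + 1e-9)
    c = colour_vec / (np.linalg.norm(colour_vec) + 1e-9)
    return np.concatenate([t, c])

class NearestCentroid:
    """Tiny stand-in for the compared ML classifiers (SVM, k-NN, ...)."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                           for c in self.classes_}
        return self
    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```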


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Zhifei Wang ◽  
Zhenjiang Miao ◽  
Yanli Wan ◽  
Zhen Tang

Low resolution (LR) in face recognition (FR) surveillance applications causes a dimensional mismatch between an LR image and its high-resolution (HR) template. In this paper, a novel method called kernel coupled cross-regression (KCCR) is proposed to deal with this problem. Instead of processing directly in the original observation space, KCCR projects LR and HR face images into a unified nonlinear embedding feature space using kernel coupled mappings and graph embedding. Spectral regression is further employed to improve the generalization performance and reduce the time complexity. Meanwhile, cross-regression is developed to fully exploit the HR embedding to enrich the information of the LR space, thereby improving the recognition performance. Experiments on the FERET and CMU PIE face databases show that KCCR outperforms existing structure-based methods in terms of recognition rate as well as time complexity.
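A drastically simplified, linear stand-in for the coupled-mapping idea is sketched below: a map from LR features toward HR features is learned by ridge regression so that LR probes can be matched against HR gallery templates in a common space. KCCR itself uses kernel mappings, graph embedding and cross-regression, none of which is reproduced here.

```python
import numpy as np

# Simplified illustration of coupled mappings (NOT the KCCR algorithm):
# learn a linear map W that sends LR feature vectors toward their HR
# counterparts, so matching can happen in the HR feature space.

def learn_mapping(X_lr, X_hr, lam=1e-3):
    """Ridge regression: W = argmin ||X_lr W - X_hr||^2 + lam ||W||^2."""
    d = X_lr.shape[1]
    return np.linalg.solve(X_lr.T @ X_lr + lam * np.eye(d), X_lr.T @ X_hr)

def match(lr_probe, W, hr_gallery):
    """Map an LR probe into HR space and return the nearest gallery index."""
    mapped = lr_probe @ W
    dists = np.linalg.norm(hr_gallery - mapped, axis=1)
    return int(np.argmin(dists))
```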


2017 ◽  
Vol 17 (5) ◽  
pp. 995-1009
Author(s):  
Pramesti Festa Perdananti ◽  
Bambang Hidayat ◽  
Suci Aulia

Author(s):  
Shan Xue ◽  
Hong Zhu

In video surveillance, the captured face images usually suffer from low resolution (LR); moreover, not all probe images have mates in the gallery, under the premise that only a single frontal high-resolution (HR) face image per subject is available. To address this problem, a novel face recognition framework called recursive label propagation based on statistical classification (ReLPBSC) is proposed in this paper. Firstly, we employ VGG to extract robust discriminative feature vectors to represent each face. Then we select the corresponding LR face in the probe set for each HR gallery face by similarity. Based on the picked HR–LR pairs, ReLPBSC is implemented for recognition. The main contributions of the proposed approach are as follows: (i) inspired by the substantial achievements of deep learning methods, VGG is adopted to achieve discriminative representations of LR faces and avoid super-resolution steps; (ii) the acceptance and rejection threshold parameters, which are not fixed in face recognition, are obtained adaptively by ReLPBSC; (iii) unreliable subjects never enrolled in the gallery are rejected automatically by the designed method. Experimental results at [Formula: see text] pixels resolution show that the proposed method achieves an 86.64% recall rate while keeping 100% precision.
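The matching-with-rejection step can be sketched as follows: deep feature vectors (e.g. VGG embeddings, assumed precomputed) are compared by cosine similarity, and a probe whose best match falls below a threshold is rejected as not enrolled. The fixed threshold here is an illustrative assumption; ReLPBSC obtains it adaptively.

```python
import math

# Sketch of open-set matching over precomputed deep features. The threshold
# value is a placeholder; the paper's method learns it adaptively.

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def match_probe(probe_feat, gallery, threshold=0.5):
    """gallery: dict subject_id -> feature vector. Returns id, or None to reject."""
    best_id, best_sim = None, -1.0
    for sid, feat in gallery.items():
        sim = cosine(probe_feat, feat)
        if sim > best_sim:
            best_id, best_sim = sid, sim
    return best_id if best_sim >= threshold else None
```

Rejecting below-threshold probes is what lets the system keep 100% precision at the cost of some recall.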


Author(s):  
Widodo Budiharto

Variation in illumination is one of the main challenging problems for face recognition. It has been proven that, in face recognition, differences caused by illumination variations are more significant than differences between individuals. Recognizing faces reliably across changes in pose and illumination using PCA has proved to be a much harder problem because the eigenfaces method compares pixel intensities. To solve this problem, this research proposes an online face recognition system using improved PCA for a service robot in an indoor environment based on stereo vision. Tested images are improved by generating random values to vary the intensity of the face images. A program for online training is also developed, where the tested images are captured in real time from the camera. Varying the illumination in the tested images increases the accuracy on the ITS face database to 95.5%, higher than the AT&T face database's 95.4% and the Indian face database's 72%. The results from this experiment will be further evaluated and improved in the future.
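The illumination-variation idea can be sketched as a simple augmentation: each face image is perturbed with a random global brightness gain and offset so the recognizer sees a range of lighting conditions. The gain/offset ranges below are assumptions, not the paper's values.

```python
import random

# Hypothetical sketch of illumination augmentation: perturb pixel intensities
# with a random global gain and offset, clamped to the valid [0, 255] range.

def vary_illumination(image, rng=random):
    """image: 2D list of grayscale pixel values in [0, 255]."""
    gain = rng.uniform(0.7, 1.3)      # global brightness scale (assumed range)
    bias = rng.uniform(-20.0, 20.0)   # global brightness offset (assumed range)
    return [[min(255.0, max(0.0, gain * p + bias)) for p in row]
            for row in image]

def augment(image, n_variants=5, seed=0):
    """Generate several randomly re-lit copies of one face image."""
    rng = random.Random(seed)
    return [vary_illumination(image, rng) for _ in range(n_variants)]
```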


2020 ◽  
Author(s):  
Bilal Salih Abed Alhayani ◽  
Milind Rane

A wide variety of systems require reliable person recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that only a legitimate user, and no one else, accesses the rendered services. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. The face can be used as a biometric for person verification. A face is a complex multidimensional structure and needs good computing techniques for recognition. We treat face recognition as a two-dimensional recognition problem. The well-known technique of Principal Component Analysis (PCA) is used for face recognition. Face images are projected onto a face space that best encodes the variation among known face images. The face space is defined by eigenfaces, which are eigenvectors of the set of faces and may not correspond to general facial features such as eyes, nose, and lips. The system performs by projecting a pre-extracted face image onto the face space that represents significant variations among known face images. The dimensionality-reducing property of PCA accounts for the face space being smaller than the training set of faces. A multiresolution-feature-based pattern recognition system is used for face recognition based on the combination of Radon and wavelet transforms, since the Radon transform is invariant to rotation and the wavelet transform provides multiple resolutions. This technique is robust for face recognition. The technique computes Radon projections in different orientations and captures the directional features of face images. Further, the wavelet transform applied in Radon space provides multiresolution features of the facial images. Being a line integral, the Radon transform enhances the low-frequency components that are useful in face recognition.
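The projection-onto-face-space step described above can be sketched with a minimal eigenfaces implementation: PCA over vectorized face images via SVD, then nearest-neighbour matching in the reduced space. This illustrates only the standard eigenfaces part, not the Radon/wavelet extension.

```python
import numpy as np

# Minimal eigenfaces sketch: PCA on vectorized faces via SVD, then
# nearest-neighbour recognition in the projected face space.

def fit_eigenfaces(faces, n_components):
    """faces: (n_images, n_pixels) array. Returns mean face and eigenfaces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are eigenvectors of the covariance matrix (the eigenfaces).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, eigenfaces):
    """Weights of a face in the face space (its PCA coordinates)."""
    return eigenfaces @ (face - mean)

def recognize(probe, gallery_weights, mean, eigenfaces):
    """Nearest neighbour in face space; returns the best gallery index."""
    w = project(probe, mean, eigenfaces)
    dists = [np.linalg.norm(w - g) for g in gallery_weights]
    return int(np.argmin(dists))
```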

