Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information

Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 410 ◽  
Author(s):  
Dat Nguyen ◽  
Tuyen Pham ◽  
Min Lee ◽  
Kang Park

Face-based biometric recognition systems are widely employed in places such as airports, immigration offices, and companies, and in applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who may bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection have utilized only spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection method that uses both spatial and temporal information, combining deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) with handcrafted features. Through experiments on two public datasets, we demonstrate that temporal information is sufficient for detecting attacks using face images. In addition, we show that the handcrafted image features efficiently enhance the detection performance of the deep features, and that the proposed method outperforms previous methods.
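A minimal sketch of the kind of score combination this abstract describes, fusing a deep (CNN-RNN) score with a handcrafted-feature score. The weighted-sum rule, the weight `alpha`, and the threshold are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical score-level fusion of a deep CNN-RNN branch and a
# handcrafted-feature branch; alpha and the threshold are assumptions.

def fuse_scores(deep_score, handcrafted_score, alpha=0.7):
    """Weighted-sum fusion; a higher fused score suggests a real face."""
    return alpha * deep_score + (1.0 - alpha) * handcrafted_score

def classify(fused_score, threshold=0.5):
    """Binary decision: real (True) vs. presentation attack (False)."""
    return fused_score >= threshold

# Example: the deep branch is confident, the handcrafted branch less so.
fused = fuse_scores(0.9, 0.6)   # about 0.81
decision = classify(fused)      # real face under the assumed threshold
```

A weighted sum is the simplest fusion rule consistent with "handcrafted features enhance the deep features"; any monotone combiner would serve the same illustrative purpose.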

Sensors ◽  
2018 ◽  
Vol 18 (5) ◽  
pp. 1315 ◽  
Author(s):  
Dat Nguyen ◽  
Na Baek ◽  
Tuyen Pham ◽  
Kang Park

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2601 ◽  
Author(s):  
Dat Nguyen ◽  
Tuyen Pham ◽  
Young Lee ◽  
Kang Park

Iris recognition systems have been used in high-security applications because of their high recognition rate and the distinctiveness of iris patterns. However, as recent studies have reported, an iris recognition system can be fooled by artificial iris patterns, reducing its security level. The accuracy of previous presentation attack detection research is limited because it used only features extracted from the global iris region image. To overcome this problem, we propose a new presentation attack detection method for iris recognition that combines features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results from each kind of image feature are fused using two fusion methods, feature-level and score-level, to enhance the detection ability of each kind of image feature. Through extensive experiments on two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of the proposed method, achieving smaller detection errors than previous studies.
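The two fusion strategies named in the abstract can be sketched as follows. The toy vectors, the equal weighting, and the classifier stand-ins are assumptions for illustration only:

```python
import numpy as np

# Feature-level fusion: concatenate local- and global-region feature
# vectors before a single classifier. Score-level fusion: combine the
# per-branch scores (equal weights assumed here).

def feature_level_fusion(local_feat, global_feat):
    """Concatenate local and global iris-region features into one vector."""
    return np.concatenate([local_feat, global_feat])

def score_level_fusion(local_score, global_score):
    """Average the two branch scores (an illustrative equal-weight rule)."""
    return 0.5 * (local_score + global_score)

local_f = np.array([0.2, 0.8])          # toy local-region feature
global_f = np.array([0.5, 0.1, 0.4])    # toy global-region feature
fused_feat = feature_level_fusion(local_f, global_f)   # length-5 vector
fused_score = score_level_fusion(0.3, 0.7)             # 0.5
```

Feature-level fusion lets one classifier learn cross-region interactions, while score-level fusion keeps the branches independent; the paper reports results for both.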


2020 ◽  
Vol 39 (3) ◽  
pp. 3769-3781
Author(s):  
Zhisong Han ◽  
Yaling Liang ◽  
Zengqun Chen ◽  
Zhiheng Zhou

Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides both spatial and temporal information. However, most existing methods do not combine these two types of information well, and they ignore the fact that the two are of different importance in most cases. To address these issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal information extraction block is introduced for video-based person re-identification. In the inference stage, the distance between two videos is measured by the weighted sum of the spatial distance and the temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID, and DukeMTMC-VideoReID, showing that the proposed approach outperforms existing methods in video-based person re-ID.
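The joint distance used at inference can be sketched as below. Euclidean distance and the weight `lam` are illustrative assumptions; the paper only specifies that the video distance is a weighted sum of a spatial and a temporal distance:

```python
import numpy as np

# Sketch of the joint distance: a weighted sum of the spatial-stream
# distance and the temporal-stream distance between two videos.

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def joint_distance(spat_a, spat_b, temp_a, temp_b, lam=0.5):
    """d = lam * d_spatial + (1 - lam) * d_temporal; smaller means more similar."""
    return lam * euclidean(spat_a, spat_b) + (1.0 - lam) * euclidean(temp_a, temp_b)

d = joint_distance(np.array([0.0, 3.0]), np.array([4.0, 0.0]),
                   np.array([1.0]), np.array([0.0]))
# spatial distance 5.0, temporal distance 1.0, joint distance 3.0
```

Because the abstract stresses that the two cues matter to different degrees, `lam` would in practice be tuned per dataset rather than fixed at 0.5.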


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1810
Author(s):  
Dat Tien Nguyen ◽  
Tuyen Danh Pham ◽  
Ganbayar Batchuluun ◽  
Kyoung Jun Noh ◽  
Kang Ryoung Park

Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method remains vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing the recognition task, have been developed. However, the performance of PAD systems is limited and biased owing to the lack of presentation attack images for training. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images from a few captured images. As a result, the proposed method saves time in collecting presentation attack samples for training PAD systems and can enhance their performance. Our study is the first attempt to generate presentation attack face images for PAD systems based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments on two public datasets (CASIA and Replay-mobile), we show that the generated face images capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
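The CycleGAN idea the paper builds on can be illustrated with its cycle-consistency term: a generator G maps real-domain images to the attack domain, F maps back, and the L1 distance between an image and its round trip is penalized. The toy linear "generators" below are stand-ins for trained networks, used only to show the loss:

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators (not trained networks):
def G(x):          # real domain -> presentation-attack domain
    return x * 2.0

def F(y):          # presentation-attack domain -> real domain
    return y / 2.0

def cycle_consistency_loss(x):
    """Mean absolute error between x and its round trip F(G(x))."""
    return float(np.mean(np.abs(x - F(G(x)))))

x = np.array([0.1, 0.5, 0.9])      # toy "image" as a flat array
loss = cycle_consistency_loss(x)   # exact inverse pair, so the loss is 0.0
```

In the real framework this term is added to the adversarial losses of both generators; here the exact-inverse pair makes the loss vanish, which is the behavior training pushes toward.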


2021 ◽  
Author(s):  
Akhilesh Verma ◽  
Anshadha Gupta ◽  
Mohammad Akbar ◽  
Arun Kumar Yadav ◽  
Divakar Yadav

Abstract The fingerprint presentation attack remains a major challenge for biometric systems owing to their increased use worldwide. Past research on Fingerprint Presentation Attack Detection (FPAD) for user authentication has struggled to deliver reliable authentication because it placed too little focus on reducing the error rate. In this paper, we propose an algorithm for FPAD based on referential image quality (RIQ) metrics and minutiae count, using a neural network, k-NN, and an SVM. We evaluate and validate the error-rate reduction with different machine learning models on a public dataset, the LivDet 2015 CrossMatch dataset, achieving an accuracy of 88% with a neural network, 88.6% with k-NN, and 88.8% with an SVM. In addition, the average classification error (ACE) score is 0.1197 for the ANN, 0.1138 for k-NN, and 0.1117 for the SVM. Thus, the results show that reasonable accuracy with a low ACE score was achieved relative to other state-of-the-art methods.
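The ACE figures quoted above follow the LivDet convention: the average of the error rate on live samples (live fingerprints called fake) and the error rate on fake samples (fakes called live). A small sketch of that computation, with illustrative counts:

```python
# ACE as used in LivDet-style evaluations:
# ACE = (FerrLive + FerrFake) / 2, each term an error fraction.

def ace(live_errors, n_live, fake_errors, n_fake):
    """Average classification error from per-class error counts."""
    ferr_live = live_errors / n_live
    ferr_fake = fake_errors / n_fake
    return 0.5 * (ferr_live + ferr_fake)

# e.g. 12 of 100 live samples and 10 of 100 fakes misclassified:
score = ace(12, 100, 10, 100)   # 0.5 * (0.12 + 0.10) = 0.11
```

Averaging the two per-class rates keeps ACE meaningful even when the live and fake sets differ in size, unlike plain accuracy.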


Author(s):  
Feng Shan ◽  
Hui Sun ◽  
Xiaoyun Tang ◽  
Weiwei Shi ◽  
...  

Digital instruments are widely used in industrial control, traffic, equipment displays, and other fields because their test data are intuitive to read. Aiming at the character recognition scenario of a digital display Vernier caliper, this paper proposes an intelligent instrument recognition system based on a multi-step convolutional neural network (CNN). First, image samples are collected from the Vernier caliper test site, and their resolution and size are normalized. Then a CNN model is established to train on the image samples and extract features. The digital display regions in the image samples are extracted according to the image features, and the numbers on the Vernier caliper are cut out. Finally, an MNIST-style dataset of Vernier caliper digits is established, and the CNN model is used to recognize it. The test results show that the overall recognition rate of the proposed CNN model exceeds 95%, with good robustness and generalization ability.
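The size-normalization step in the pipeline above can be sketched as a nearest-neighbour resize to a fixed input resolution. The 28x28 target (the MNIST digit size) and the resampling method are assumptions; the paper does not state either:

```python
import numpy as np

# Nearest-neighbour resize of a 2-D grayscale image to a fixed size,
# as a stand-in for the resolution/size normalization step.

def normalize_size(img, out_h=28, out_w=28):
    """Resample a 2-D array to (out_h, out_w) by nearest-neighbour indexing."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

sample = np.arange(56 * 70).reshape(56, 70).astype(np.float32)
normalized = normalize_size(sample)        # shape (28, 28)
```

Any interpolation scheme would do here; the point is that every cropped digit reaches the CNN at one fixed resolution.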


Sensors ◽  
2016 ◽  
Vol 16 (12) ◽  
pp. 2160 ◽  
Author(s):  
Husan Vokhidov ◽  
Hyung Hong ◽  
Jin Kang ◽  
Toan Hoang ◽  
Kang Park

Author(s):  
Anjith George ◽  
Zohreh Mostaani ◽  
David Geissenbuhler ◽  
Olegs Nikisins ◽  
Andre Anjos ◽  
...  
