MagNet: Detecting Digital Presentation Attacks on Face Recognition

2021 ◽  
Vol 4 ◽  
Author(s):  
Akshay Agarwal ◽  
Richa Singh ◽  
Mayank Vatsa ◽  
Afzel Noore

Presentation attacks on face recognition systems are classified into two categories: physical and digital. While much research has focused on physical attacks such as photo, replay, and mask attacks, digital attacks such as morphing have received limited attention. With the advancements in deep learning and computer vision algorithms, several easy-to-use applications are available in which, with a few taps/clicks, an image can be easily and seamlessly altered. Moreover, generating synthetic images or modifying images/videos (e.g. creating deepfakes) is relatively easy and highly effective due to the tremendous improvement in generative machine learning models. Many of these techniques can be used to attack face recognition systems. To address this potential security risk, in this research, we present a novel algorithm for digital presentation attack detection, termed MagNet, using a “Weighted Local Magnitude Pattern” (WLMP) feature descriptor. We also present a database, termed IDAgender, which consists of three different subsets of swapping/morphing and neural face transformation. In contrast to existing research, which utilizes sophisticated machine learning networks for attack generation, the databases in this research are prepared using social media platforms that are readily available to everyone, with or without malicious intent. Experiments on the proposed database, the FaceForensic database, GAN-generated images, and real-world images/videos show the promising performance of the proposed algorithm. Through extensive experiments, it is observed that the proposed algorithm not only yields lower error rates, but is also computationally efficient.
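The abstract names the Weighted Local Magnitude Pattern (WLMP) descriptor without specifying its formulation. The sketch below is a hypothetical LBP-style code computed over gradient magnitudes and pooled into a histogram; the neighbourhood, thresholding rule, and absence of any weighting scheme are all assumptions, not the authors' exact WLMP.

```python
import numpy as np
from scipy import ndimage

def local_magnitude_pattern(gray, bins=256):
    """Hypothetical LBP-style code over gradient magnitudes (not the exact WLMP).

    Each pixel is encoded by comparing the gradient magnitudes of its eight
    neighbours against the centre pixel; the 8-bit codes are pooled into a
    global histogram that can feed a classifier such as an SVM.
    """
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)

    codes = np.zeros(mag.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),   # 8 neighbours, clockwise
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(mag, dy, axis=0), dx, axis=1)
        codes |= ((shifted >= mag).astype(np.uint8) << bit)

    hist, _ = np.histogram(codes, bins=bins, range=(0, 256), density=True)
    return hist
```

A linear classifier (e.g. an SVM) trained on such histograms would be one plausible stand-in for the detection stage the abstract describes.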

2018 ◽  
Vol 1 (3) ◽  
Author(s):  
Gustavo Botelho de Souza ◽  
Joao Paulo Papa ◽  
Aparecido Nilceu Marana

Biometrics emerged as a robust solution for security systems. However, criminals are nowadays developing techniques to accurately simulate the biometric traits of valid users, a process known as a spoofing attack, in order to circumvent biometric applications. Face is among the main biometric characteristics, being extremely convenient for users given its non-intrusive capture by means of digital cameras. However, face recognition systems are the ones that suffer most from spoofing attacks, since such cameras, in general, can easily be fooled with common printed photographs. In this sense, countermeasure techniques should be developed and integrated into traditional face recognition systems in order to prevent such frauds. Among the main neural networks for face spoofing detection is the discriminative Restricted Boltzmann Machine (RBM), which, besides its efficiency, achieves great results in attack detection by learning the distributions of real and fake facial images. However, it is known that deeper neural networks achieve better accuracy in many tasks. In this context, we propose a novel model called Deep Discriminative Restricted Boltzmann Machine (DDRBM) applied to face spoofing detection. Results on the NUAA dataset show a significant improvement in performance when compared to the accuracy of a traditional discriminative RBM on attack detection.
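The DDRBM architecture itself is not detailed in this abstract. As a rough, related baseline, the idea of RBM-learned representations feeding a discriminative decision can be approximated in scikit-learn with a generative Bernoulli RBM in front of a logistic regression; the data shapes and hyperparameters below are placeholders, and this is not the authors' discriminative or deep RBM.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Placeholder data: flattened face crops scaled to [0, 1]; 1 = real, 0 = spoof.
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))
y = rng.integers(0, 2, 200)

# The RBM learns a hidden representation of the pixel distribution; the
# logistic regression then separates real from fake images in that space.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```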


2020 ◽  
Author(s):  
Sushma Venkatesh ◽  
Raghavendra Ramachandra ◽  
Kiran Raja ◽ 
Luuk J. Spreeuwers ◽  
Raymond Veldhuis ◽  
...  

Along with the deployment of the Face Recognition Systems (FRS), concerns were raised related to the vulnerability of those systems towards various attacks including morphed attacks. The morphed face attack involves two different face images in order to obtain via a morphing process a resulting attack image, which is sufficiently similar to both contributing data subjects. The obtained morphed image can successfully be verified against both subjects visually (by a human expert) and by a commercial FRS. The face morphing attack poses a severe security risk to the e-passport issuance process and to applications like border control, unless such attacks are detected and mitigated. In this work, we propose a new method to reliably detect a morphed face attack using a newly designed denoising framework. To this end, we design and introduce a new deep Multi-scale Context Aggregation Network (MS-CAN) to obtain denoised images, which is subsequently used to determine if an image is morphed or not. Extensive experiments are carried out on three different morphed face image datasets. The Morphing Attack Detection (MAD) performance of the proposed method is also benchmarked against 14 different state-of-the-art techniques using the ISO-IEC 30107-3 evaluation metrics. Based on the obtained quantitative results, the proposed method has indicated the best performance on all three datasets and also on cross-dataset experiments.
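The abstract describes the pipeline only at a high level: denoise the image with MS-CAN, then decide from the result whether it is morphed. The sketch below illustrates one plausible reading as a residual-based decision, with a median filter standing in for the MS-CAN denoiser and an uncalibrated placeholder threshold, since neither is given here.

```python
import numpy as np
from scipy.ndimage import median_filter

def morph_score(image, denoise=lambda x: median_filter(x, size=3)):
    """Residual energy between an image and its denoised version.

    The paper's MS-CAN denoiser is replaced here by a median filter; the
    working assumption is that morphing leaves noise-like traces, so a
    morphed image deviates more from its denoised counterpart.
    """
    img = image.astype(float)
    residual = img - denoise(img)
    return float(np.mean(residual ** 2))

# Placeholder threshold; in practice it would be calibrated on a labelled
# development set of bona fide and morphed images.
THRESHOLD = 5.0

def is_morphed(image):
    return morph_score(image) > THRESHOLD
```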


2019 ◽  
Vol 16 (3) ◽  
pp. 387-403
Author(s):  
Milos Pavlovic ◽  
Branka Stojanovic ◽  
Ranko Petrovic ◽  
Snezana Puzovic ◽  
Srdjan Stankovic

The main problem for modern visible-light face recognition has been accurate identification under variable environmental conditions. The use of thermal infrared facial images in face recognition systems can address problems related to uncontrolled environmental conditions, especially those caused by illumination limitations. This paper compares the results of using visible-light and thermal infrared imagery for face recognition based on the HOG feature descriptor. In particular, the paper suggests an optimal HOG cell-to-image-size ratio in order to improve recognition accuracy and reduce computational complexity. Performance statistics are presented on facial images with different facial expressions. The obtained results support the conclusion that recognition with thermal infrared images is more robust and that sensor fusion should be used to improve recognition accuracy.
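The central tuning knob in this abstract is the ratio of HOG cell size to image size. The snippet below sketches how such a ratio could be swept with scikit-image's hog implementation; the image dimensions and candidate ratios are placeholders, since the paper's exact values are not stated here.

```python
import numpy as np
from skimage.feature import hog

face = np.random.rand(128, 128)  # placeholder face crop (thermal or visible)

# Smaller cells yield longer, costlier descriptors; larger cells yield
# coarser but cheaper ones, hence the accuracy/complexity trade-off.
for ratio in (1 / 16, 1 / 8, 1 / 4):
    cell = int(face.shape[0] * ratio)
    descriptor = hog(face,
                     orientations=9,
                     pixels_per_cell=(cell, cell),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys")
    print(f"cell {cell}x{cell} px -> descriptor length {descriptor.size}")
```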


Author(s):  
Roberto Tronci ◽  
Daniele Muntoni ◽  
Gianluca Fadda ◽  
Maurizio Pili ◽  
Nicola Sirena ◽  
...  


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3988
Author(s):  
Marcin Kowalski

Face recognition systems face real challenges from various presentation attacks. New, more sophisticated presentation attack methods are becoming harder to detect with traditional face recognition systems. Thermal infrared imaging offers specific physical properties that may boost presentation attack detection capabilities. The aim of this paper is to present the outcomes of investigations on the detection of various face presentation attacks in thermal infrared under various conditions, including the thermal heating of masks and various states of the subjects. A thorough analysis of presentation attacks using printed and displayed facial photographs, 3D-printed masks, and custom flexible 3D-latex and silicone masks is provided. The paper presents an intensity analysis of the thermal energy distribution at specific facial landmarks during long-lasting experiments. The impact of thermalization, as well as of changes in the subject’s state due to physical effort, on presentation attack detection is investigated. A new thermal face spoofing dataset is introduced. Finally, a two-step deep learning-based method for the detection of presentation attacks is presented. Validation results of a set of deep learning methods across various presentation attack instruments are presented.
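The two-step detector is described only at a high level. The PyTorch sketch below assumes one plausible reading, in which step one crops the thermal face region with some detector and step two classifies the crop as bona fide or attack with a small CNN; the layer sizes, input resolution, and the crop_face routine are all hypothetical placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ThermalPADClassifier(nn.Module):
    """Step two: small CNN deciding bona fide vs. attack on a thermal face crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, x):              # x: (N, 1, 64, 64) thermal crops
        return self.head(self.features(x))

def detect(frame, crop_face, model):
    """Step one: locate and crop the face (crop_face is a placeholder detector);
    step two: classify the crop with the CNN above."""
    crop = crop_face(frame)            # expected shape (1, 1, 64, 64)
    with torch.no_grad():
        logits = model(crop)
    return int(logits.argmax(dim=1))   # 0 = bona fide, 1 = attack
```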

