presentation attack detection
Recently Published Documents

TOTAL DOCUMENTS: 286 (five years: 200)
H-INDEX: 16 (five years: 7)

2022, Vol 25 (1), pp. 1-28
Author(s): Le Qin, Fei Peng, Min Long, Raghavendra Ramachandra, Christoph Busch

As face presentation attacks (PAs) are realistic threats to unattended face verification systems, face presentation attack detection (PAD) has been intensively investigated in recent years, and advances in face PAD have significantly reduced the success rate of such attacks. This article presents an empirical study of a novel and effective face impostor PA. In the proposed PA, a facial artifact is created from the most vulnerable facial components, which are selected through a vulnerability analysis of different facial components against impostor PAs. An attacker launches the PA by presenting the facial artifact on his or her own real face. Using a collected PA database containing various types of artifacts and presentation attack instruments (PAIs), the experimental results and analysis show that the proposed PA poses a more serious threat to face verification and PAD systems than print, replay, and mask PAs. Moreover, the generalization ability of the proposed PA and its threat to commercial systems are investigated by evaluating unknown face verification systems and real-world PAD systems. The work provides a new paradigm for the study of face PAs.
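The component-selection step described above can be illustrated with a minimal sketch. The function below is hypothetical (the abstract does not specify the selection procedure or any component names or rates); it simply assumes each facial component has been assigned an empirically measured impostor match rate, and ranks components by that rate to pick the most vulnerable ones for artifact creation.

```python
def select_components(attack_scores, k=2):
    """Rank facial components by impostor-attack vulnerability and
    return the k most vulnerable ones (hypothetical sketch).

    attack_scores: dict mapping a component name to the impostor match
    rate observed when only that component is transferred onto the
    attacker's own face. All names and rates here are illustrative.
    """
    ranked = sorted(attack_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Illustrative rates (not from the paper):
scores = {"eyes": 0.62, "nose": 0.35, "mouth": 0.28, "eyebrows": 0.41}
print(select_components(scores, k=2))  # → ['eyes', 'eyebrows']
```

An artifact built from the top-ranked components would then be presented on the attacker's real face, as the abstract describes.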


Sensors, 2021, Vol 21 (22), pp. 7728
Author(s): Kacper Kubicki, Paweł Kapusta, Krzysztof Ślot

This paper is concerned with the detection of presentation attacks against unsupervised remote biometric speaker verification, using a well-known challenge-response scheme. We propose a novel approach to convolutional phoneme classifier training that ensures high phoneme recognition accuracy even for significantly simplified network architectures, thus enabling efficient utterance verification on resource-limited hardware such as mobile phones or embedded devices. We consider deep convolutional neural networks operating on windows of speech Mel-spectrograms as a means for phoneme recognition, and we show that the performance of highly simplified neural architectures can be boosted by modifying the principle underlying training set construction. Instead of generating training examples by slicing spectrograms with a sliding window, as is commonly done, we propose to maximize the consistency of the phoneme-related spectrogram structures to be learned by choosing only spectrogram chunks from the central regions of phoneme articulation intervals. This approach makes better use of the limited capacity of the simplified networks, as it significantly reduces within-class data scatter. We show that neural architectures comprising only tens of thousands of parameters can solve the 39-phoneme recognition task with accuracy of up to 76% (we use the English-language TIMIT database for experimental verification of the method). We also show that ensembling simple classifiers with a basic bagging method boosts recognition accuracy by another 2-3%, yielding phoneme error rates around 23%, which approaches the accuracy of state-of-the-art deep neural architectures that are one to two orders of magnitude more complex than the proposed solution. This, in turn, enables reliable presentation attack detection based on challenges only a few syllables long, on highly resource-limited computing hardware.
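The training-set construction idea above can be sketched in a few lines. This is an assumption-laden illustration, not the authors' code: it assumes frame-level phoneme alignments (as TIMIT provides) and a chunk width of 9 frames (an arbitrary illustrative value), and it cuts one fixed-width Mel-spectrogram chunk centered on each phoneme's articulation interval instead of sliding a window over the whole utterance.

```python
import numpy as np

def center_chunks(mel_spec, phoneme_intervals, win=9):
    """Extract fixed-width spectrogram chunks centered on each phoneme's
    articulation interval, instead of sliding-window slicing (sketch).

    mel_spec: (n_mels, n_frames) Mel-spectrogram of one utterance
    phoneme_intervals: list of (start_frame, end_frame, label) alignments
    win: chunk width in frames (illustrative value, not from the paper)
    """
    half = win // 2
    chunks, labels = [], []
    for start, end, label in phoneme_intervals:
        center = (start + end) // 2
        lo, hi = center - half, center + half + 1
        if lo < 0 or hi > mel_spec.shape[1]:
            continue  # skip phonemes too close to the utterance edges
        chunks.append(mel_spec[:, lo:hi])
        labels.append(label)
    return np.stack(chunks), labels
```

Because every chunk is anchored at the phoneme center, the examples for a given class share a consistent spectrogram structure, which is the within-class scatter reduction the abstract attributes the capacity savings to.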


2021, pp. 103329
Author(s): Gunjan Gautam, Aditya Raj, Susanta Mukhopadhyay

2021
Author(s): Jiong Wang, Zhou Zhao, Weike Jin, Xinyu Duan, Zhen Lei, ...

2021
Author(s): Ajian Liu, Chenxu Zhao, Zitong Yu, Anyang Su, Xing Liu, ...

2021
Author(s): Shen Chen, Taiping Yao, Keyue Zhang, Yang Chen, Ke Sun, ...
