Face Verification: Recently Published Documents

Total documents: 636 (last five years: 120)
H-index: 39 (last five years: 4)

2022, Vol. 25 (1), pp. 1-28. Author(s): Le Qin, Fei Peng, Min Long, Raghavendra Ramachandra, Christoph Busch

As face presentation attacks (PAs) are realistic threats to unattended face verification systems, face presentation attack detection (PAD) has been intensively investigated in recent years, and advances in face PAD have significantly reduced the success rate of such attacks. This article presents an empirical study of a novel and effective face impostor PA. In the proposed PA, a facial artifact is created from the most vulnerable facial components, which are optimally selected through a vulnerability analysis of different facial components against impostor PAs. An attacker can launch a face PA by presenting the facial artifact on his or her own real face. Using a collected PA database containing various types of artifacts and presentation attack instruments (PAIs), the experimental results and analysis show that the proposed PA poses a more serious threat to face verification and PAD systems than print, replay, and mask PAs. Moreover, the generalization ability of the proposed PA and its effectiveness against commercial systems are also investigated by evaluating unknown face verification systems and real-world PAD systems. This work provides a new paradigm for the study of face PAs.
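The component-selection step described above can be sketched as ranking facial components by a measured impostor attack success rate and keeping the top-k for artifact construction. This is a minimal illustration, not the paper's method: the component names, the rates, and the `most_vulnerable` helper are all hypothetical, and IAPMR (Impostor Attack Presentation Match Rate, from ISO/IEC 30107-3) stands in for whatever vulnerability measure the authors used.

```python
# Hypothetical per-component impostor attack success rates (IAPMR),
# e.g. measured by presenting each component's artifact in isolation.
# Values are illustrative only, not from the paper.
iapmr = {
    "eyes": 0.42,
    "eyebrows": 0.31,
    "nose": 0.18,
    "mouth": 0.09,
    "cheeks": 0.05,
}

def most_vulnerable(rates, k=2):
    """Return the k components with the highest attack success rate."""
    return sorted(rates, key=rates.get, reverse=True)[:k]

print(most_vulnerable(iapmr))  # ['eyes', 'eyebrows']
```

Under this sketch, the artifact would then be fabricated from the selected components only, so the attacker's remaining real face handles liveness cues while the artifact carries the matching signal.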


2021. Author(s): Dinh Tan Nguyen, Cao Truong Tran, Trung Thanh Nguyen, Cao Bao Hoang, Van Phu Luu, ...

Author(s): Yu-Sheng Lin, Zhe-Yu Liu, Yu-An Chen, Yu-Siang Wang, Ya-Liang Chang, ...

We study explainable AI (XAI) for face recognition, particularly face verification. Face verification has become a crucial task in recent years and has been deployed in many applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With increasing amounts of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond this strong performance, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model focuses its attention, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, providing novel and desirable model interpretability for face verification while preserving accuracy when plugged into existing face recognition models.
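The core idea of the metric described above, a global score formed as an attention-weighted sum of local cosine similarities over a spatial feature grid, can be sketched as follows. This is an assumption-laden illustration, not the authors' implementation: the grid shape, the `xcos_score` function, and the uniform attention map are all hypothetical stand-ins (in the paper the attention weights come from a learned module).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def xcos_score(feat_a, feat_b, attention):
    """Attention-weighted sum of patch-wise cosine similarities.

    feat_a, feat_b: (H, W, C) spatial feature maps of the two faces.
    attention:      (H, W) non-negative weights (normalized internally).
    Returns the scalar score and the (H, W) local similarity map,
    which is what makes the decision inspectable.
    """
    H, W, _ = feat_a.shape
    local_sim = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            local_sim[i, j] = cosine(feat_a[i, j], feat_b[i, j])
    weights = attention / attention.sum()
    return float((weights * local_sim).sum()), local_sim

# Usage: identical inputs give local similarity 1 everywhere, score 1.
rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 32))
score, sim_map = xcos_score(feat, feat, np.ones((7, 7)))
```

The similarity map shows *which* patches match, and the attention map shows *how much* each patch mattered, which is exactly the two-part explanation the abstract describes.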


Author(s): Ali Murtaza, Qamar Sarfraz, Syed Ahmed Afzal, Muhammad Ibrahim Syed, Khurram Khan, ...

Author(s): Kazuya Kakizaki, Taiki Miyagawa, Inderjeet Singh, Jun Sakuma

Author(s): André Sobiecki, Julius van Dijk, Hidde Folkertsma, Alexandru Telea
