xCos: An Explainable Cosine Metric for Face Verification Task

Author(s):  
Yu-Sheng Lin ◽  
Zhe-Yu Liu ◽  
Yu-An Chen ◽  
Yu-Siang Wang ◽  
Ya-Liang Chang ◽  
...  

We study explainable AI (XAI) for the face recognition task, in particular face verification. Face verification has become a crucial task in recent years and has been deployed in many applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), that comes with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model pays attention, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, providing novel and desirable model interpretability for face verification while preserving accuracy when plugged into existing face recognition models.
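To make the idea concrete, the sketch below computes a patch-wise cosine similarity map between two local feature grids and aggregates it with attention weights into a single score, in the spirit of the xCos formulation described above; the grid size, the uniform attention, and all variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def xcos_sketch(feat_a, feat_b, attention):
    """Illustrative weighted local cosine similarity (not the authors' code).

    feat_a, feat_b: (H, W, C) local feature grids from a face backbone.
    attention:      (H, W) non-negative weights summing to 1.
    Returns the scalar score plus the per-patch similarity map.
    """
    # Patch-wise cosine similarity between the two feature grids.
    num = (feat_a * feat_b).sum(axis=-1)
    den = np.linalg.norm(feat_a, axis=-1) * np.linalg.norm(feat_b, axis=-1) + 1e-8
    local_cos = num / den                      # (H, W) similarity map

    # Attention-weighted aggregation into a single verification score.
    score = (attention * local_cos).sum()
    return score, local_cos

# Toy usage with random features and uniform attention weights.
H, W, C = 7, 7, 32
fa, fb = np.random.randn(H, W, C), np.random.randn(H, W, C)
att = np.full((H, W), 1.0 / (H * W))
score, sim_map = xcos_sketch(fa, fb, att)
```

The per-patch map (`sim_map`) and the attention weights are exactly the two quantities that can be visualized to explain which facial regions drove the decision.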

Algorithms ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 268
Author(s):  
Huoyou Li ◽  
Jianshiun Hu ◽  
Jingwen Yu ◽  
Ning Yu ◽  
Qingqiang Wu

With the application of deep convolutional neural networks, the performance of computer vision tasks has been raised to a new level. Building deeper and more complex networks allows face recognition algorithms to achieve higher accuracy; however, the large computation and storage costs of such networks limit their wider adoption. To address this problem, we study a unified and efficient neural-network face recognition algorithm for the single-camera setting. We propose that the complete face recognition process consists of four tasks: face detection, liveness detection, keypoint detection, and face verification. Combining the key algorithms of these four tasks, we propose a unified network model based on a depthwise separable convolutional structure, UFaceNet. The model uses multi-source data for multitask joint training and uses the keypoint detection results to aid the learning of the other tasks. It further introduces an attention mechanism through feature-level cropping and alignment to ensure accuracy, and shares convolutional layers among tasks to reduce the amount of computation and accelerate the network. The multitask learning objective implicitly increases the amount of training data and the diversity of data distributions, making it easier to learn features that generalize. The experimental results show that the UFaceNet model outperforms other models in terms of computation and parameter count, offers higher efficiency, and has promising areas of application.
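As an illustration of the basic building block named above, here is a minimal PyTorch sketch of a depthwise separable convolution, the kind of unit a shared multitask trunk like UFaceNet could be built from; the layer sizes, class name, and usage are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: depthwise 3x3 followed by a
    pointwise 1x1 (illustrative sketch of the building block)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution that mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A shared trunk of such blocks could feed task-specific heads
# (detection, liveness, keypoints, verification) in a multitask setup.
block = DepthwiseSeparableConv(32, 64, stride=2)
y = block(torch.randn(1, 32, 112, 112))   # -> (1, 64, 56, 56)
```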


Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 191
Author(s):  
Wenting Liu ◽  
Li Zhou ◽  
Jie Chen

Face recognition algorithms based on deep learning methods have become increasingly popular. Most of these are based on highly precise but complex convolutional neural networks (CNNs), which require significant computing resources and storage, and are difficult to deploy on mobile devices or embedded terminals. In this paper, we propose several methods to improve face recognition algorithms based on a lightweight CNN, which is further optimized in terms of network architecture and training pattern on the basis of MobileFaceNet. Regarding the network architecture, we introduce the Squeeze-and-Excitation (SE) block and propose three improved structures based on a channel attention mechanism (the depthwise SE module, the depthwise separable SE module, and the linear SE module), which are able to learn the correlations between channels and assign them different weights. In addition, a novel training method for the face recognition task combined with an additive angular margin loss function is proposed that performs compression and knowledge transfer of the deep face recognition network. Finally, we obtain high-precision and lightweight face recognition models with fewer parameters and less computation, making them more suitable for practical applications. Through extensive experiments and analysis, we demonstrate the effectiveness of the proposed methods.
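The following is a minimal PyTorch sketch of a standard Squeeze-and-Excitation block, the channel-attention mechanism the three proposed modules build on; the reduction ratio and where such a block would sit inside MobileFaceNet are assumptions, and the depthwise, depthwise separable, and linear SE variants themselves are not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation block (illustrative sketch)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global channel context
        self.fc = nn.Sequential(                   # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # re-scale each channel

se = SEBlock(64)
out = se(torch.randn(2, 64, 14, 14))
```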


Author(s):  
Reshma P ◽  
Muneer VK ◽  
Muhammed Ilyas P

Face recognition is a challenging task for researchers. It is very useful for personal verification and identification, yet it is difficult to implement because of the many different conditions under which a human face may appear. This system uses a face recognition approach for computerized attendance marking of students or employees in a room environment, without intervention from the lecturer or the employee. The system is efficient and requires much less maintenance than traditional methods. Among existing methods, PCA is the most efficient technique. In this project a holistic approach is adopted. The system is implemented in MATLAB and provides high accuracy.
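As a rough illustration of the holistic, PCA-based approach mentioned above, the sketch below fits eigenfaces and matches a probe face by nearest neighbour in the PCA subspace; it is written in Python for readability (the described system is implemented in MATLAB), and the function names, component count, and data handling are assumptions.

```python
import numpy as np

def fit_eigenfaces(images, k=50):
    """Minimal PCA ("eigenfaces") fit.

    images: (N, H*W) array of flattened, grayscale face images.
    Returns the mean face and the top-k principal components.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data gives the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                      # eigenfaces: (k, H*W)

def project(image, mean, eigenfaces):
    """Project a flattened face into the low-dimensional PCA subspace."""
    return eigenfaces @ (image - mean)

def identify(probe, gallery_codes, labels, mean, eigenfaces):
    """Nearest-neighbour matching in PCA space (attendance lookup)."""
    code = project(probe, mean, eigenfaces)
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return labels[int(np.argmin(dists))]
```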


2017 ◽  
Vol 126 (2-4) ◽  
pp. 272-291 ◽  
Author(s):  
Jun-Cheng Chen ◽  
Rajeev Ranjan ◽  
Swami Sankaranarayanan ◽  
Amit Kumar ◽  
Ching-Hui Chen ◽  
...  

2014 ◽  
Vol 971-973 ◽  
pp. 1710-1713
Author(s):  
Wen Huan Wu ◽  
Ying Jun Zhao ◽  
Yong Fei Che

Face detection is the key step in an automatic face recognition system. This paper introduces a face detection algorithm based on a cascade of AdaBoost classifiers and describes how to configure OpenCV in MCVS. Face detection was implemented using OpenCV, and a detailed analysis of the detection results is presented. Through experiments, we found that the method used in this article achieves a high accuracy rate and good real-time performance.
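A minimal example of the described detection step, using OpenCV's bundled Haar cascade (the classic Viola-Jones / AdaBoost detector); the file names and detection parameters are illustrative assumptions.

```python
import cv2

# Load OpenCV's pre-trained frontal-face cascade (boosted Haar classifiers).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Multi-scale sliding-window detection with the boosted cascade.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))

# Draw the detected face boxes and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```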


2018 ◽  
Vol 5 (1) ◽  
Author(s):  
Bilal Turan ◽  
Taisuke Masuda ◽  
Anas Mohd Noor ◽  
Koji Horio ◽  
Toshiki I. Saito ◽  
...  

Author(s):  
Ridha Ilyas Bendjillali ◽  
Mohammed Beladgham ◽  
Khaled Merit ◽  
Abdelmalik Taleb-Ahmed

In the last decade, facial recognition techniques have been considered among the most important fields of research in biometric technology. In this research paper, we present a Face Recognition (FR) system divided into three steps: the Viola-Jones face detection algorithm, facial image enhancement using a Modified Contrast Limited Adaptive Histogram Equalization algorithm (M-CLAHE), and feature learning for classification. For feature learning followed by classification, we used the VGG16, ResNet50, and Inception-v3 Convolutional Neural Network (CNN) architectures in the proposed system. Our experimental work was performed on the Extended Yale B and CMU PIE face databases. Finally, comparison with other methods on both databases shows the robustness and effectiveness of the proposed approach, with the Inception-v3 architecture achieving rates of 99.44% and 99.89%, respectively.
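The sketch below shows the enhancement step of such a pipeline using OpenCV's standard CLAHE; it is a stand-in for the paper's modified M-CLAHE, whose modifications are not reproduced here, and the clip limit, tile size, and file names are assumptions.

```python
import cv2

def enhance_face(gray_face, clip_limit=2.0, tile_grid=(8, 8)):
    """Standard CLAHE contrast enhancement on a grayscale face crop
    (illustrative stand-in for the paper's M-CLAHE step)."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_face)

# Typical pipeline order from the abstract: detect the face (Viola-Jones),
# enhance the crop, then feed it to a CNN classifier such as Inception-v3.
face = cv2.imread("face_crop.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
enhanced = enhance_face(face)
```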


Author(s):  
O.N. Korsun ◽  
V.N. Yurko

We analysed two approaches to estimating the state of a human operator from video imaging of the face. Both approaches use deep convolutional neural networks: (1) automated emotion recognition; (2) analysis of blinking characteristics. The study involved assessing changes in the functional state of a human operator performing a manual landing in a flight simulator. During this process, flight parameters were recorded and the operator's face was filmed. We then used our custom software to perform automated recognition of emotions (blinking), synchronising the emotions (blinks) recognised with the recorded flight parameters. As a result, we detected persistent patterns linking the operator fatigue level to the number of emotions recognised by the neural network. The type of emotion depends on the unique psychological characteristics of the operator. Our experiments allow these links to be traced easily when analysing the emotions of "Sadness", "Fear" and "Anger". The study also revealed a correlation between blinking properties and piloting accuracy: higher piloting accuracy corresponded to more blinks recorded, which may be explained by a stable psycho-physiological state leading to confident piloting.


Author(s):  
Fang Chu ◽  
Lipo Wang

Accurate diagnosis of cancers is of great importance for doctors to choose a proper treatment. It also plays a key role in the search for the pathology of cancers and in drug discovery. Recently, this problem has attracted great attention in the context of microarray technology. Here, we apply radial basis function (RBF) neural networks to this pattern recognition problem. Our experimental results on several well-known microarray data sets indicate that our method can obtain very high accuracy with a small number of genes.
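Below is a minimal numpy sketch of an RBF-network classifier in the spirit of the approach described: Gaussian basis functions around a set of centers followed by a linear read-out fitted by least squares. The center selection, kernel width, and one-hot least-squares training are simplifying assumptions, not the authors' exact method.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian radial-basis activations of samples X w.r.t. the centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rbf_classifier(X, y, n_centers=20, gamma=1.0, seed=0):
    """Fit a simple RBF network: random training samples as centers
    and a least-squares linear output layer (assumes len(X) >= n_centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = rbf_features(X, centers, gamma)
    Y = np.eye(int(y.max()) + 1)[y]            # one-hot class targets
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return centers, W

def predict(X, centers, W, gamma=1.0):
    """Classify samples by the largest linear read-out over RBF features."""
    return rbf_features(X, centers, gamma).dot(W).argmax(axis=1)
```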

