CosFace: Large Margin Cosine Loss for Deep Face Recognition

Author(s):  
Hao Wang ◽  
Yitong Wang ◽  
Zheng Zhou ◽  
Xing Ji ◽  
Dihong Gong ◽  
...  
Author(s):  
K.L. Li ◽  
Y.R. Zhao ◽  
Y.B. Li ◽  
Y.X. Zhang ◽  
X.F. Geng

2011 ◽  
Vol 21 (2) ◽  
pp. 269-279 ◽  
Author(s):  
Nanhai Yang ◽  
Ran He ◽  
Wei-Shi Zheng ◽  
Xiukun Wang

2021 ◽  
Vol 11 (16) ◽  
pp. 7310
Author(s):  
Hongxia Deng ◽  
Zijian Feng ◽  
Guanyu Qian ◽  
Xindong Lv ◽  
Haifang Li ◽  
...  

The world today is being hit by COVID-19. Unlike fingerprint scanners and ID cards, facial recognition technology can effectively prevent the spread of viruses in public places because it requires no contact with a shared sensor. However, people must also wear masks when entering public places, and masks greatly reduce the accuracy of facial recognition. Accurately recognizing faces while people wear masks is therefore a great challenge. To address the low facial recognition accuracy for mask wearers during the COVID-19 epidemic, we propose a masked-face recognition algorithm based on a large margin cosine loss (MFCosface). Because masked-face data are insufficient for training, we designed a masked-face image generation algorithm based on the detection of key facial features. The face is detected and aligned with a multi-task cascaded convolutional network; we then detect the key facial features and, according to their positions, select a mask template to cover the face, generating the corresponding masked-face image. Analysis of the masked-face images showed that triplet loss is not suitable for our datasets: the results of online triplet selection contain few mask variations, making it difficult for the model to learn the relationship between mask occlusion and feature mapping. We instead train with a large margin cosine loss, which maps all feature samples into a feature space with smaller intra-class distances and larger inter-class distances. To make the model pay more attention to the areas not covered by the mask, we designed an Att-inception module that combines the Inception-ResNet module with the convolutional block attention module; it increases the weight of the unoccluded areas in the feature map, enlarging their contribution to identification. Experiments on several masked-face datasets show that our algorithm greatly improves the accuracy of masked-face recognition and can accurately recognize masked subjects.
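The large margin cosine loss described above (introduced by CosFace) normalizes both embeddings and class weights so the logits become cosine similarities, then subtracts a fixed margin from the target-class cosine before softmax cross-entropy. A minimal numpy sketch of that computation follows; the function name, tensor shapes, and the specific values s=30, m=0.35 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lmcl_loss(features, weights, labels, s=30.0, m=0.35):
    """Sketch of the large margin cosine loss (LMCL).

    features: (N, d) embeddings; weights: (d, C) class weight vectors.
    Both are L2-normalised so the logits are cosine similarities; the
    margin m is then subtracted from the true-class cosine only, and
    everything is rescaled by s before softmax cross-entropy.
    Shapes, s, and m here are illustrative choices, not the paper's code.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                   # (N, C) cosine similarities
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m    # margin only on true class
    logits = s * (cos - margin)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

Because the margin is subtracted only from the true-class cosine, the same inputs always yield a higher loss with m > 0 than with m = 0, which is what forces the learned features toward smaller intra-class and larger inter-class distances.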


2019 ◽  
Vol 10 (1) ◽  
pp. 60 ◽  
Author(s):  
Shengwei Zhou ◽  
Caikou Chen ◽  
Guojiang Han ◽  
Xielian Hou

Learning large-margin face features, whose intra-class variance is small and whose inter-class diversity is large, is one of the important challenges in feature learning when applying Deep Convolutional Neural Networks (DCNNs) to face recognition. Recently, an appealing line of research incorporates an angular margin into the original softmax loss to obtain discriminative deep features during DCNN training. In this paper we propose a novel loss function, termed the double additive margin softmax loss (DAM-Softmax). The proposed loss has a clearer geometric interpretation and yields highly discriminative features for face recognition. Extensive experimental evaluations against several recent state-of-the-art softmax loss functions are conducted on the relevant face recognition benchmarks: CASIA-WebFace, LFW, CALFW, CPLFW, and CFP-FP. We show that the proposed loss function consistently outperforms the state of the art.
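DAM-Softmax belongs to the additive-margin softmax family, in which a margin subtracted from the target-class cosine makes the training objective stricter. The sketch below shows the effect of that margin on the target-class probability for the single-additive-margin baseline that DAM-Softmax extends; the exact double-margin formulation is in the paper and is not reproduced here, and the function name and the values s=30 and the example cosines are illustrative assumptions.

```python
import numpy as np

def margin_softmax_prob(cos_target, cos_others, s=30.0, m=0.35):
    """Target-class probability under an additive cosine margin.

    Sketch of the additive-margin softmax family that DAM-Softmax builds
    on: the margin m is subtracted from the true-class cosine before
    softmax, so a larger m demands a larger true-class cosine to reach
    the same probability. DAM-Softmax's double margin is not shown.
    """
    logits = np.concatenate(([s * (cos_target - m)],
                             s * np.asarray(cos_others, dtype=float)))
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return p[0]                                 # probability of true class
```

Holding the cosines fixed and growing the margin strictly lowers the true-class probability, so the optimizer must push the true-class cosine higher to compensate; this is the mechanism behind the discriminative features the abstract describes.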

