Kernel class specific centralized dictionary learning for face recognition

Author(s):  
Zhiming Gao ◽  
Qian Zhang ◽  
Ru Li ◽  
Bao-Di Liu ◽  
Yanjiang Wang

Author(s):  
Dongmei Wei ◽  
Tao Chen ◽  
Shuwei Li ◽  
Dongmei Jiang ◽  
Yuefeng Zhao ◽  
...  

2021 ◽  
Vol 25 (5) ◽  
pp. 1273-1290

Author(s):  
Shuangxi Wang ◽  
Hongwei Ge ◽  
Jinlong Yang ◽  
Shuzhi Su

Learning an over-complete dictionary from a limited number of face samples remains an open question, and the inherent attributes of the samples are often underutilized. In addition, recognition performance may be adversely affected by noise (and outliers), and a strict binary-label-based linear classifier is not well suited to face recognition. To address these problems, we propose a virtual-sample-based robust block-diagonal dictionary learning method for face recognition. In the proposed model, the original samples and virtual samples are combined to alleviate the small-sample-size problem, and both a structure constraint and a low-rank constraint are exploited to preserve the intrinsic attributes of the samples. In addition, the fidelity term effectively reduces the negative effects of noise (and outliers), and ε-dragging is utilized to improve the performance of the linear classifier. Finally, extensive experiments are conducted on benchmark face datasets in comparison with many state-of-the-art methods, and the experimental results demonstrate the efficacy of the proposed method.
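As a rough illustration of the ε-dragging idea mentioned in this abstract (not the authors' full model), the sketch below relaxes strict binary labels into margin-enlarging regression targets for a linear classifier. The shapes, the least-squares classifier, and all variable names are assumptions made for illustration only.

```python
import numpy as np

def epsilon_dragging_targets(Y, XW):
    """Relax strict binary targets Y into Y + B * M, where B holds the
    dragging directions (+1 for the true class, -1 otherwise) and the
    nonnegative dragging matrix M enlarges the margins between classes.
    For fixed classifier outputs XW, the optimal M has the closed form
    M = max(B * (XW - Y), 0)."""
    B = np.where(Y > 0, 1.0, -1.0)        # dragging directions
    M = np.maximum(B * (XW - Y), 0.0)     # nonnegative dragging amounts
    return Y + B * M                      # relaxed regression targets

# Usage sketch with made-up shapes: n samples, d features, c classes.
rng = np.random.default_rng(0)
n, d, c = 200, 100, 10
X = rng.standard_normal((n, d))
labels = rng.integers(0, c, size=n)
Y = np.eye(c)[labels]                      # one-hot label matrix (n x c)
W = np.linalg.lstsq(X, Y, rcond=None)[0]   # initial linear classifier (d x c)
T = epsilon_dragging_targets(Y, X @ W)     # relaxed targets for the next update of W
```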


2014 ◽  
Vol 47 (4) ◽  
pp. 1559-1572 ◽  
Author(s):  
Weihua Ou ◽  
Xinge You ◽  
Dacheng Tao ◽  
Pengyue Zhang ◽  
Yuanyan Tang ◽  
...  

Author(s):  
Guojun Lin ◽  
Meng Yang ◽  
Linlin Shen ◽  
Mingzhong Yang ◽  
Mei Xie

For face recognition, conventional dictionary learning (DL) methods have some disadvantages. First, face images of the same person vary with facial expression, pose, illumination, and disguise, so it is hard to obtain a robust dictionary for face recognition. Second, they do not fully model important components (e.g., particularity and disturbance), which limits their performance. In this paper, we propose a novel robust and discriminative DL (RDDL) model. The proposed model uses the sample diversity of the same face image to learn a robust dictionary, which includes class-specific dictionary atoms and disturbance dictionary atoms. These atoms can well represent the data from different classes. Discriminative regularizations on the dictionary and the representation coefficients are used to exploit discriminative information, which effectively improves the classification capability of the dictionary. The proposed RDDL is extensively evaluated on benchmark face image databases, and it shows superior performance to many state-of-the-art dictionary learning methods for face recognition.
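A minimal sketch of how class-specific atoms and shared disturbance atoms can be used at test time, assuming the sub-dictionaries have already been learned; the ridge-regularized coding and the residual-based decision rule here are illustrative choices, not the RDDL solver itself.

```python
import numpy as np

def classify_by_residual(x, D_classes, D_dist, lam=0.01):
    """Classify feature vector x by its reconstruction residual, coding it
    with each class-specific sub-dictionary augmented by the shared
    disturbance dictionary (which absorbs expression/illumination/disguise
    variation). D_classes is a list of (d x k_c) arrays, D_dist is (d x k_0)."""
    residuals = []
    for D_c in D_classes:
        D = np.hstack([D_c, D_dist])                 # class atoms + disturbance atoms
        # ridge-regularized coding: a = argmin ||x - D a||^2 + lam ||a||^2
        a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
        residuals.append(np.linalg.norm(x - D @ a))  # residual of the full reconstruction
    return int(np.argmin(residuals))
```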


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 146 ◽  
Author(s):  
Vittorio Cuculo ◽  
Alessandro D’Amelio ◽  
Giuliano Grossi ◽  
Raffaella Lanzarotti ◽  
Jianyi Lin

Face recognition using a single reference image per subject is challenging, above all when referring to a large gallery of subjects. Furthermore, the problem becomes considerably harder when the images are acquired in unconstrained conditions. In this paper we address the challenging Single Sample Per Person (SSPP) problem on large datasets of images acquired in the wild, which may therefore exhibit illumination, pose, facial expression, partial occlusion, and low-resolution hurdles. The proposed technique alternates a sparse dictionary learning step based on the Method of Optimal Directions with the iterative ℓ0-norm minimization algorithm k-LiMapS. It works on robust deep-learned features, with the image variability extended by standard augmentation techniques. Experiments show the effectiveness of our method against the difficulties introduced above: first, we report extensive experiments on the unconstrained LFW dataset with large galleries of up to 1680 subjects; second, we present experiments on very low-resolution test images as small as 8 × 8 pixels; third, tests on the AR dataset are analyzed against specific disguises such as partial occlusions, facial expressions, and illumination problems. In all three scenarios our method outperforms state-of-the-art approaches adopting similar configurations.
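A minimal sketch of the alternation this abstract describes, with orthogonal matching pursuit standing in for k-LiMapS as the ℓ0-constrained coder and the deep-learned features assumed to be precomputed; the atom count, sparsity level, and initialization are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def learn_dictionary(X, n_atoms=128, sparsity=5, n_iter=20, seed=0):
    """Alternate an l0-constrained sparse coding step (OMP standing in for
    k-LiMapS) with the MOD dictionary update D = X A^T (A A^T)^-1.
    X is a (d x n) matrix of feature vectors with n >= n_atoms."""
    rng = np.random.default_rng(seed)
    D = X[:, rng.choice(X.shape[1], n_atoms, replace=False)].copy()  # init from random samples
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        A = orthogonal_mp(D, X, n_nonzero_coefs=sparsity)  # sparse codes, shape (n_atoms x n)
        D = X @ A.T @ np.linalg.pinv(A @ A.T)              # MOD dictionary update
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D
```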


2019 ◽  
Vol 95 ◽  
pp. 102573

Author(s):  
Heyou Chang ◽  
Fanlong Zhang ◽  
Guangwei Gao ◽  
Hao Zheng ◽  
Yang Chen

2016 ◽  
Vol 59 ◽  
pp. 14-25 ◽  
Author(s):  
Xiao-Yuan Jing ◽  
Fei Wu ◽  
Xiaoke Zhu ◽  
Xiwei Dong ◽  
Fei Ma ◽  
...  
