Gradient feature matching for expression invariant face recognition using single reference image

Author(s):  
Ann Theja Alex ◽  
Vijayan K. Asari ◽  
Alex Mathew
2021 ◽  
pp. 387-392
Author(s):  
Ganesh Gopalrao Patil ◽  
Rohitash Kumar Banyal

Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 146 ◽  
Author(s):  
Vittorio Cuculo ◽  
Alessandro D’Amelio ◽  
Giuliano Grossi ◽  
Raffaella Lanzarotti ◽  
Jianyi Lin

Face recognition using a single reference image per subject is challenging, especially when the gallery of subjects is large, and the problem becomes considerably harder when the images are acquired in unconstrained conditions. In this paper we address the challenging Single Sample Per Person (SSPP) problem on large datasets of images acquired in the wild, which may therefore exhibit variations in illumination, pose, and facial expression, as well as partial occlusions and low resolution. The proposed technique alternates a sparse dictionary learning step based on the method of optimal directions (MOD) with the iterative ℓ0-norm minimization algorithm k-LiMapS. It operates on robust deep-learned features, with the image variability extended by standard augmentation techniques. Experiments show the effectiveness of our method against the difficulties introduced above: first, we report extensive experiments on the unconstrained LFW dataset with large galleries of up to 1680 subjects; second, we present experiments on very low-resolution test images down to 8 × 8 pixels; third, tests on the AR dataset are analyzed against specific disguises such as partial occlusions, facial expressions, and illumination problems. In all three scenarios our method outperforms state-of-the-art approaches adopting similar configurations.
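The alternation the abstract describes can be sketched generically: a sparse coding step under an ℓ0 constraint (here a plain orthogonal matching pursuit stands in for k-LiMapS, whose details are not given in the abstract) alternating with the closed-form method-of-optimal-directions dictionary update D = X Aᵀ(A Aᵀ)⁻¹. This is a minimal illustration of the two steps, not the authors' implementation.

```python
import numpy as np

def omp(D, x, k):
    """Greedy l0-constrained sparse coding (a stand-in for k-LiMapS):
    pick k atoms one at a time, refitting coefficients by least squares."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        cols = D[:, support]
        coef, *_ = np.linalg.lstsq(cols, x, rcond=None)
        residual = x - cols @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

def mod_update(X, A):
    """Method of optimal directions: least-squares dictionary update
    D = X A^T (A A^T)^-1, with columns renormalized to unit length."""
    D = X @ A.T @ np.linalg.pinv(A @ A.T)
    return D / np.linalg.norm(D, axis=0, keepdims=True)
```

In a full pipeline these two calls would alternate until the reconstruction error stabilizes; the abstract's method applies them to deep-learned face features rather than raw pixels.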


2011 ◽  
Vol 121-126 ◽  
pp. 609-616
Author(s):  
Dao Qing Sheng ◽  
Guo Yue Chen ◽  
Kazuki Saruta ◽  
Yuki Terata

In this paper, an approach based on local curvature feature matching for 3D face recognition is proposed. The Karhunen–Loève (K-L) transform is employed to adjust the coordinate system and coarsely align the 3D point cloud. The 3D facial surface is then reconstructed via B-spline approximation. By analyzing the curvature features of the fitted surface, locally rigid facial patches are extracted, and feature vectors built from these patches are used for the final recognition. Experimental results demonstrate the high performance of the presented method and show that it is effective for 3D face recognition.
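The K-L transform used here for coarse alignment amounts to centering the point cloud and rotating its principal axes (eigenvectors of the covariance matrix) onto the coordinate axes. A minimal sketch of that pose-normalization step, under the assumption that the cloud is an N × 3 array:

```python
import numpy as np

def kl_align(points):
    """Coarse pose normalization via the K-L transform: center the
    cloud, then rotate its principal axes onto x/y/z (largest
    variance first), keeping a proper rotation (det = +1)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    R = eigvecs[:, ::-1]                    # reorder: largest variance first
    if np.linalg.det(R) < 0:                # flip one axis to avoid a reflection
        R[:, -1] *= -1
    return centered @ R
```

Note the sign of each eigenvector is arbitrary, so two scans of the same face may come out flipped about an axis; in practice this ambiguity is resolved by a subsequent fine alignment step, which the paper performs via surface fitting and patch matching.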


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Jin Yang ◽  
Yuxuan Zhao ◽  
Shihao Yang ◽  
Xinxin Kang ◽  
Xinyan Cao ◽  
...  

In face recognition systems, highly robust facial feature representations and a well-performing classification algorithm determine recognition quality under unrestricted conditions. To explore the anti-interference performance of a convolutional neural network (CNN) reconstructed within a deep learning (DL) framework for face image feature extraction (FE) and recognition, we first combine the inception structure of the GoogleNet network with the residual connections of the ResNet architecture to construct a new deep reconstruction network, with stochastic gradient descent (SGD) and the triplet loss function as the model optimizer and classifier, respectively, and apply it to face recognition on the Labeled Faces in the Wild (LFW) database. Then, portrait pyramid segmentation and local feature point segmentation are applied to extract face image features, and face feature points are matched using the Euclidean distance and the joint Bayesian method. Finally, Matlab is used to simulate the proposed algorithm and compare it with other algorithms.

The results show that the proposed algorithm achieves its best face recognition performance when the learning rate is 0.0004, the attenuation coefficient is 0.0001, the training method is SGD, and the dropout is 0.1 (accuracy: 99.03%, loss: 0.0047, training time: 352 s, overfitting rate: 1.006), and it attains the largest mean average precision among the compared CNN algorithms. Its face feature matching accuracy is 84.72%, which is 6.94%, 2.5%, and 1.11% higher than the LeNet-5, VGG-16, and VGG-19 algorithms, respectively, but lower than the GoogleNet, AlexNet, and ResNet algorithms. At the same time, the proposed algorithm matches faster (206.44 s) and at a higher correct matching rate (88.75%) than the joint Bayesian method, indicating that the proposed deep reconstruction network can be used for face image recognition, FE, and matching, and has strong anti-interference capability.
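Two of the building blocks named in this abstract, the triplet loss used as the training objective and Euclidean-distance matching of face features, have standard formulations that can be sketched briefly. The batching and margin value below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: hinge on the gap between the anchor-positive and
    anchor-negative squared Euclidean distances, averaged over a batch."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

def match(query, gallery):
    """Nearest-neighbour identification: return the index of the gallery
    embedding closest to the query in Euclidean distance, and the distance."""
    d = np.linalg.norm(gallery - query, axis=1)
    return int(np.argmin(d)), float(d.min())
```

Training pushes same-identity embeddings together and different-identity embeddings apart until the loss reaches zero; at test time, `match` then reduces recognition to a nearest-neighbour search over the gallery.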


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Sajid Khan ◽  
Dong-Ho Lee ◽  
Asif Khan ◽  
Ahmad Waqas ◽  
Abdul Rehman Gilal ◽  
...  

Fingerprint registration and verification is an active area of research in image processing. Fingerprints are usually obtained from sensors; however, there is recent interest in using images of fingers captured with digital cameras rather than scanners. An unaddressed issue in processing fingerprints extracted from digital images is the angle of the finger during image capture: to match fingerprints accurately, the angles of the matching features must be similar. This paper proposes a rotation- and scale-invariant decision-making method for the intelligent registration and recognition of fingerprints. A digital image of a finger is taken as input and compared with a reference image for derotation. Derotation is performed by applying binary segmentation to both images, followed by speeded-up robust feature (SURF) extraction and feature matching. Potential inliers are selected from the matched features with an M-estimator. The matched inlier points are used to form a homography matrix, the difference between the finger's rotation angles in the input and reference images is calculated, and finally derotation is performed. Input fingerprint features are then extracted and compared or stored, depending on the decision support required by the situation.
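The angle-recovery step can be illustrated in isolation. The paper estimates a full homography from SURF inliers; the sketch below assumes the inlier correspondences are already given and fits only a planar rotation by the least-squares (Kabsch/Procrustes) method, which is a simplification of the general homography case:

```python
import numpy as np

def rotation_angle(src, dst):
    """Least-squares planar rotation between two sets of matched inlier
    points (rows of src map to rows of dst), in degrees.
    Uses the Kabsch/Procrustes SVD solution on centered coordinates."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))
```

Once the angle is known, derotating the input image by its negative brings the two fingerprints into the same orientation before feature comparison.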


2019 ◽  
Vol 9 (17) ◽  
pp. 3598 ◽  
Author(s):  
Erhu Zhang ◽  
Yajun Chen ◽  
Min Gao ◽  
Jinghong Duan ◽  
Cuining Jing

In the printing industry, defect detection is crucial for ensuring the quality of printed matter, yet little research has been conducted on web offset printing. In this paper, we propose an automatic defect detection method for web offset printing, which consists of determining the first row of the captured images, image registration, and defect detection. Determining the first row of the captured images is a problem particular to web offset printing that has not been studied before. To solve it, a fast computational algorithm based on image projection is given, which converts the 2D image search into 1D feature matching. For image registration, a shape context descriptor is constructed from the shape's concave-convex features, which effectively reduces the feature dimension compared with traditional image registration methods. To tolerate position differences and brightness deviations between the detected image and the reference image, a modified image subtraction is proposed for defect detection. The experimental results demonstrate the effectiveness of the proposed method.
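The projection idea, collapsing each row of the image to a single value so that locating the first row becomes a 1D signal-matching problem, can be sketched as follows. The sliding sum-of-squared-differences score is an illustrative choice; the paper does not specify its matching criterion:

```python
import numpy as np

def row_projection(img):
    """Collapse a 2D image (H x W array) to a 1D signal by summing
    each row, turning a 2D search into a 1D one."""
    return img.sum(axis=1).astype(float)

def find_first_row(captured, reference_proj):
    """Locate the reference pattern's 1D projection inside the captured
    image's projection by sliding sum-of-squared-differences; return the
    row offset with the best (lowest) score."""
    proj = row_projection(captured)
    n = len(reference_proj)
    scores = [np.sum((proj[i:i + n] - reference_proj) ** 2)
              for i in range(len(proj) - n + 1)]
    return int(np.argmin(scores))
```

Because each candidate offset now costs only a 1D comparison of length H rather than a 2D comparison of H × W pixels, the search is fast enough to run inline on the press.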

