Weighted Module Linear Regression Classifications for Partially-Occluded Face Recognition

2021 ◽  
Author(s):  
Wei-Jong Yang ◽  
Cheng-Yu Lo ◽  
Pau-Choo Chung ◽  
Jar Ferr Yang

Face images with partially-occluded areas create serious problems for face recognition systems. Linear regression classification (LRC) is a simple and powerful approach for face recognition, but it too cannot perform well under occlusion. By segmenting the face image into small subfaces, called modules, the LRC system can achieve some improvement by selecting the best non-occluded module for face classification. However, recognition performance then deteriorates because only a single module, a small portion of the face image, is used. Performance can be further enhanced if the occluded modules are properly identified and as many non-occluded modules as possible are utilized. In this chapter, we first analyze the texture histogram (TH) of each module and then use the TH difference to measure its occlusion tendency. Based on the TH difference, we suggest a general concept of weighted module face recognition to solve the occlusion problem, and propose the weighted module linear regression classification method, called WMLRC-TH, for partially-occluded face recognition. To evaluate its performance, the proposed WMLRC-TH method is tested on the AR and FRGC2.0 face databases with several synthesized occlusions and compared with well-known face recognition methods and other robust face recognition methods. Experimental results show that the proposed method achieves the best performance for recognizing occluded faces. Due to its simplicity in both training and testing phases, a face recognition system based on the WMLRC-TH method is realized on Android phones for fast recognition of occluded faces.
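
As a rough illustration of the idea, the sketch below (Python with NumPy) computes per-module LRC residuals and combines them with occlusion weights; the vectorized-module data layout and the exponential mapping from TH difference to weight are assumptions for illustration, since the abstract does not give these details.

```python
import numpy as np

def lrc_residual(y, X):
    """Reconstruction error of a test module y (vector) against the span of a
    class-specific training matrix X (columns are training samples)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.linalg.norm(y - X @ beta)

def th_weight(test_hist, ref_hist):
    """Occlusion weight from the texture-histogram difference; the exact
    mapping is not given in the abstract, so an exponential decay is assumed."""
    return np.exp(-np.sum(np.abs(test_hist - ref_hist)))

def weighted_module_lrc(test_modules, train_modules, weights):
    """Classify by the weighted sum of per-module LRC residuals.
    test_modules : list of 1-D arrays, one per module
    train_modules: dict class_id -> list of per-module training matrices
    weights      : per-module weights (higher = more likely non-occluded)"""
    scores = {cls: sum(w * lrc_residual(y, X)
                       for w, y, X in zip(weights, test_modules, mods))
              for cls, mods in train_modules.items()}
    return min(scores, key=scores.get)
```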

Author(s):  
Almabrok Essa ◽  
Vijayan K. Asari

This paper presents an illumination-invariant face recognition system that uses directional features and modular histograms. The proposed Histogram of Oriented Directional Features (HODF) produces multi-region histograms for each face image and then concatenates these histograms to form the final feature vector. This feature vector is used to recognize the face image with the help of a k-nearest neighbors (KNN) classifier. The edge responses and the relationships among pixels are very important and play the main role in improving face recognition accuracy. Therefore, this work examines the effect on recognition accuracy of different directional masks for detecting edge responses, such as Prewitt kernels, Kirsch masks, Sobel kernels, and Gaussian derivative masks. The performance of the proposed HODF algorithm is evaluated on several publicly available databases, and promising recognition rates are observed.
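
A minimal sketch of the multi-region directional-histogram idea follows, using Sobel kernels as one of the directional masks named above; the grid size, bin count, and magnitude weighting are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

# Sobel kernels shown here; Prewitt, Kirsch, or Gaussian-derivative masks
# can be swapped in the same way.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def hodf_features(img, grid=(4, 4), bins=8):
    """Multi-region histogram of oriented directional responses (a sketch)."""
    gx = ndimage.convolve(img.astype(float), SOBEL_X)
    gy = ndimage.convolve(img.astype(float), SOBEL_Y)
    angle = np.arctan2(gy, gx)          # direction of the edge response
    mag = np.hypot(gx, gy)              # strength, used as histogram weight
    h, w = img.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            rows = slice(i * h // grid[0], (i + 1) * h // grid[0])
            cols = slice(j * w // grid[1], (j + 1) * w // grid[1])
            hist, _ = np.histogram(angle[rows, cols], bins=bins,
                                   range=(-np.pi, np.pi),
                                   weights=mag[rows, cols], density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Typical usage (features stacked row-wise):
# knn = KNeighborsClassifier(n_neighbors=1).fit(train_feats, train_labels)
# pred = knn.predict([hodf_features(test_img)])
```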


Nowadays, one of the critical factors that affects the recognition performance of any face recognition system is partial occlusion. This paper addresses face recognition in the presence of sunglasses and scarf occlusion. The proposed face recognition approach detects the face region that is not occluded and then uses this region for recognition. Adaptive Fuzzy C-Means Clustering is used to segment the occluded and non-occluded parts, and the Minimum Cost Sub-Block Matching Distance (MCSBMD) is used for recognition. The input face image is divided into a number of sub-blocks, each block is checked for the presence of occlusion, and MWLBP features are extracted only from the non-occluded blocks and used for classification. Experimental results show that our method gives promising results compared with other conventional techniques.
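
The block-wise pipeline can be sketched as below, with standard uniform LBP standing in for the MWLBP features and a simple mean of per-block histogram distances standing in for the exact MCSBMD formulation, neither of which is detailed in the abstract; the occlusion flags are assumed to come from the fuzzy C-means segmentation step.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_histograms(img, grid=(6, 6), P=8, R=1):
    """Per-block LBP histograms; standard uniform LBP is used here as a
    stand-in for the MWLBP features named in the abstract."""
    lbp = local_binary_pattern(img, P, R, method='uniform')
    h, w = img.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            blk = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                      j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(blk, bins=P + 2, range=(0, P + 2),
                                   density=True)
            hists.append(hist)
    return hists

def mcsbmd(test_hists, gallery_hists, occluded):
    """Matching cost computed over non-occluded sub-blocks only.
    `occluded` is one boolean flag per block from the clustering step."""
    costs = [np.sum(np.abs(t - g))
             for t, g, occ in zip(test_hists, gallery_hists, occluded)
             if not occ]
    return np.mean(costs) if costs else np.inf
```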


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, acquired face data is often seriously distorted: many collected face images are blurred or even have missing regions. Traditional image inpainting was based on structure, while currently popular image inpainting methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, and the edge fuzzy inpainting achieves a better visual match, dramatically boosting face recognition performance.
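
Since the abstract does not describe the network itself, the following PyTorch skeleton shows only the generic shape of an inpainting GAN (a mask-conditioned encoder-decoder generator, a patch discriminator, and an adversarial plus masked reconstruction loss); the paper's 3D architecture and edge-guided components are not reproduced here.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder generator conditioned on the RGB image and a binary mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, 2, 1), nn.ReLU(),        # RGB + mask -> down
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))

class Discriminator(nn.Module):
    """Small patch discriminator judging local realism of the filled image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 0))

    def forward(self, img):
        return self.net(img)

def generator_loss(G, D, masked_img, mask, target, l1_weight=100.0):
    """Adversarial loss plus L1 reconstruction loss on the missing region."""
    fake = G(masked_img, mask)
    adv = -D(fake).mean()
    rec = (torch.abs(fake - target) * mask).mean()
    return adv + l1_weight * rec
```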


2012 ◽  
Vol 224 ◽  
pp. 485-488
Author(s):  
Fei Li ◽  
Yuan Yuan Wang

Abstract: In order to solve the problem that face images in face recognition software can be easily copied, an algorithm combining image features with a digital watermark is presented in this paper. Image features of adjacent blocks are embedded into the face image as watermark information, and the original face image is not needed when recovering the watermark. Thus, the integrity of the face image can be confirmed, and the algorithm can detect whether the face image is the original one and identify whether it has been attacked with malicious intent, such as tampering, replacement, or illegal insertion. Experimental results show that the algorithm, with good invisibility and excellent robustness, has no effect on the face recognition rate, and it can locate the specific tampered region of the face image.
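
A toy sketch of the block-feature watermarking idea is given below: the mean intensity of each block's left neighbour is embedded into the block's least significant bits and later re-checked to localise tampering. The actual block feature and embedding rule used in the paper are not stated in the abstract, so this is purely illustrative.

```python
import numpy as np

def embed_block_watermark(img, block=8):
    """Fragile-watermark sketch: the mean intensity of each block's left
    neighbour is written into the block's first pixel LSBs. Grayscale uint8
    input is assumed, with dimensions divisible by `block`."""
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(block, w, block):
            feat = int(out[i:i + block, j - block:j].mean())   # neighbour feature
            bits = [(feat >> k) & 1 for k in range(8)]
            blk = out[i:i + block, j:j + block].ravel()
            blk[:8] = (blk[:8] & 0xFE) | bits                  # write LSBs
            out[i:i + block, j:j + block] = blk.reshape(block, block)
    return out

def block_is_authentic(img, i, j, block=8):
    """Recompute the neighbour feature and compare it with the embedded bits;
    a mismatch localises tampering to block (i, j)."""
    feat = int(img[i:i + block, j - block:j].mean())
    blk = img[i:i + block, j:j + block].ravel()
    embedded = sum((int(blk[k]) & 1) << k for k in range(8))
    return feat == embedded
```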


Author(s):  
Kalyan Chakravarthi. M

Abstract: Face recognition has become a popular and significant technology in recent years, but face alterations and the presence of different masks make it much more challenging. In the real world, when a person is uncooperative with the system, such as in video surveillance, masking is an even more common scenario, and current face recognition performance degrades for such masked faces. Still, the difficulties created by masks are usually disregarded. Face recognition is a promising area of applied computer vision, used to recognize a face or identify a person automatically from given images. In daily-life activities such as passport checking, smart doors, access control, voter verification, criminal investigation, and many other applications, face recognition is widely used to authenticate a person correctly and automatically. It has gained much attention as a unique, reliable biometric recognition technology, making it more popular than other biometric techniques such as passwords, PINs, and fingerprints. Many governments across the world are also interested in face recognition systems to secure public places such as parks, airports, and bus and railway stations. Face recognition is a well-studied real-life problem, and excellent progress has been made in face recognition technology over the last years. The primary concern of this work is facial masks, and especially enhancing the recognition accuracy of different masked faces. A feasible approach is proposed that consists of first detecting the facial regions; the occluded face detection problem is approached using a Cascaded Convolutional Neural Network (CNN). Its performance is also evaluated with excessive facial masks, with attractive outcomes. Finally, a comparative study is made for better understanding.
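
For the detection stage, a cascaded CNN face detector can be used roughly as sketched below; MTCNN from the facenet-pytorch package is shown only as a familiar stand-in, not as the paper's own cascade, and the input file name is hypothetical.

```python
from facenet_pytorch import MTCNN   # cascaded P-Net / R-Net / O-Net detector
from PIL import Image

# keep_all=True returns every detected face, masked or not
detector = MTCNN(keep_all=True)

img = Image.open("masked_face.jpg")        # hypothetical input image
boxes, probs = detector.detect(img)        # bounding boxes + confidences

if boxes is not None:
    for box, p in zip(boxes, probs):
        print(f"face at {box.round()} with confidence {p:.2f}")
```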


Author(s):  
I Nyoman Gede Arya Astawa ◽  
I Ketut Gede Darma Putra ◽  
I Made Sudarma ◽  
Rukmi Sari Hartati

One of the factors that affects a detection or face recognition system is lighting. Image color processing can help the face recognition system under poor lighting conditions. In this study, homomorphic filtering and intensity normalization methods are used to help improve the accuracy of face image detection. The experimental results show that non-uniform illumination of the face image can be made uniform using the intensity normalization method, with an average Peak Signal-to-Noise Ratio (PSNR) of 22.05314 over the whole experiment and an average Absolute Mean Brightness Error (AMBE) of 6.147787. The results show that homomorphic filtering and intensity normalization methods can be used to improve the detection accuracy of a face image.
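
A compact sketch of homomorphic filtering, a simple min-max intensity normalization, and the two reported metrics (PSNR and AMBE) is given below; the filter parameters and the normalization variant are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Homomorphic filtering sketch: suppress low-frequency illumination and
    emphasise high-frequency reflectance. Parameter values are illustrative."""
    f = np.log(img.astype(float) + 1.0)
    F = np.fft.fftshift(np.fft.fft2(f))
    h, w = img.shape
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * (u**2 + v**2) / d0**2)) + gamma_l
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.exp(g) - 1.0

def normalize_intensity(img, low=0.0, high=255.0):
    """Simple min-max intensity normalization (one common choice)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-8) * (high - low) + low

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio between two images, in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def ambe(a, b):
    """Absolute Mean Brightness Error between original and enhanced image."""
    return abs(float(a.mean()) - float(b.mean()))
```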


Author(s):  
Ting Shan ◽  
Abbas Bigdeli ◽  
Brian C. Lovell ◽  
Shaokang Chen

In this chapter, we propose a pose variability compensation technique, which synthesizes realistic frontal face images from non-frontal views. It is based on modeling the face via active appearance models and estimating the pose through a correlation model. The proposed technique is coupled with adaptive principal component analysis (APCA), which was previously shown to perform well in the presence of both lighting and expression variations. The proposed recognition techniques, though advanced, are not computationally intensive, so they are well suited to the embedded system environment. Indeed, the authors have implemented an early prototype of a face recognition module on a mobile camera phone so that the camera can be used to identify the person holding the phone.


2017 ◽  
Vol 17 (01) ◽  
pp. 1750005 ◽  
Author(s):  
Aruna Bhat

A methodology for makeup-invariant robust face recognition based on Features from Accelerated Segment Test (FAST) and eigenvectors is proposed. Makeup and cosmetic changes to the face have long been a cause of security breaches. It is not only difficult for human eyes to catch an imposter, but an equally daunting task for a face recognition system to correctly identify an individual owing to changes brought about in the face by makeup. As a crucial pre-processing step, the face is first divided into segments centered on the eyes, nose, lips, and cheeks. The FAST algorithm is then applied over the face images, and the features thus derived act as the fiducial points for that face. Thereafter, principal component analysis is applied over the set of fiducial points in each segment of every face image in the data sets in order to compute the eigenvectors and eigenvalues. The resultant principal component, which is the eigenvector with the highest eigenvalue, yields the direction of the features in that segment. The principal components obtained from the FAST fiducial points in each segment of the test and training data are compared in order to find the best match or no match.
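
The per-segment step can be sketched as follows with OpenCV's FAST detector and a small PCA over the detected fiducial points; the FAST threshold and the cosine-similarity matching rule are assumptions made for illustration.

```python
import cv2
import numpy as np

def segment_principal_direction(gray_segment, threshold=25):
    """FAST keypoints in one facial segment (uint8 grayscale crop), then PCA
    over their coordinates; the leading eigenvector gives that segment's
    feature direction (a sketch)."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold)
    kps = fast.detect(gray_segment, None)
    if len(kps) < 2:
        return None
    pts = np.array([kp.pt for kp in kps])        # fiducial points (x, y)
    pts -= pts.mean(axis=0)                      # centre before PCA
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    return eigvecs[:, -1]                        # eigenvector of largest eigenvalue

def match_score(test_dirs, gallery_dirs):
    """Compare per-segment principal directions via absolute cosine similarity
    (the exact matching rule is not detailed in the abstract)."""
    sims = [abs(np.dot(t, g)) for t, g in zip(test_dirs, gallery_dirs)
            if t is not None and g is not None]
    return np.mean(sims) if sims else 0.0
```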

