Histogram of Oriented Directional Features for Robust Face Recognition

Author(s):  
Almabrok Essa ◽  
Vijayan K. Asari

This paper presents an illumination-invariant face recognition system that uses directional features and modular histograms. The proposed Histogram of Oriented Directional Features (HODF) produces multi-region histograms for each face image, then concatenates these histograms to form the final feature vector. This feature vector is used to recognize the face image with a k-nearest-neighbors (KNN) classifier. The edge responses and the relationships among pixels play a central role in improving face recognition accuracy. Therefore, this work evaluates the effect on face recognition accuracy of different directional masks for detecting the edge responses, such as Prewitt kernels, Kirsch masks, Sobel kernels, and Gaussian derivative masks. The performance of the proposed HODF algorithm is evaluated on several publicly available databases, and promising recognition rates are observed.
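The pipeline described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the four Prewitt-style compass kernels, the 4x4 region grid, and the Euclidean KNN distance are illustrative assumptions (the paper also evaluates Kirsch, Sobel, and Gaussian derivative masks).

```python
import numpy as np

# Four 3x3 Prewitt-style directional kernels (0, 45, 90, 135 degrees); illustrative choices.
KERNELS = [
    np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float),   # horizontal edge
    np.array([[-1, -1, 0], [-1, 0, 1], [0, 1, 1]], float),   # 45-degree edge
    np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),   # vertical edge
    np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]], float),   # 135-degree edge
]

def filter3x3(img, k):
    """Correlate img with a 3x3 kernel using edge padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def hodf(img, grid=(4, 4)):
    """Dominant-direction code per pixel, then concatenated per-region histograms."""
    responses = np.stack([np.abs(filter3x3(img, k)) for k in KERNELS])
    codes = responses.argmax(axis=0)          # index of the strongest directional response
    gh, gw = grid
    hs, ws = img.shape[0] // gh, img.shape[1] // gw
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = codes[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
            hist, _ = np.histogram(block, bins=len(KERNELS), range=(0, len(KERNELS)))
            feats.append(hist / max(block.size, 1))   # normalized region histogram
    return np.concatenate(feats)

def knn_predict(train_feats, train_labels, query, k=1):
    """k-nearest-neighbors by Euclidean distance with majority vote."""
    d = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return vals[counts.argmax()]
```

With a 4x4 grid and four orientations, each face yields a 64-dimensional vector of region-wise orientation histograms.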

2021 ◽  
Author(s):  
Wei-Jong Yang ◽  
Cheng-Yu Lo ◽  
Pau-Choo Chung ◽  
Jar Ferr Yang

Face images with partially occluded areas pose serious problems for face recognition systems. Linear regression classification (LRC) is a simple and powerful approach for face recognition, but it cannot perform well under occlusion. By segmenting the face image into small subfaces, called modules, the LRC system can achieve some improvement by selecting the best non-occluded module for face classification. However, recognition performance still deteriorates because only a single module, a small portion of the face image, is used. Performance can be further enhanced if the occluded modules are properly identified and as many non-occluded modules as possible are utilized. In this chapter, we first analyze the texture histogram (TH) of each module and then use the TH difference to measure its occlusion tendency. Based on the TH difference, we propose a general concept of weighted module face recognition to solve the occlusion problem. The resulting weighted module linear regression classification method, called WMLRC-TH, is proposed for partially occluded face recognition. To evaluate its performance, the proposed WMLRC-TH method is tested on the AR and FRGC2.0 face databases with several synthesized occlusions and compared to well-known face recognition methods and other robust face recognition methods. Experimental results show that the proposed method achieves the best performance for recognizing occluded faces. Due to its simplicity in both the training and testing phases, a face recognition system based on the WMLRC-TH method has been realized on Android phones for fast recognition of occluded faces.
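The weighted-module idea can be sketched as follows. This is a simplified NumPy illustration under stated assumptions, not the chapter's exact method: the histogram bin count, the particular weight formula (reciprocal of the L1 histogram difference), and the reference modules used for weighting are all hypothetical choices.

```python
import numpy as np

def lrc_residual(y, X):
    """LRC residual: distance from query vector y to the span of class samples X
    (columns of X are vectorized training modules of one class)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.linalg.norm(y - X @ beta)

def th_weight(module, reference, bins=8):
    """Occlusion weight from texture-histogram difference:
    a small difference to the reference gives a weight near 1."""
    h1, _ = np.histogram(module, bins=bins, range=(0, 1), density=True)
    h2, _ = np.histogram(reference, bins=bins, range=(0, 1), density=True)
    return 1.0 / (1.0 + np.abs(h1 - h2).sum())

def wmlrc(query_modules, class_modules, ref_modules):
    """Weighted-module LRC: sum TH-weighted residuals over all modules per class.
    query_modules: list of vectors; class_modules: {label: list of matrices};
    ref_modules: list of non-occluded reference vectors for weighting."""
    weights = [th_weight(q, r) for q, r in zip(query_modules, ref_modules)]
    scores = {}
    for label, mats in class_modules.items():
        scores[label] = sum(w * lrc_residual(q, X)
                            for w, q, X in zip(weights, query_modules, mats))
    return min(scores, key=scores.get)
```

An occluded module's texture histogram deviates strongly from the reference, so its weight shrinks and the non-occluded modules dominate the class decision.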


2012 ◽  
Vol 224 ◽  
pp. 485-488
Author(s):  
Fei Li ◽  
Yuan Yuan Wang

Abstract: To address the problem that images in face recognition software are easily copied, this paper presents an algorithm combining image features with digital watermarking. Image features of adjacent blocks are embedded into the face image as watermark information, and the original face images are not needed when recovering the watermark. Face image integrity can therefore be confirmed, and the algorithm can detect whether a face image is the original and identify whether it has been maliciously attacked, such as by tampering, replacement, or illegal addition. Experimental results show that the algorithm has good invisibility and excellent robustness, has no impact on the face recognition rate, and can locate the specific tampered region of a face image.
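The block-feature watermarking idea can be illustrated with a simplified fragile-watermark sketch. This is only in the spirit of the abstract, not the paper's algorithm: the 8x8 block size, the one-bit-per-pixel feature, and the row-major neighbor mapping are all hypothetical choices for illustration.

```python
import numpy as np

BLOCK = 8

def block_feature(block):
    """A 1-bit-per-pixel feature: pixel above the block mean, computed on the
    7 most-significant bits so LSB embedding cannot change the feature."""
    msb = block >> 1
    return (msb > msb.mean()).astype(np.uint8)

def embed(img):
    """Embed each block's feature into the LSB plane of the next block
    (row-major, wrapping), so tampering a block breaks the mark its
    neighbor stores for it. No original image is needed for verification."""
    h, w = img.shape
    out = (img & 0xFE).astype(np.uint8)          # clear all LSBs
    blocks = [(i, j) for i in range(0, h, BLOCK) for j in range(0, w, BLOCK)]
    for k, (i, j) in enumerate(blocks):
        ni, nj = blocks[(k + 1) % len(blocks)]
        feat = block_feature(img[i:i+BLOCK, j:j+BLOCK])
        out[ni:ni+BLOCK, nj:nj+BLOCK] |= feat
    return out

def verify(img):
    """Return top-left coordinates of blocks whose recomputed feature
    mismatches the copy stored in their neighbor (i.e. tampered blocks)."""
    h, w = img.shape
    blocks = [(i, j) for i in range(0, h, BLOCK) for j in range(0, w, BLOCK)]
    bad = []
    for k, (i, j) in enumerate(blocks):
        ni, nj = blocks[(k + 1) % len(blocks)]
        feat = block_feature(img[i:i+BLOCK, j:j+BLOCK])
        stored = img[ni:ni+BLOCK, nj:nj+BLOCK] & 1
        if not np.array_equal(feat, stored):
            bad.append((i, j))
    return bad
```

Because the feature is derived only from the 7 most-significant bits, embedding is invisible to the feature itself, and verification is blind: the watermarked image alone suffices to localize tampering.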


Author(s):  
Sangamesh Hosgurmath ◽  
Viswanatha Vanjre Mallappa ◽  
Nagaraj B. Patil ◽  
Vishwanath Petli

Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, human face recognition remains a major problem for machine learning and deep learning techniques, since input images vary with pose, lighting conditions, expression, and age, which degrades recognition accuracy. In the present research, the resolution of image patches is reduced by the max pooling layer of a convolutional neural network (CNN), which also makes the model more robust than a traditional feature extraction technique called the local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the CNN-based optimization in LCDRC, the distance between classes is maximized while the distance between features within a class is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, whereas traditional LCDRC achieved 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e., 80% training and 20% testing data).
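The max-pooling step that reduces patch resolution can be sketched in a few lines of NumPy. This shows only the generic pooling operation, not the paper's CNN architecture or the LCDRC stage; window size and stride of 2 are common defaults, assumed here for illustration.

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Reduce spatial resolution by taking the maximum over size x size windows.
    Small translations of a strong response inside a window leave the output
    unchanged, which is what makes pooled features more robust."""
    h, w = x.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.empty((oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out
```

A 2x2 pool with stride 2 halves each spatial dimension, quartering the number of feature values passed to the classifier.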


Author(s):  
I Nyoman Gede Arya Astawa ◽  
I Ketut Gede Darma Putra ◽  
I Made Sudarma ◽  
Rukmi Sari Hartati

One of the factors that affects a detection or face recognition system is lighting. Image color processing can help the face recognition system under poor lighting conditions. In this study, homomorphic filtering and intensity normalization methods are used to help improve the accuracy of face image detection. The experimental results show that non-uniform illumination of face images can be made uniform using the intensity normalization method, with an average Peak Signal to Noise Ratio (PSNR) of 22.05314 and an average Absolute Mean Brightness Error (AMBE) of 6.147787 over the whole experiment. The results show that homomorphic filtering and intensity normalization can be used to improve the detection accuracy of a face image.
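A homomorphic filter and the two quality metrics can be sketched as follows. This is a generic textbook formulation, not the study's implementation; the Gaussian high-emphasis transfer function and its cutoff and gain parameters are illustrative assumptions.

```python
import numpy as np

def homomorphic_filter(img, cutoff=10.0, low_gain=0.5, high_gain=1.5):
    """Homomorphic filtering: log -> FFT -> Gaussian high-emphasis -> IFFT -> exp.
    Attenuates the slowly varying illumination component and boosts reflectance."""
    logi = np.log1p(img.astype(float))
    F = np.fft.fftshift(np.fft.fft2(logi))
    h, w = img.shape
    y, x = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    d2 = x * x + y * y
    H = (high_gain - low_gain) * (1 - np.exp(-d2 / (2 * cutoff**2))) + low_gain
    out = np.expm1(np.real(np.fft.ifft2(np.fft.ifftshift(F * H))))
    return np.clip(out, 0, 255)

def psnr(a, b):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 20 * np.log10(255.0) - 10 * np.log10(mse)

def ambe(a, b):
    """Absolute Mean Brightness Error: |mean(a) - mean(b)|."""
    return abs(a.astype(float).mean() - b.astype(float).mean())
```

PSNR measures overall fidelity to the reference image, while AMBE isolates the global brightness shift introduced by the enhancement, which is why the study reports both.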


Author(s):  
Ting Shan ◽  
Abbas Bigdeli ◽  
Brian C. Lovell ◽  
Shaokang Chen

In this chapter, we propose a pose variability compensation technique, which synthesizes realistic frontal face images from nonfrontal views. It is based on modeling the face via active appearance models and estimating the pose through a correlation model. The proposed technique is coupled with adaptive principal component analysis (APCA), which was previously shown to perform well in the presence of both lighting and expression variations. The proposed recognition techniques, though advanced, are not computationally intensive, so they are well suited to the embedded system environment. Indeed, the authors have implemented an early prototype of a face recognition module on a mobile camera phone, so the camera can be used to identify the person holding the phone.


Author(s):  
Tang-Tang Yi

To address the low recognition accuracy of 3D face images collected by traditional sensors, a face recognition algorithm for 3D point clouds collected by mixed image sensors is proposed. The algorithm first uses the 3D wheelbase to expand the edge of the face image. Based on the 3D wheelbase, noise in the extended image is detected, and median filtering is used to eliminate it. Secondly, the priority of the boundary pixels for recognizing the face image in the denoised image is determined, and key parts such as the illuminance line are analyzed, completing the recognition of the 3D point cloud face image. Experiments show that the proposed algorithm improves the recognition accuracy of 3D face images, its recognition time is about four times lower than that of the traditional algorithm, and its recognition efficiency is high.
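The median-filtering denoising step mentioned above is standard and can be sketched briefly; this is the generic 3x3 filter on a depth or intensity image, not the paper's wheelbase-guided variant.

```python
import numpy as np

def median_filter3x3(img):
    """3x3 median filter: stack the nine shifted copies of the image
    (edge padding) and take the per-pixel median, which removes
    isolated impulse (salt-and-pepper) noise."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)
```

Unlike mean filtering, the median discards outlier values entirely, so isolated noisy depth samples vanish while step edges are preserved.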


2017 ◽  
Vol 17 (01) ◽  
pp. 1750005 ◽  
Author(s):  
Aruna Bhat

A methodology for makeup-invariant robust face recognition based on features from accelerated segment test (FAST) and eigenvectors is proposed. Makeup and cosmetic changes to the face have long been a major cause of security breaches. It is not only difficult for human eyes to catch an imposter, but equally daunting for a face recognition system to correctly identify an individual owing to the changes that makeup brings about in a face. As a crucial pre-processing step, the face is first divided into various segments centered on the eyes, nose, lips, and cheeks. The FAST algorithm is then applied over the face images, and the features thus derived act as the fiducial points for that face. Thereafter, principal component analysis is applied over the set of fiducial points in each segment of every face image in the data sets to compute the eigenvectors and eigenvalues. The resulting principal component, the eigenvector with the highest eigenvalue, yields the direction of the features in that segment. The principal components thus obtained from the FAST fiducial points in each segment of the test and training data are compared to find the best match or no match.
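The per-segment PCA step above can be sketched directly: given the 2-D fiducial points of one segment, the principal component is the dominant eigenvector of their covariance matrix. This is a generic sketch; the FAST detection itself and the matching threshold are outside its scope, and the sign-invariant similarity used here is an assumption.

```python
import numpy as np

def principal_direction(points):
    """Eigenvector of the covariance matrix with the largest eigenvalue:
    the dominant direction of a set of 2-D fiducial points."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / max(len(pts) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    v = vecs[:, -1]                      # eigenvector of the largest eigenvalue
    return v / np.linalg.norm(v)

def match_score(dir_a, dir_b):
    """Absolute cosine similarity between two segment directions;
    sign-invariant, since an eigenvector's sign is arbitrary."""
    return abs(float(np.dot(dir_a, dir_b)))
```

Comparing these unit directions segment by segment between a test face and each training face yields the best match, or no match if every score falls below a chosen threshold.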


2019 ◽  
Vol 8 (3) ◽  
pp. 33
Author(s):  
Herman Kh. Omar ◽  
Nada E. Tawfiq

In recent times, biometrics has taken a wide role in image processing. Face recognition is basically the task of recognizing a person based on a facial image. It has become very popular in the last two decades, mainly because of newly developed methods and the high quality of current visual instruments. There are different types of face recognition algorithms, and each method has a different approach to extracting image features and performing the matching with the input image. In this paper, Local Binary Patterns (LBP) are used: a particular case of the Texture Spectrum model and a powerful feature for texture classification. The face recognition system recognizes faces acquired from a given database via two phases. In the feature extraction phase, the most useful and unique features of the face image are extracted. In the classification phase, the face image is compared with the images from the database. The proposed face recognition algorithm adopts LBP features, which encode local texture information, with default values: histogram equalization is applied, the image is resized to 80x60 and divided into five blocks, and every LBP feature is saved as a vector table. Matlab R2019a was used to build the face recognition system. The results obtained are accurate, with an overall recognition rate of 98.8% (500 face images).
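Although the paper uses Matlab, the LBP feature extraction it describes can be sketched in NumPy. This shows the basic 3x3 LBP operator and the five-block histogram concatenation; the neighbor ordering, the `>=` comparison convention, and the horizontal block split are common defaults assumed here.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: each of the 8 neighbors contributes one bit,
    set when the neighbor >= the center pixel (interior pixels only)."""
    c = img[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx].astype(int)
        code |= (n >= c).astype(int) << bit
    return code

def lbp_features(img, n_blocks=5):
    """Divide the LBP code image into n_blocks horizontal blocks and
    concatenate their 256-bin histograms (the paper uses five blocks
    on 80x60 images)."""
    codes = lbp_image(img)
    rows = np.array_split(codes, n_blocks, axis=0)
    hists = [np.histogram(b, bins=256, range=(0, 256))[0] for b in rows]
    return np.concatenate(hists)
```

With five blocks and 256 LBP codes, each face becomes a 1280-dimensional histogram vector that can be compared against the database in the classification phase.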


Author(s):  
Abdul Quyoom

Face recognition is a hard and special case of computer vision and pattern recognition. It is a challenging problem due to the many kinds of variations in face images. This paper proposes a robust face recognition system in which stepwise linear discriminant analysis (SWLDA) is used for feature extraction and a learning vector quantization (LVQ) classifier is used for recognition. The main focus of SWLDA is to select localized features from the face that increase between-class variance and reduce within-class variance among different expression classes, using the F-test value to analyze the results. In recognition, the face is first detected using the Canny edge detection method; after face detection, SWLDA is employed to extract the face features, and finally LVQ is applied for face recognition. To achieve optimum results and increase the robustness of the proposed system, experiments are performed on various samples of face images with different poses and facial expressions. To validate the system, we use two famous datasets, the Yale and ORL face databases.
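The LVQ classification stage follows the standard LVQ1 update rule, which can be sketched as follows. This is a generic NumPy sketch, not the paper's implementation: the prototype initialization, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: for each sample, move the nearest prototype toward it when
    their labels agree, and away from it when they disagree."""
    P = np.array(prototypes, float)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            k = int(np.argmin(np.linalg.norm(P - xi, axis=1)))
            if proto_labels[k] == yi:
                P[k] += lr * (xi - P[k])   # attract matching prototype
            else:
                P[k] -= lr * (xi - P[k])   # repel mismatching prototype
    return P

def lvq_predict(x, P, proto_labels):
    """Classify x by the label of its nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(P - x, axis=1)))]
```

In the proposed system the inputs X would be the SWLDA feature vectors rather than raw pixels; here they are stand-in 2-D points.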


Author(s):  
G. A. KHUWAJA ◽  
M. S. LAGHARI

The integration of multiple classifiers promises higher classification accuracy and robustness than can be obtained with a single classifier. We address two problems: (a) automatic recognition of human faces using a novel fusion approach based on an adaptive LVQ network architecture, and (b) improving face recognition accuracy up to 100% while keeping the learning time per face image constant, which is a scalability issue. The learning time per face image of the recognition system remains constant irrespective of the data size. The integration of the system incorporates the "divide and conquer" modularity principle: divide the learning data into small modules, train individual modules separately using a compact LVQ model structure that still encompasses all the variance, and fuse the trained modules to achieve a recognition rate of nearly 100%. The concept of Merged Classes (MCs) is introduced to enhance the accuracy rate. The proposed integrated architecture has shown its feasibility on a collection of 1130 face images of 158 subjects from three standard databases: ORL, PICS, and KU. Empirical results yield an accuracy rate of 100% on the face recognition task for 40 subjects at 0.056 seconds per image. Thus, the system has shown potential for adoption in real-time application domains.

