Transform Based Feature Extraction and Dimensionality Reduction Techniques

Author(s):  
Dattatray V. Jadhav ◽  
Raghunath S. Holambe

Variations in illumination, expression, viewpoint, and in-plane rotation present challenges to face recognition. A low-dimensional feature representation with enhanced discrimination power is of paramount importance to a face recognition system. This chapter presents transform-based techniques for extracting efficient and effective features to address some of these challenges. The techniques are based on combinations of the Radon transform, the Discrete Cosine Transform (DCT), and the Discrete Wavelet Transform (DWT). The property of the Radon transform of enhancing the low-frequency components, which are useful for face recognition, is exploited to derive effective facial features. A comparative study of the various transform-based techniques under different conditions, such as varying illumination, changing facial expressions, and in-plane rotation, is presented, along with experimental results on the FERET, ORL, and Yale databases.
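A minimal sketch (not the authors' code) of how a Radon projection stage can be combined with DCT and DWT feature extraction, assuming a square grayscale face crop `img` and using scikit-image, SciPy, and PyWavelets; the angle count, wavelet, and size of the retained DCT block are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon
from scipy.fftpack import dct
import pywt

def radon_dct_dwt_features(img, n_angles=180):
    # Radon projections over 0..180 degrees; as a line integral, the Radon
    # transform emphasises the low-frequency content of the face image.
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(img.astype(float), theta=theta, circle=False)

    # Path 1: 2-D DCT of the Radon space, keeping a small low-frequency block.
    dct_space = dct(dct(sinogram, axis=0, norm='ortho'), axis=1, norm='ortho')
    dct_feat = dct_space[:8, :8].ravel()

    # Path 2: single-level 2-D DWT of the Radon space; keep the approximation band.
    cA, (cH, cV, cD) = pywt.dwt2(sinogram, 'db4')
    dwt_feat = cA.ravel()

    return np.concatenate([dct_feat, dwt_feat])
```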

2020 ◽  
Author(s):  
Bilal Salih Abed Alhayani ◽  
Milind Rane

A wide variety of systems require reliable person recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that only a legitimate user, and no one else, accesses the rendered services. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. The face can be used as a biometric for person verification. The face is a complex multidimensional structure and requires good computing techniques for recognition. We treat face recognition as a two-dimensional recognition problem. The well-known technique of Principal Component Analysis (PCA) is used for face recognition. Face images are projected onto a face space that best encodes the variation among known face images. The face space is defined by the eigenfaces, which are eigenvectors of the set of faces and which may not correspond to general facial features such as eyes, nose, and lips. The system operates by projecting a pre-extracted face image onto the face space that represents significant variations among known face images. The variable-reducing property of PCA accounts for the face space being smaller than the training set of faces. In addition, a multiresolution, feature-based pattern recognition system for face recognition is built on the combination of the Radon and wavelet transforms: the Radon transform is invariant to rotation, and the wavelet transform provides multiple resolutions, which makes the technique robust for face recognition. The technique computes Radon projections in different orientations and captures the directional features of face images. The wavelet transform applied in Radon space then provides multiresolution features of the facial images. Being a line integral, the Radon transform enhances the low-frequency components that are useful in face recognition.
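A minimal eigenface sketch of the PCA projection described above, assuming the training images are flattened into the rows of a matrix `X` (one face per row); this is the standard Turk-Pentland construction, not the authors' exact implementation.

```python
import numpy as np

def train_eigenfaces(X, n_components=50):
    # Mean-centre the training faces.
    mean_face = X.mean(axis=0)
    A = X - mean_face

    # Eigenvectors of the small L = A A^T matrix (Turk-Pentland trick),
    # then map back to image space to obtain the eigenfaces.
    L = A @ A.T
    eigvals, eigvecs = np.linalg.eigh(L)
    order = np.argsort(eigvals)[::-1][:n_components]
    eigenfaces = A.T @ eigvecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    # Coordinates of a (flattened) face in the eigenface basis, i.e. the face space.
    return eigenfaces.T @ (face - mean_face)
```

Recognition then amounts to comparing the projected coordinates of a probe face against those of the gallery faces, typically with a Euclidean distance.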


2018 ◽  
Vol 26 (10) ◽  
pp. 131-139 ◽  
Author(s):  
Manar Abdulkaream Al-Abaji ◽  
Meaad Mohammed Salih

The process of data dimension reduction plays an important role in any face recognition system, because much of the data is repetitive and irrelevant, which causes problems in data mining and machine learning applications. The main purpose is to improve recognition performance by eliminating repetitive features. In this research, several dimensionality reduction techniques were used, namely Principal Component Analysis, the Gray-Level Co-occurrence Matrix, and the Discrete Wavelet Transform, to extract the most important features from images of persons. Different numbers of training and testing images were used to compare the performance of each of these techniques in the recognition process. The Euclidean distance measure was used to obtain the results.
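A minimal sketch of GLCM and DWT feature extraction with Euclidean-distance matching, assuming 8-bit grayscale face images; the distances, angles, texture properties, and wavelet are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img):
    # Co-occurrence matrix at offset 1 in two directions, then summary statistics.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def dwt_features(img):
    # Single-level DWT approximation band as a reduced-dimension descriptor.
    cA, _ = pywt.dwt2(img.astype(float), 'haar')
    return cA.ravel()

def nearest_match(query_feat, gallery_feats):
    # Euclidean distance to every gallery feature vector; the smallest wins.
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return int(np.argmin(dists))
```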


Author(s):  
ZHAOKUI LI ◽  
LIXIN DING ◽  
YAN WANG ◽  
JINRONG HE

This paper proposes a simple yet very powerful local face representation, called the Gradient Orientations and Euler Mapping (GOEM). GOEM consists of two stages: gradient orientations and Euler mapping. In the first stage, we calculate the gradient orientations around a central pixel and obtain the corresponding orientation representations by applying convolution operators. These representations exhibit spatial locality and orientation properties. To encompass different spatial localities and orientations, we concatenate all of these representations into a single orientation feature vector. In the second stage, we define an explicit Euler mapping which maps the space of the concatenated orientations into a complex space. For a mapped image, we find that the imaginary part and the real part characterize the high-frequency and the low-frequency components, respectively. To encompass different frequencies, we concatenate the imaginary and real parts into a single mapping feature vector. For a given image, the two stages construct a GOEM image and an augmented feature vector which resides in a space of very high dimensionality. To derive a low-dimensional feature vector, we present a class of GOEM-based kernel subspace learning methods for face recognition. These methods, which are robust to changes in occlusion and illumination, apply a kernel subspace learning model with the explicit Euler mapping to the augmented feature vector derived from the GOEM representation of face images. Experimental results show that our methods significantly outperform popular methods and achieve state-of-the-art performance on difficult problems such as illumination- and occlusion-robust face recognition.
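A rough sketch of the two GOEM stages as described in the abstract: gradient orientations computed via convolution, followed by an explicit Euler mapping into a complex space whose real and imaginary parts carry the low- and high-frequency information. The Sobel filters, the scaling parameter `alpha`, and the exact form of the mapping are assumptions for illustration, not the authors' definition.

```python
import numpy as np
from scipy.ndimage import sobel

def goem_feature(img, alpha=1.0):
    # Stage 1: gradient orientations from horizontal/vertical convolutions.
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    theta = np.arctan2(gy, gx)  # orientation at each pixel, in (-pi, pi]

    # Stage 2: explicit Euler mapping of the orientations into a complex space.
    z = np.exp(1j * alpha * theta) / np.sqrt(2.0)

    # Concatenate the real (low-frequency) and imaginary (high-frequency)
    # parts into one augmented feature vector.
    return np.concatenate([z.real.ravel(), z.imag.ravel()])
```

The augmented vector would then be fed to a kernel subspace learning method (e.g. kernel PCA or KDA) to obtain the final low-dimensional representation.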


Author(s):  
HEYDI MENDEZ-VÁZQUEZ ◽  
JOSEF KITTLER ◽  
CHI HO CHAN ◽  
EDEL GARCÍA-REYES

Variation in illumination is one of the major limiting factors of face recognition system performance. The effect of changes in the incident light on face images is analyzed, as well as its influence on the low-frequency components of the image. Starting from this analysis, a new photometric normalization method for illumination-invariant face recognition is presented. Low-frequency Discrete Cosine Transform coefficients in the logarithmic domain are used locally to reconstruct the slowly varying component of the face image that is caused by illumination. After smoothing, this component is subtracted from the original logarithmic image to compensate for illumination variations. Compared with other preprocessing algorithms, our method achieved very good performance, with a total error rate very similar to that of the best-performing state-of-the-art algorithm. An in-depth analysis of the two preprocessing methods revealed notable differences in their behavior, which are exploited in a multiple classifier fusion framework to achieve further performance improvement. The superiority of the proposal is demonstrated in both face verification and identification experiments.
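A minimal sketch of the log-domain DCT normalization idea from the abstract: reconstruct a slowly varying illumination component from low-frequency DCT coefficients and subtract it from the logarithm of the image. The abstract applies the coefficients locally; this global version, together with the block size, the number of kept coefficients, and the Gaussian smoothing, is an illustrative simplification.

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def normalize_illumination(img, n_low=4, sigma=2.0):
    log_img = np.log1p(img.astype(float))

    # 2-D DCT of the log image; keep only the lowest-frequency coefficients.
    coeffs = dct(dct(log_img, axis=0, norm='ortho'), axis=1, norm='ortho')
    low = np.zeros_like(coeffs)
    low[:n_low, :n_low] = coeffs[:n_low, :n_low]

    # Reconstruct and smooth the slowly varying illumination component.
    illum = idct(idct(low, axis=0, norm='ortho'), axis=1, norm='ortho')
    illum = gaussian_filter(illum, sigma)

    # Subtract it in the log domain to compensate for illumination changes.
    return log_img - illum
```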


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 157 ◽  
Author(s):  
Saad Allagwail ◽  
Osman Gedik ◽  
Javad Rahebi

In practical face recognition applications, only a limited number of training images of a face may be available. However, it is known that, in general, increasing the number of training images also increases the performance of face recognition systems. In this case, a new set of training samples can be generated from the original samples using the symmetry property of the face. Although many face recognition methods have been proposed in the literature, building a robust face recognition system remains a challenging task. In this paper, recognition performance is improved by using the symmetry property of the face, and the effects of illumination and pose variations are reduced. A new approach for face recognition using symmetry is presented, based on the Two-Dimensional Discrete Wavelet Transform and the Local Binary Pattern. The method has three main stages: preprocessing, feature extraction, and classification. A single-level Two-Dimensional Discrete Wavelet Transform and a Gaussian low-pass filter were used, separately, for preprocessing. The Local Binary Pattern, the Gray-Level Co-Occurrence Matrix, and the Gabor filter were used for feature extraction, and the Euclidean distance was used for classification. The proposed method was implemented and evaluated on the Olivetti Research Laboratory (ORL) and Yale datasets. This study also examined the importance of the preprocessing stage in a face recognition system. The experimental results showed that the proposed method achieved a recognition accuracy of 100% on both the ORL and Yale datasets, higher than the methods reported in the literature.
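A minimal sketch of one configuration of the pipeline described above: mirrored copies to exploit face symmetry, a single-level 2-D DWT for preprocessing, LBP histograms for features, and Euclidean-distance matching. The wavelet, LBP parameters, and histogram binning are illustrative assumptions.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def augment_with_mirror(imgs, labels):
    # Symmetry property of the face: mirrored copies enlarge the training set.
    return imgs + [np.fliplr(im) for im in imgs], labels + labels

def preprocess_dwt(img):
    # Single-level 2-D DWT; the approximation band acts as a smoothed image.
    cA, _ = pywt.dwt2(img.astype(float), 'haar')
    return cA

def lbp_histogram(img, P=8, R=1):
    # Uniform LBP codes summarised as a normalised histogram.
    lbp = local_binary_pattern(img, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def classify(query_img, gallery_imgs, gallery_labels):
    q = lbp_histogram(preprocess_dwt(query_img))
    feats = np.array([lbp_histogram(preprocess_dwt(g)) for g in gallery_imgs])
    # Euclidean distance for classification: nearest gallery feature wins.
    return gallery_labels[int(np.argmin(np.linalg.norm(feats - q, axis=1)))]
```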


2015 ◽  
Vol 2015 ◽  
pp. 1-16 ◽  
Author(s):  
Fatma Zohra Chelali ◽  
Amar Djeradi

Face recognition has received great attention from researchers in computer vision, pattern recognition, and human-computer interaction in recent years. Designing a face recognition system is a complex task due to the wide variety of illumination, pose, and facial expression conditions. Many approaches have been developed to find an optimal space in which face feature descriptors are well distinguished and separated. Face representation using Gabor features and the discrete wavelet transform has attracted considerable attention in computer vision and image processing. This paper describes a face recognition system using artificial neural networks, namely the multilayer perceptron (MLP) and radial basis function (RBF) networks, where Gabor- and discrete-wavelet-based feature extraction methods are proposed for extracting features from facial images, using two facial databases: ORL and Computer Vision. A good recognition rate was obtained using Gabor and DWT parameterization with the MLP classifier on the Computer Vision dataset.
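A minimal sketch of DWT feature extraction feeding an MLP classifier, in the spirit of the pipeline described above; the wavelet, decomposition depth, and network size are illustrative assumptions, and a Gabor filter bank could replace the DWT step in the same way.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_feature(img):
    # Two-level DWT; the level-2 approximation band gives a compact descriptor.
    cA1, _ = pywt.dwt2(img.astype(float), 'db2')
    cA2, _ = pywt.dwt2(cA1, 'db2')
    return cA2.ravel()

def train_mlp(train_imgs, train_labels):
    X = np.array([dwt_feature(im) for im in train_imgs])
    clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000, random_state=0)
    clf.fit(X, train_labels)
    return clf
```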


Author(s):  
Mourad Moussa ◽  
Maha Hmila ◽  
Ali Douik

Face recognition is a computer vision application that uses biometric information for automatic person identification or verification from an image sequence or a video frame. In this context, the DCT is a simple technique for determining significant parameters. The main issue is the selection of the coefficients that yield the best recognition. Many techniques rely on premasking windows that discard the high and low coefficients to enhance performance; the problem, however, lies in the shape and size of the premask. To improve the discrimination ability in the discrete cosine transform domain, we use fractional coefficients of the DCT-transformed images to limit the coefficient area and obtain a better-performing system. From the selected bands, we then use discrimination power analysis to find the coefficients with the highest power to discriminate the different classes from each other. Feature selection is a key issue in any pattern recognition system; it is used to define a feature vector from among several candidates, where the features are selected according to a specified discrimination criterion. Several classifiers, such as support vector machines and random forests, are used to evaluate our approach. The proposed approach is validated on the Yale and ORL face databases. Experimental results demonstrate the effectiveness of this method in the face and facial expression recognition field.
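A minimal sketch of the fractional-coefficient plus discrimination power analysis (DPA) idea: take the low-frequency (fractional) block of the 2-D DCT, score each coefficient by a between-class to within-class variance ratio, keep the highest-scoring ones, and feed them to an SVM. The fraction, the number of retained coefficients, and this particular DPA formula are assumptions for illustration.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def fractional_dct(img, frac=0.25):
    # 2-D DCT, keeping only the upper-left (low-frequency) fractional block.
    c = dct(dct(img.astype(float), axis=0, norm='ortho'), axis=1, norm='ortho')
    h, w = int(c.shape[0] * frac), int(c.shape[1] * frac)
    return c[:h, :w].ravel()

def discrimination_power(X, y):
    # Variance of class means over mean within-class variance, per coefficient.
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.mean([X[y == c].var(axis=0) for c in classes], axis=0)
    return means.var(axis=0) / (within + 1e-9)

def train(train_imgs, y, n_keep=100):
    X = np.array([fractional_dct(im) for im in train_imgs])
    keep = np.argsort(discrimination_power(X, y))[::-1][:n_keep]
    clf = SVC(kernel='rbf').fit(X[:, keep], y)
    return clf, keep
```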


Author(s):  
LAIYUN QING ◽  
SHIGUANG SHAN ◽  
WEN GAO ◽  
BO DU

The performance of current face recognition systems suffers heavily from variations in lighting. To deal with this problem, this paper presents an illumination normalization approach that relights face images to a canonical illumination based on the harmonic images model. Benefiting from the observations that human faces share a similar shape and that the albedos of face surfaces are quasi-constant, we first estimate the nine low-frequency components of the illumination from the input facial image. The facial image is then normalized to the canonical illumination by re-rendering it using the illumination ratio image technique. For the purpose of face recognition, two kinds of canonical illumination are considered, uniform illumination and a frontal flash with ambient lights: the former encodes merely the texture information, while the latter encodes both texture and shading information. Our experiments on the CMU-PIE and Yale B face databases show that the proposed relighting normalization can significantly improve the performance of a face recognition system when the probes are collected under varying lighting conditions.
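A rough sketch of the relighting idea in the abstract: estimate the nine low-frequency (spherical-harmonic) illumination coefficients of the input face by least squares against precomputed harmonic basis images, then rescale the image by the ratio of the canonical to the estimated illumination. The basis images (derived from an average face shape and quasi-constant albedo) and the canonical coefficients are assumed to be given; this is not the authors' implementation.

```python
import numpy as np

def relight(img, basis, canonical_coeffs, eps=1e-6):
    # img: (H, W) input face; basis: (H, W, 9) harmonic basis images;
    # canonical_coeffs: (9,) lighting coefficients of the target illumination.
    H, W, _ = basis.shape
    B = basis.reshape(-1, 9)
    I = img.astype(float).ravel()

    # Least-squares estimate of the nine illumination coefficients.
    coeffs, *_ = np.linalg.lstsq(B, I, rcond=None)

    # Illumination ratio image: canonical lighting over estimated lighting.
    ratio = (B @ canonical_coeffs) / (B @ coeffs + eps)
    return (I * ratio).reshape(H, W)
```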

