UNSUPERVISED DISCOVERY OF VISUAL FACE CATEGORIES

2013 ◽  
Vol 22 (01) ◽  
pp. 1250029 ◽  
Author(s):  
SHICAI YANG ◽  
GEORGE BEBIS ◽  
MUHAMMAD HUSSAIN ◽  
GHULAM MUHAMMAD ◽  
ANWAR M. MIRZA

Human faces can be arranged into different face categories using information from common visual cues such as gender, ethnicity, and age. It has been demonstrated that using face categorization as a precursor to face recognition improves recognition rates and leads to more graceful errors. Although face categorization using common visual cues yields meaningful face categories, developing accurate and robust gender, ethnicity, and age categorizers is challenging. Moreover, relying on such cues limits the overall number of possible face categories and, in practice, yields unbalanced face categories, which can compromise recognition performance. This paper investigates ways to automatically discover a categorization of human faces from a collection of unlabeled face images without relying on predefined visual cues. Specifically, given a set of face images from a group of known individuals (i.e., the gallery set), our goal is to robustly partition the gallery set into face categories. The objective is to assign novel images of the same individuals (i.e., the query set) to the correct face category with high accuracy and robustness. To address face category discovery, we represent faces using local features and apply unsupervised learning (i.e., clustering). To categorize faces in novel images, we employ nearest-neighbor algorithms or learn the separating boundaries between face categories using supervised learning (i.e., classification). To improve face categorization robustness, we allow face categories to share local features as well as to overlap. We demonstrate the performance of the proposed approach through extensive experiments and comparisons on the FERET database.
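As an illustrative sketch only (not the authors' implementation, which uses local features and allows categories to share features and overlap), the discover-then-assign pipeline can be approximated with plain k-means over face descriptors plus nearest-center assignment of query faces; the toy `gallery` descriptors below are assumptions:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Minimal k-means: partition gallery feature vectors into k face categories."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each face to its nearest category center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster emptied
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return centers, labels

def categorize(query, centers):
    """Nearest-neighbor assignment of a novel face to a discovered category."""
    return int(np.linalg.norm(centers - query, axis=1).argmin())

# toy gallery: two well-separated blobs standing in for face descriptors
gallery = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 8)),
                     np.random.default_rng(2).normal(5, 0.1, (10, 8))])
centers, labels = kmeans(gallery, k=2)
```

A query descriptor is then routed to the category whose center it is closest to, before the per-category recognizer runs.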

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Xiaodong Li

2DPCA, one of the most important face recognition methods, is relatively sensitive to substantial variations in lighting direction, face pose, and facial expression. To improve the recognition performance of traditional 2DPCA, this paper proposes a new 2DPCA algorithm based on fuzzy theory, namely fuzzy 2DPCA (F2DPCA). In this method, the membership degree matrix of the training samples is computed by applying the fuzzy K-nearest neighbor (FKNN) algorithm and is then used to obtain the fuzzy mean of each class. The average of the fuzzy means is incorporated into the definition of the general scatter matrix, with the expectation that this improves classification. Comprehensive experiments on the ORL, YALE, and FERET face databases show that the proposed method improves classification rates and reduces sensitivity to variations between face images caused by changes in illumination, facial expression, and face pose.
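The FKNN membership computation the method builds on can be sketched as follows; a common Keller-style initialization with the 0.51/0.49 weighting is assumed here, and the paper's exact formulation may differ:

```python
import numpy as np

def fknn_memberships(X, y, k=3):
    """Fuzzy K-nearest-neighbor membership degrees (assumed Keller-style scheme).

    X: (n, d) flattened training images, y: (n,) class labels.
    Returns U of shape (n, n_classes): each sample's degree of belonging
    to every class, based on the labels of its k nearest neighbors.
    """
    classes = np.unique(y)
    n = len(X)
    U = np.zeros((n, len(classes)))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the sample itself
        nbrs = y[np.argsort(d)[:k]]         # labels of the k nearest neighbors
        for c_idx, c in enumerate(classes):
            frac = np.mean(nbrs == c)
            U[i, c_idx] = 0.51 + 0.49 * frac if y[i] == c else 0.49 * frac
    return U

def fuzzy_class_means(X, U):
    """Membership-weighted class means, the quantity fed into the fuzzy scatter matrix."""
    return (U.T @ X) / U.sum(axis=0)[:, None]
```

Samples lying near class boundaries receive partial memberships in several classes, so they pull each fuzzy mean less strongly than clean, central samples do.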


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Tai-Xiang Jiang ◽  
Ting-Zhu Huang ◽  
Xi-Le Zhao ◽  
Tian-Hui Ma

We propose a patch-based principal component analysis (PCA) method for face recognition. Many PCA-based face recognition methods exploit the correlation between pixels, columns, or rows, but local spatial information is not, or not fully, utilized. We believe that patches are more meaningful basic units for face recognition than pixels, columns, or rows, since faces are discerned by patches containing features such as eyes and noses. To calculate the correlation between patches, face images are divided into patches, and these patches are converted to column vectors that are combined into a new "image matrix." By replacing the images with this new "image matrix" in the two-dimensional PCA framework, we directly calculate the correlation of the divided patches by computing the total scatter. By optimizing the total scatter of the projected samples, we obtain the projection matrix for feature extraction. Finally, we use a nearest neighbor classifier. Extensive experiments on the ORL and FERET face databases illustrate the performance of the patch-based PCA. Our method improves accuracy over one-dimensional PCA, two-dimensional PCA, and two-directional two-dimensional PCA.
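A minimal sketch of the patch-to-"image matrix" conversion and the 2DPCA-style total-scatter projection, assuming square non-overlapping patches and image dimensions divisible by the patch size:

```python
import numpy as np

def to_patch_matrix(img, p):
    """Divide an image into non-overlapping p x p patches and stack each
    patch as a column of the new 'image matrix' (dims assumed divisible by p)."""
    h, w = img.shape
    cols = [img[i:i + p, j:j + p].reshape(-1)
            for i in range(0, h, p) for j in range(0, w, p)]
    return np.stack(cols, axis=1)             # shape (p*p, num_patches)

def patch_pca_projection(images, p, n_components):
    """2DPCA over patch matrices: the total scatter of the centered
    'image matrices' is eigendecomposed to get the projection matrix."""
    A = np.stack([to_patch_matrix(im, p) for im in images])   # (n, p*p, m)
    centered = A - A.mean(axis=0)
    # total scatter: sum over samples of (A_i - mean)^T (A_i - mean)
    S = np.einsum('nkm,nkl->ml', centered, centered)
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, ::-1][:, :n_components]    # top eigenvectors
```

Each face is then represented by `to_patch_matrix(img, p) @ W`, and a nearest neighbor classifier compares these projected features.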


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounding as makeup alters the bilateral size and symmetry of certain face components, such as eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different facial makeup on different days, owing to interpersonal contexts and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.


Author(s):  
Amal A. Moustafa ◽  
Ahmed Elnakib ◽  
Nihal F. F. Areed

This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features from unprocessed face images using transfer learning. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from his or her facial images across different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated: Correlation, Euclidean, Cosine, and Manhattan. A Manhattan-distance KNN classifier achieves the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared to state-of-the-art methods, the proposed method needs no preprocessing stages. In addition, the experiments show its advantage over other related methods.
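A toy sketch of GA-driven feature selection with a Manhattan-distance 1-NN fitness; the population size, number of generations, mutation rate, and crossover scheme below are hypothetical choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(mask, X_tr, y_tr, X_ev, y_ev):
    """Fitness: 1-NN accuracy with Manhattan distance on the selected features."""
    if not mask.any():
        return 0.0
    A, B = X_tr[:, mask], X_ev[:, mask]
    d = np.abs(B[:, None, :] - A[None, :, :]).sum(axis=2)   # Manhattan distances
    pred = y_tr[d.argmin(axis=1)]
    return float((pred == y_ev).mean())

def ga_select(X_tr, y_tr, X_ev, y_ev, pop=20, gens=15, p_mut=0.05):
    """Evolve binary feature masks: elitist selection, one-point crossover,
    bit-flip mutation. Returns the best mask found."""
    n_feat = X_tr.shape[1]
    P = rng.random((pop, n_feat)) < 0.5                     # random initial masks
    for _ in range(gens):
        fit = np.array([knn_accuracy(m, X_tr, y_tr, X_ev, y_ev) for m in P])
        P = P[fit.argsort()[::-1]]                          # sort by fitness
        kids = []
        while len(kids) < pop // 2:
            a, b = P[rng.integers(0, pop // 2, 2)]          # parents from top half
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
            child ^= rng.random(n_feat) < p_mut             # bit-flip mutation
            kids.append(child)
        P = np.vstack([P[:pop - len(kids)], kids])
    fit = np.array([knn_accuracy(m, X_tr, y_tr, X_ev, y_ev) for m in P])
    return P[fit.argmax()]
```

On real deep features the chromosome would span thousands of dimensions; the mechanics stay the same.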


Author(s):  
V. Ramya ◽  
G. Sivashankari

Face recognition from images is challenging due to the wide variability of face appearances and the complexity of the image background. This paper proposes a novel approach for recognizing human faces. Recognition is performed by comparing the characteristics of a new face to those of known individuals. The approach includes a face localization step, in which the mouth end point and eyeballs are located, and a feature extraction step, in which the distances between the eyeballs and the mouth end point are calculated. Recognition is performed by neural networks, using Back Propagation Networks (BPN) and Radial Basis Function (RBF) networks. The recognition performance of the proposed method is tabulated based on experiments performed on a number of images.
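The distance features can be computed directly from the localized landmarks. This small sketch assumes 2D landmark coordinates; including the inter-eye distance alongside the two eye-to-mouth distances is an assumption, as the abstract does not spell out the exact feature set:

```python
import numpy as np

def geometric_features(left_eye, right_eye, mouth_end):
    """Euclidean distance features from the localized landmarks:
    inter-eye distance and each eye's distance to the mouth end point."""
    le, re, m = map(np.asarray, (left_eye, right_eye, mouth_end))
    return np.array([np.linalg.norm(le - re),
                     np.linalg.norm(le - m),
                     np.linalg.norm(re - m)])

# e.g. a 3-4-5 landmark triangle
feats = geometric_features((0, 0), (3, 0), (0, 4))   # -> [3., 4., 5.]
```

These low-dimensional feature vectors are what the BPN and RBF networks would be trained on.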


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, acquired face data is often seriously distorted; many collected face images are blurred or even have missing regions. Traditional image inpainting methods are structure-based, while currently popular methods rely on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, and edge-based fuzzy inpainting achieves a better visual match. Our method dramatically boosts face recognition performance.


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Zhifei Wang ◽  
Zhenjiang Miao ◽  
Yanli Wan ◽  
Zhen Tang

Low resolution (LR) in face recognition (FR) surveillance applications causes a dimensional mismatch between an LR image and its high-resolution (HR) template. In this paper, a novel method called kernel coupled cross-regression (KCCR) is proposed to deal with this problem. Instead of processing in the original observation space directly, KCCR projects LR and HR face images into a unified nonlinear embedding feature space using kernel coupled mappings and graph embedding. Spectral regression is further employed to improve generalization performance and reduce time complexity. Meanwhile, cross-regression is developed to fully exploit the HR embedding to enrich the LR space, thereby improving recognition performance. Experiments on the FERET and CMU PIE face databases show that KCCR outperforms existing structure-based methods in terms of both recognition rate and time complexity.


Author(s):  
Yoshinori Arai ◽  
Nguyen Thi Huong Lien ◽  
Kazuma Ishigaki ◽  
Hiroyuki Satoh ◽  
...  

The Fuzzy few-Nearest Neighbor (Ff-NN) method, an extended version of the k-Nearest Neighbor (k-NN) algorithm and a case-based learning method, is proposed. Ff-NN aims to achieve stable identification performance even when the number of learning samples is as small as two. Applied to personal authentication systems such as entry/exit authorization, Ff-NN reduces the burden of creating user dictionaries. Using 26 kinds of feature data (face images and voices) from 66 test subjects, we conducted experiments on a PC to verify the feasibility of the proposed method. The forced recognition rate of conventional single-NN is 79.2% (standard deviation 2.83), and that of Ff-NN is 87.6% (SD 1.97). Recognition rates of dictionary data with 14, 17, and 26 features are 90.6%, 92.5%, and 97.5%, respectively. We collect a very small number of nonintrusive samples so that two or more features can be used to improve recognition performance. We demonstrate the applicability of this method to personal authentication systems through experiments with 66 registrants, corresponding to 30 households.


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Rong Wang

In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine both kinds of samples into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
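The mirror-face augmentation plus an MSE classifier can be sketched as follows; the ridge regularization strength and the toy data are assumptions, not the paper's configuration:

```python
import numpy as np

def augment_with_mirrors(images, labels):
    """Append the horizontally flipped (mirror) face of every training image,
    doubling the training set with virtual samples."""
    mirrored = [np.fliplr(im) for im in images]
    return images + mirrored, labels + labels

def mse_classifier(X, y, n_classes, lam=1e-3):
    """Minimum squared error classifier: ridge-regularized least squares
    mapping flattened faces to one-hot class targets."""
    T = np.eye(n_classes)[y]                   # one-hot targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

# toy 4x4 'faces', two classes
imgs = [np.random.default_rng(i).random((4, 4)) for i in range(6)]
labs = [0, 0, 0, 1, 1, 1]
aug_imgs, aug_labs = augment_with_mirrors(imgs, labs)
X = np.stack([im.ravel() for im in aug_imgs])
W = mse_classifier(X, np.array(aug_labs), n_classes=2)
pred = (X @ W).argmax(axis=1)                  # class with the largest score
```

A query face is classified by flattening it and taking the argmax of `x @ W`, exactly as in the training-set prediction above.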


2022 ◽  
Author(s):  
Nishchal J

Recent research has established the possibility of deducing soft-biometric attributes such as age, gender, and race from an individual's face image with high accuracy. Many techniques have been proposed to ensure user privacy, such as visible distortions to the images, manipulation of the original image with new face attributes, and face swapping. Though these techniques achieve the goal of user privacy by fooling face recognition models, they do not help users who want to upload original images without visible distortions or manipulation. The objective of this work is to implement techniques that ensure the privacy of users' sensitive or personal data in face images by creating minimal pixel-level distortions, using white-box and black-box perturbation algorithms to fool AI models while maintaining the integrity of the image so that it appears the same to the human eye.
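As a minimal sketch of a white-box perturbation, the FGSM-style signed-gradient step below attacks a stand-in linear attribute classifier; the random weights, labels, and the epsilon budget are assumptions, and the work's actual targets are deep networks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, label, eps=0.01):
    """One FGSM-style step against a linear binary attribute classifier:
    move each pixel by at most eps in the direction that raises the
    binary cross-entropy loss, keeping pixels in [0, 1]."""
    p = sigmoid(w @ x + b)
    grad = (p - label) * w                 # d(binary cross-entropy)/dx
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random(64)                          # flattened 8x8 'face' in [0, 1]
w, b = rng.normal(size=64), 0.0             # stand-in attribute classifier
adv = fgsm_perturb(x, w, b, label=1.0, eps=0.05)
```

Because every pixel moves by at most eps, the adversarial image stays visually indistinguishable from the original while the classifier's confidence in the true attribute drops.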

