Advances in Face Image Analysis
Latest Publications


TOTAL DOCUMENTS: 18 (FIVE YEARS: 0)
H-INDEX: 2 (FIVE YEARS: 0)
Published By: IGI Global
ISBN: 9781615209910, 9781615209927

Author(s):  
Lior Shamir

While current face recognition algorithms have provided convincing performance on frontal face poses, recognition is far less effective when the pose and illumination conditions vary. Here the authors show how compound image transforms can be used for face recognition under varying poses and illumination conditions. The method works by first dividing each image into four equal-sized tiles. Then, image features are extracted from the face images, from transforms of the images, and from transforms of transforms of the images. Finally, each image feature is assigned a Fisher score, and test images are classified with a simple Weighted Nearest Neighbor rule in which the Fisher scores serve as weights. Experimental results on the full color FERET dataset show that, with no parameter tuning, the rank-10 recognition accuracy for frontal, quarter-profile, and half-profile images is ~98%, ~94%, and ~91%, respectively. The proposed method also achieves perfect accuracy on several other face recognition datasets such as Yale B, ORL, and JAFFE. An important feature of this method is that the recognition accuracy improves as the number of subjects in the dataset grows.
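The final classification step can be illustrated with a short sketch. The compound-transform feature extraction is abstracted away (random vectors stand in for the tile/transform features), and the names fisher_scores and weighted_nn_classify are illustrative rather than taken from the authors' implementation; only the Fisher-score-weighted nearest-neighbor rule itself follows the description above.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class variance over within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def weighted_nn_classify(X_train, y_train, x_test, weights):
    """Weighted Nearest Neighbor: Fisher scores weight each feature's squared distance."""
    d = (((X_train - x_test) ** 2) * weights).sum(axis=1)
    return y_train[np.argmin(d)]

# Toy usage: random vectors stand in for the tile/transform image features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 200))
y_train = np.repeat(np.arange(10), 4)
w = fisher_scores(X_train, y_train)
print(weighted_nn_classify(X_train, y_train, X_train[0] + 0.01, w))
```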


Author(s):  
Vitomir Štruc ◽  
Nikola Pavešic

Face recognition technology has come a long way since its beginnings in the previous century. Due to its countless application possibilities, it has attracted the interest of research groups from universities and companies around the world. Thanks to this enormous research effort, the recognition rates achievable with state-of-the-art face recognition technology are steadily growing, even though some issues still pose major challenges. Amongst these challenges, coping with illumination-induced appearance variations is one of the biggest and is still not satisfactorily solved. A number of techniques have been proposed in the literature to cope with the impact of illumination, ranging from simple image enhancement techniques, such as histogram equalization, to more elaborate methods, such as anisotropic smoothing or the logarithmic total variation model. This chapter presents an overview of the most popular and efficient normalization techniques that try to solve the illumination variation problem at the preprocessing level. It assesses the techniques on the YaleB and XM2VTS databases and explores their strengths and weaknesses from theoretical and implementation points of view.
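As a concrete example of the simplest preprocessing-level technique mentioned above, the sketch below implements plain histogram equalization with NumPy. The synthetic under-exposed image is only a stand-in for a real face crop, and none of this is the chapter's evaluation code.

```python
import numpy as np

def histogram_equalization(img):
    """Spread the grayscale histogram so the cumulative distribution becomes roughly uniform."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Standard equalization mapping: rescale the CDF to the full 0..255 range.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[img]

# Synthetic under-exposed image; a real pipeline would load a face crop instead.
rng = np.random.default_rng(1)
dark_face = rng.normal(60, 15, size=(128, 128)).clip(0, 255)
print("mean before:", dark_face.mean(), "mean after:", histogram_equalization(dark_face).mean())
```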


Author(s):  
Fadi Dornaika ◽  
Bogdan Raducanu

This chapter addresses the recognition of basic facial expressions. It has three main contributions. First, the authors introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. They represent the learned facial actions associated with different facial expressions as time series. Two dynamic recognition schemes are proposed: (1) the first is based on conditional predictive models and on an analysis-synthesis scheme, and (2) the second is based on examples, allowing straightforward use of machine learning approaches. Second, the authors propose an efficient recognition scheme based on the detection of keyframes in videos. Third, the authors compare the dynamic scheme with a static one based on analyzing individual snapshots and show that, in general, the former performs better than the latter. The authors then provide performance evaluations using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM).
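A minimal sketch of the example-based dynamic scheme follows: random arrays stand in for the facial action parameters produced by the 3D face tracker, and an SVM (one of the classifiers evaluated above) is trained on flattened time series. All sizes and names here are illustrative assumptions, not the chapter's actual setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical setup: each video yields a fixed-length time series of facial action
# parameters from a 3D face tracker; random data stands in for real tracker output,
# so the printed accuracy is meaningless and only the pipeline shape is illustrated.
rng = np.random.default_rng(2)
n_videos, n_frames, n_actions = 120, 30, 6
X = rng.normal(size=(n_videos, n_frames, n_actions))
y = rng.integers(0, 6, size=n_videos)              # six basic expressions

# Example-based dynamic scheme: flatten each time series into one feature vector.
X_flat = X.reshape(n_videos, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```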


Author(s):  
Siu-Yeung Cho ◽  
Teik-Toe Teoh ◽  
Yok-Yen Nguwi

Facial expression recognition is a challenging task. A facial expression is formed by contracting or relaxing different facial muscles on the human face, which results in temporally deformed facial features such as a wide-open mouth or raised eyebrows. Such a system has to address several issues. For instance, the lighting condition is very difficult to constrain and regulate. In addition, real-time processing is challenging because many facial features must be extracted and processed, and conventional classifiers are sometimes not effective in handling those features and producing good classification performance. This chapter discusses how advanced feature selection techniques, together with good classifiers, can play a vital role in real-time facial expression recognition. Several feature selection methods and classifiers are discussed, and their evaluations for real-time facial expression recognition are presented in this chapter. The content of this chapter opens up a discussion about building a real-time system that reads and responds to the emotions of people from their facial expressions.
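The combination of feature selection and a fast classifier can be sketched as below, assuming generic feature vectors already extracted from face images. The SelectKBest/LinearSVC pipeline is a generic illustration of the idea, not the specific methods evaluated in the chapter.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for extracted facial features (e.g., geometric and appearance descriptors).
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 500))
y = rng.integers(0, 7, size=300)                   # seven expression classes

# Keep only the most discriminative features so the classifier stays fast at run time.
pipe = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC())
pipe.fit(X, y)
print("training accuracy on toy data:", pipe.score(X, y))
```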


Author(s):  
Zakia Hammal

This chapter addresses recent advances in computer vision for facial expression classification. The authors present the different processing steps of the problem of automatic facial expression recognition. They describe the advances at each stage of the problem and review the future challenges towards the application of such systems to everyday life situations. The authors also highlight the importance of taking advantage of the human strategy by reviewing advances in psychology research towards a multidisciplinary approach to facial expression classification. Finally, the authors describe one contribution that aims at dealing with some of the discussed challenges.


Author(s):  
Marios Kyperountas ◽  
Anastasios Tefas ◽  
Ioannis Pitas

Large training databases introduce a level of complexity that often degrades the classification performance of face recognition methods. In this chapter, an overview of various approaches that are employed in order to overcome this problem is presented and, in addition, a specific discriminant learning approach that combines dynamic training and partitioning is described in detail. This face recognition methodology employs dynamic training in order to implement a person-specific iterative classification process. This process employs discriminant clustering, where, by making use of an entropy-based measure, the algorithm adapts the coordinates of the discriminant space with respect to the characteristics of the test face. As a result, the training space is dynamically reduced to smaller spaces, where linear separability among the face classes is more likely to be achieved. The process iterates until one final cluster is retained, which consists of a single face class that represents the best match to the test face. The performance of this methodology is evaluated on standard large face databases and results show that the proposed framework gives a good solution to the face recognition problem.
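A much-simplified sketch of the dynamic-partitioning idea is given below: the gallery is repeatedly projected into a discriminant space, clustered, and reduced to the cluster containing the test face until one identity remains. The entropy-based adaptation of the discriminant coordinates is omitted, and KMeans/LDA are generic stand-ins, so this is only an approximation of the described method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def iterative_cluster_match(X, y, x_test, n_clusters=2, seed=0):
    """Repeatedly project the gallery into a discriminant space, cluster it, and keep
    only the cluster containing the test face, until a single identity remains."""
    X_cur, y_cur = X, y
    while len(np.unique(y_cur)) > 1:
        lda = LinearDiscriminantAnalysis().fit(X_cur, y_cur)
        Z, z_test = lda.transform(X_cur), lda.transform(x_test.reshape(1, -1))
        km = KMeans(n_clusters=min(n_clusters, len(np.unique(y_cur))),
                    n_init=10, random_state=seed).fit(Z)
        keep = km.labels_ == km.predict(z_test)[0]
        if keep.all() or not keep.any():           # no reduction possible: nearest class
            return y_cur[np.argmin(np.linalg.norm(Z - z_test, axis=1))]
        X_cur, y_cur = X_cur[keep], y_cur[keep]
    return y_cur[0]

# Toy gallery: 8 identities, 5 samples each, 20-dimensional features.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(5, 20)) for c in range(8)])
y = np.repeat(np.arange(8), 5)
print("predicted identity:", iterative_cluster_match(X, y, X[0] + 0.05))
```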


Author(s):  
Peng Li ◽  
Simon J. D. Prince

In this chapter the authors review probabilistic approaches to face recognition and present an extended treatment of one particular approach. Here, the face image is decomposed into an additive sum of two parts: a deterministic component, which depends on an underlying representation of identity, and a stochastic component, which explains the fact that two face images of the same person are not identical. Inferences about matching are made by comparing different probabilistic models rather than by comparing the distance to an identity template in some projected space. The authors demonstrate that this model comparison is superior to distance comparison. Furthermore, the authors show that performance can be further improved by sampling the feature space and combining models trained on these feature subspaces. Random sampling both with and without replacement significantly improves performance. Finally, the authors illustrate how this probabilistic approach can be adapted for keypoint localization (e.g., finding the eyes, nose, and mouth). The keypoints can either be (1) explicitly localized by evaluating the likelihood of all possible locations in the given image, or (2) implicitly localized by marginalizing over possible positions in a Bayesian manner. The authors show that recognition and keypoint localization performance are comparable to using manual labelling.
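The "model comparison rather than distance comparison" idea can be sketched with a toy Gaussian identity model: a face pair is scored by the log-likelihood of a same-identity model against a different-identity model. The between- and within-identity covariances B and W are fixed illustrative values here; in the chapter's setting they would be learned from training data, and this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Between-identity (B) and within-identity (W) covariances; fixed illustrative values
# rather than quantities learned from training data.
d = 8
B, W = 2.0 * np.eye(d), 0.5 * np.eye(d)

def log_likelihood_ratio(x1, x2):
    """Log-likelihood of the pair under a shared-identity model minus an
    independent-identities model; higher means 'more likely the same person'."""
    joint = np.concatenate([x1, x2])
    cov_same = np.block([[B + W, B], [B, B + W]])          # identity variable shared
    cov_diff = np.block([[B + W, np.zeros((d, d))],
                         [np.zeros((d, d)), B + W]])       # identities independent
    same = multivariate_normal(mean=np.zeros(2 * d), cov=cov_same).logpdf(joint)
    diff = multivariate_normal(mean=np.zeros(2 * d), cov=cov_diff).logpdf(joint)
    return same - diff

rng = np.random.default_rng(5)
identity = rng.normal(scale=np.sqrt(2.0), size=d)
same_pair = (identity + rng.normal(scale=np.sqrt(0.5), size=d),
             identity + rng.normal(scale=np.sqrt(0.5), size=d))
diff_pair = (same_pair[0], rng.normal(scale=np.sqrt(2.5), size=d))
print("same-person pair score:", log_likelihood_ratio(*same_pair))
print("different-person pair score:", log_likelihood_ratio(*diff_pair))
```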


Author(s):  
Le Li ◽  
Yu-Jin Zhang

Non-negative matrix factorization (NMF) is an increasingly popular method for non-negative dimensionality reduction and feature extraction from non-negative data, especially face images. Currently, no NMF algorithm offers both satisfactory efficiency for dimensionality reduction and feature extraction of face images and high ease of use. To improve the applicability of NMF, this chapter proposes a new monotonic, fixed-point algorithm called FastNMF, which implements least-squares-error-based non-negative factorization by exploiting the basic properties of parabolic functions. The minimization problem corresponding to each operation in FastNMF is solved analytically by that operation, which existing NMF algorithms cannot do, and FastNMF therefore achieves much higher efficiency, as validated by a set of experimental results. Owing to its simple design philosophy, FastNMF remains one of the NMF algorithms that are easiest to use and understand. In addition, theoretical analysis and experimental results show that FastNMF tends to extract facial features with better representation ability than popular multiplicative-update-based algorithms.
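For orientation, the sketch below implements the classic multiplicative-update NMF baseline of the kind FastNMF is compared against; the authors' FastNMF update itself is not reproduced here, and function and variable names are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-10, seed=0):
    """Classic Lee-Seung multiplicative updates minimizing ||V - WH||_F^2;
    a baseline of the kind FastNMF is compared against, not FastNMF itself."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative data: each column stands in for a vectorized face image.
rng = np.random.default_rng(6)
V = rng.random((64, 30))
W, H = nmf_multiplicative(V, r=5)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```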


Author(s):  
M. Ashraful Amin ◽  
Hong Yan

In practice, the Gabor wavelet is often applied to extract relevant features from a facial image. This wavelet is constructed using filters of multiple scales and orientations. Based on Gabor's theory of communication, two methods are proposed to acquire initial features from 2D images: the Gabor wavelet and the Log-Gabor wavelet. Theoretically, the main difference between the two is that the Log-Gabor wavelet produces DC-free filter responses, whereas Gabor filter responses retain DC components. This experimental study determines the characteristics of Gabor and Log-Gabor filters for face recognition. In the experiment, two sixth-order data tensors are created: one containing the basic Gabor feature vectors and the other containing the basic Log-Gabor feature vectors. The study reveals the characteristics of the filter orientations of Gabor and Log-Gabor filters for face recognition. The two implementations show that the Gabor filter with orientation index zero (i.e., oriented at 0 degrees with respect to the aligned face) has the highest discriminating ability, while the Log-Gabor filter with orientation index three (i.e., 45 degrees) has the highest discriminating ability. This result is consistent across the three frequencies (scales) used in the experiment. It is also observed that, for both wavelets, filters with low frequency have higher discriminating ability.
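The DC-behaviour difference stated above is easy to verify numerically: a spatial-domain Gabor kernel generally has a non-zero sum (a DC response), while a radial Log-Gabor transfer function built in the frequency domain is zero at DC by construction. Parameter values below are illustrative, not those used in the chapter's experiments.

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
    """Real part of a 2D Gabor kernel in the spatial domain."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def log_gabor_radial(size=31, f0=0.125, sigma_ratio=0.55):
    """Radial Log-Gabor transfer function in the frequency domain; zero at DC by construction."""
    freqs = np.fft.fftfreq(size)
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                                     # avoid log(0); DC handled below
    G = np.exp(-(np.log(radius / f0)) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                                          # Log-Gabor has no DC component
    return G

print("Gabor kernel sum (DC response):", gabor_kernel().sum())      # generally non-zero
print("Log-Gabor response at DC:", log_gabor_radial()[0, 0])        # exactly zero
```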

