Pose and Illumination Invariance with Compound Image Transforms

Author(s):  
Lior Shamir

While current face recognition algorithms provide convincing performance on frontal face poses, recognition is far less effective when pose and illumination conditions vary. Here the authors show how compound image transforms can be used for face recognition under various poses and illumination conditions. The method works by first dividing each image into four equal-sized tiles. Image features are then extracted from the face images, from transforms of the images, and from transforms of transforms of the images. Finally, each image feature is assigned a Fisher score, and test images are classified using a simple weighted nearest-neighbor rule in which the Fisher scores serve as weights. Experimental results on the full color FERET dataset show that, with no parameter tuning, the rank-10 recognition accuracy for frontal, quarter-profile, and half-profile images is ~98%, ~94%, and ~91%, respectively. The proposed method also achieves perfect accuracy on several other face recognition datasets such as Yale B, ORL, and JAFFE. An important feature of this method is that the recognition accuracy improves as the number of subjects in the dataset grows.
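A minimal sketch of the Fisher-score-weighted nearest-neighbor rule described above, assuming the per-image feature vectors have already been computed from the compound transforms; the weighted L1 distance and variable names are illustrative, not the paper's exact formulation:

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class variance over within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def weighted_nn_predict(X_train, y_train, x_test, w):
    """Classify one test sample by Fisher-score-weighted L1 distance to the training set."""
    dists = (w * np.abs(X_train - x_test)).sum(axis=1)
    return y_train[np.argmin(dists)]
```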

Author(s):  
Zhenxue Chen ◽  
Saisai Yao ◽  
Chengyun Liu ◽  
Lei Cai

With the development of biometric recognition technology, sketch face recognition has been widely applied to help police confirm the identity of criminal suspects. Most existing recognition methods use the image features directly, so the key facial components cannot be exploited sufficiently. This paper presents a sketch face recognition method based on weighted fusion of P-HOG multi-features. First, the global face image and the local face images containing key facial components are divided into patches based on a spatial scale pyramid, and the global and local P-HOG features are extracted, respectively. After that, the dimensions of the global and local features are reduced using PCA and NLDA. Finally, the features are weighted based on sensitivity and fused, and a nearest neighbor classifier is used to complete the final recognition. Experimental results on different databases show that the proposed method outperforms state-of-the-art methods.
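A minimal sketch of pyramid-HOG (P-HOG) feature extraction followed by PCA reduction, assuming grayscale face images; the pyramid levels, patch size, and HOG parameters are illustrative, and the NLDA step and sensitivity-based weighted fusion are omitted:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.decomposition import PCA

def phog_features(img, levels=(1, 2, 4)):
    """Concatenate HOG histograms over a spatial pyramid of patches."""
    feats = []
    h, w = img.shape
    for n in levels:                          # n x n grid at each pyramid level
        for i in range(n):
            for j in range(n):
                patch = img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                patch = resize(patch, (32, 32), anti_aliasing=True)
                feats.append(hog(patch, orientations=9,
                                 pixels_per_cell=(8, 8),
                                 cells_per_block=(2, 2)))
    return np.concatenate(feats)

# X = np.stack([phog_features(im) for im in train_images])
# X_reduced = PCA(n_components=100).fit_transform(X)
```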


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xia Miao ◽  
Ziyao Yu ◽  
Ming Liu

The partial differential equation (PDE) learning model is applied to another high-level visual processing problem: face recognition. A novel feature selection method based on the PDE learning model is proposed; the extracted features are invariant to rotation and translation and more robust to illumination changes. To evaluate students' concentration in class, this paper first uses a face detection algorithm to detect faces and extract expression data, and computes the head-up rate. Then, an improved concentration analysis and evaluation model for a college Chinese class is used to recognize facial expressions, and each expression is assigned a weight to compute an expression score. Finally, the head-up rate computed over the same period is multiplied by the expression score to obtain the final concentration score. Through experiments in an actual classroom and analysis of the results, corresponding conclusions are drawn and teaching suggestions are provided for teachers. For each face, a large neighborhood set is first selected by the k-nearest neighbor method, and the sparse representation of sample points in the neighborhood is then obtained, which effectively combines the locality of k-nearest neighbors with the robustness of sparse representation. In the sparse-preserving nonnegative block alignment algorithm, a discriminative partial optimization model is constructed by using sparse reconstruction coefficients to describe local geometry and weighted distances to describe class separability. The two algorithms obtain good clustering and recognition results under various cases of real and simulated occlusion, which shows their effectiveness and robustness. To verify the reliability of the model, it was validated through in-class practice tests, teachers' questions, and interviews with students and teachers. The results show that the proposed joint evaluation method based on expression and head-up rate has high accuracy and reliability.
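A minimal sketch of the joint score described above (concentration = head-up rate × weighted expression score), assuming per-frame detections and expression labels are already available; the expression categories and weights are illustrative assumptions, not values from the paper:

```python
# Illustrative expression weights (assumed, not taken from the paper)
EXPRESSION_WEIGHTS = {"focused": 1.0, "neutral": 0.8, "confused": 0.5, "distracted": 0.2}

def concentration_score(detections):
    """detections: list of (head_up: bool, expression: str) tuples, one per frame."""
    if not detections:
        return 0.0
    head_up_rate = sum(1 for up, _ in detections if up) / len(detections)
    expr_scores = [EXPRESSION_WEIGHTS.get(expr, 0.0) for up, expr in detections if up]
    expression_score = sum(expr_scores) / len(expr_scores) if expr_scores else 0.0
    return head_up_rate * expression_score
```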


Author(s):  
M. Parisa Beham ◽  
S. M. Mansoor Roomi ◽  
J. Alageshan ◽  
V. Kapileshwaran

Face recognition and authentication are two significant and dynamic research issues in computer vision applications. Many factors must be accounted for in face recognition; among them, pose variation is a major challenge that severely influences recognition performance. To improve performance, several methods have been developed to perform face recognition under pose-invariant conditions in constrained and unconstrained environments. In this paper, the authors analyze the performance of popular texture descriptors, viz. the Local Binary Pattern, Local Derivative Pattern, and Histograms of Oriented Gradients, for the pose-invariant problem. State-of-the-art preprocessing techniques such as the Discrete Cosine Transform, Difference of Gaussians, Multi-Scale Retinex, and Gradientface have also been applied before feature extraction. In the recognition phase, a K-nearest neighbor classifier is used to accomplish the classification task. To evaluate the efficiency of pose-invariant face recognition, three publicly available databases, viz. the UMIST, ORL, and LFW datasets, have been used. These databases exhibit very wide pose variations, and it is shown that the state-of-the-art methods are efficient only in constrained situations.
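A minimal sketch of one such pipeline (Difference-of-Gaussians preprocessing, LBP texture features, and a K-nearest-neighbor classifier), assuming grayscale face images of equal size; the filter sigmas and LBP parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def dog_preprocess(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians illumination normalization."""
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# X = np.stack([lbp_histogram(dog_preprocess(im)) for im in train_images])
# knn = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
```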


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Hicham Zaaraoui ◽  
Abderrahim Saaidi ◽  
Rachid El Alami ◽  
Mustapha Abarkan

This paper proposes the use of strings as a new local descriptor for face recognition. The face image is first divided into nonoverlapping subregions from which strings (words) are extracted using the principle of the chain-code algorithm and assigned to the nearest words in a dictionary of visual words (DoVW) via the Levenshtein distance (LD), following the bag-of-visual-words (BoVW) paradigm. As a result, each region is represented by a histogram of dictionary words, and the histograms are then assembled into a face descriptor. Our methodology depends on the path followed from a starting pixel and does not require a model, unlike other approaches in the literature; therefore, information about both the local and global properties of an object is obtained. Recognition is performed using a nearest neighbor classifier with the Hellinger distance (HD) as the comparison between feature vectors. Experimental results on the ORL and Yale databases demonstrate the efficiency of the proposed approach in terms of preserving information and recognition rate compared to existing face recognition methods.
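A minimal sketch of the two distances the method relies on: the Levenshtein distance for assigning chain-code strings to the nearest dictionary word, and the Hellinger distance for comparing the resulting histograms. The dictionary of visual words itself is assumed to be given:

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two strings via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def nearest_word(string, dictionary):
    """Assign a chain-code string to its closest dictionary word."""
    return min(dictionary, key=lambda w: levenshtein(string, w))

def hellinger(h1, h2):
    """Hellinger distance between two normalized histograms."""
    return np.sqrt(0.5 * np.sum((np.sqrt(h1) - np.sqrt(h2)) ** 2))
```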


2012 ◽  
Vol 224 ◽  
pp. 485-488
Author(s):  
Fei Li ◽  
Yuan Yuan Wang

Abstract: To address the problem that images in face recognition software are easily copied, this paper presents an algorithm that combines image features with a digital watermark. Image features of adjacent blocks are embedded into the face image as the watermark information, and the original face image is not needed when recovering the watermark. Face image integrity can therefore be confirmed, and the algorithm can detect whether a face image is the original one and identify whether it has been attacked with malicious intent, such as tampering, replacement, or illegal insertion. Experimental results show that the algorithm has good invisibility and excellent robustness, does not interfere with the face recognition rate, and can localize the specific tampered region of the face image.
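A minimal sketch of the general idea only (an adjacent-block feature embedded in least-significant bits for tamper localization), not the paper's specific watermarking scheme; the function name, block size, and LSB embedding are all illustrative assumptions:

```python
import numpy as np

def embed_block_means(img, block=8):
    """Fragile-watermark sketch: write each block's left-neighbor mean into its 8 LSBs.
    Assumes a grayscale uint8 image whose sides are multiples of `block`."""
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(block, w, block):
            # Adjacent-block feature, computed on the 7 MSBs so embedding does not alter it
            feature = int((img[i:i+block, j-block:j] & 0xFE).mean())
            bits = np.array([(feature >> k) & 1 for k in range(8)], dtype=np.uint8)
            blk = out[i:i+block, j:j+block].copy().reshape(-1)
            blk[:8] = (blk[:8] & 0xFE) | bits          # overwrite the first 8 LSBs
            out[i:i+block, j:j+block] = blk.reshape(block, block)
    return out

# Verification would recompute each left-neighbor feature on the received image and
# compare it with the bits read back from the block's LSBs; a mismatch localizes
# tampering to that block.
```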


2017 ◽  
Vol 1 (28) ◽  
pp. 56-63
Author(s):  
Khanh Ngan Chau ◽  
Nghi Thanh Doan

Human face recognition is a technology that is widely used in everyday life, and much effort has been devoted to developing face recognition algorithms. In this paper, we present a new methodology that combines Haar-like features with a cascade of boosted classifiers, the Dense Scale-Invariant Feature Transform (DSIFT), and the Local Naive Bayes Nearest Neighbor (LNBNN) algorithm for the recognition of human faces. We use Haar-like features together with the AdaBoost algorithm and a cascade model to detect and extract the face image; the DSIFT descriptors are computed only for the aligned and cropped face image. Then, we apply the LNBNN algorithm for recognition. Numerical testing on several benchmark datasets shows that the proposed method gives better results than other methods; the accuracy obtained by the LNBNN method is 99.74%.
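A minimal sketch of the detection and description stages (an OpenCV Haar cascade for face detection, then SIFT descriptors computed on a dense keypoint grid over the cropped face); the cascade file, grid step, patch size, and crop size are illustrative, and the LNBNN classification stage is omitted:

```python
import cv2

# Assumes the opencv-python distribution, which bundles the frontal-face cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
sift = cv2.SIFT_create()

def dense_sift_face(gray, step=8, size=16, out_size=(96, 96)):
    """Detect the largest face, crop and resize it, then compute dense SIFT descriptors."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # keep the largest detection
    face = cv2.resize(gray[y:y+h, x:x+w], out_size)
    keypoints = [cv2.KeyPoint(float(c), float(r), size)
                 for r in range(step, out_size[1], step)
                 for c in range(step, out_size[0], step)]
    _, descriptors = sift.compute(face, keypoints)
    return descriptors
```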


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2901
Author(s):  
Wladimir Valenzuela ◽  
Javier E. Soto ◽  
Payman Zarkesh-Ha ◽  
Miguel Figueroa

In this paper, we present the architecture of a smart imaging sensor (SIS) for face recognition, based on a custom-designed smart pixel capable of computing local spatial gradients in the analog domain and a digital coprocessor that performs image classification. The SIS uses spatial gradients to compute a lightweight version of local binary patterns (LBP), which we term ringed LBP (RLBP). Our face recognition method, which is based on Ahonen's algorithm, operates in three stages: (1) it extracts local image features using RLBP, (2) it computes a feature vector from RLBP histograms, and (3) it projects the vector onto a subspace that maximizes class separation and classifies the image using a nearest neighbor criterion. We designed the smart pixel using the TSMC 0.35 μm mixed-signal CMOS process and evaluated its performance using post-layout parasitic extraction. We also designed and implemented the digital coprocessor on a Xilinx XC7Z020 field-programmable gate array. The smart pixel achieves a fill factor of 34% on the 0.35 μm process and 76% on a 0.18 μm process with 32 μm × 32 μm pixels. The pixel array operates at up to 556 frames per second. The digital coprocessor achieves 96.5% classification accuracy on a database of infrared face images, can classify a 150×80-pixel image in 94 μs, and consumes 71 mW of power.
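A minimal sketch of the final classification stage (projecting RLBP histogram vectors onto a class-separating subspace and classifying with a nearest-neighbor criterion), using scikit-learn's linear discriminant analysis as a stand-in for the projection described above; the RLBP histograms themselves are assumed to be given:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def train_projection_classifier(X_train, y_train):
    """Fit a discriminant projection, then a 1-NN classifier in the projected space."""
    lda = LinearDiscriminantAnalysis()
    Z = lda.fit_transform(X_train, y_train)
    nn = KNeighborsClassifier(n_neighbors=1).fit(Z, y_train)
    return lda, nn

def predict(lda, nn, x):
    """Classify a single RLBP histogram vector x."""
    return nn.predict(lda.transform(x.reshape(1, -1)))[0]
```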


2019 ◽  
Vol 8 (3) ◽  
pp. 33
Author(s):  
Herman Kh. Omar ◽  
Nada E. Tawfiq

In recent times, bioinformatics has taken a wide role in image processing. Face recognition is essentially the task of recognizing a person from a facial image. It has become very popular in the last two decades, mainly because of newly developed methods and the high quality of current visual instruments. There are different types of face recognition algorithms, and each method takes a different approach to extracting image features and matching them against the input image. In this paper, Local Binary Patterns (LBP), a particular case of the Texture Spectrum model and a powerful feature for texture classification, are used. The face recognition system recognizes faces acquired from a given database via two phases: the most useful and distinctive features of the face image are extracted in the feature extraction phase, and in the classification phase the face image is compared with the images in the database. The proposed algorithm adopts LBP features, which encode local texture information, with default parameter values: histogram equalization is applied, the image is resized to 80x60 and divided into five blocks, and each block's LBP feature is saved as a vector. Matlab R2019a was used to build the face recognition system. The results obtained are accurate, with 98.8% overall accuracy on 500 face images.
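A minimal sketch of the described pipeline (histogram equalization, resize to 80x60, split into five blocks, one LBP histogram per block), written in Python rather than the paper's Matlab and assuming grayscale uint8 face images; the horizontal block split, image orientation, and LBP parameters are illustrative assumptions:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_face_vector(img, size=(60, 80), blocks=5, P=8, R=1):
    """Equalize, resize (cv2.resize takes (width, height)), split into horizontal
    blocks, and concatenate one uniform-LBP histogram per block."""
    eq = cv2.equalizeHist(img)
    resized = cv2.resize(eq, size)
    h = resized.shape[0] // blocks
    feats = []
    for b in range(blocks):
        block = resized[b*h:(b+1)*h, :]
        codes = local_binary_pattern(block, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```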


Author(s):  
C Hemalatha ◽  
E Logashanmugam

Face recognition has been one of the most interesting and most studied topics in computer vision over the past two decades. Among other popular biometrics such as retina, fingerprint, and iris recognition systems, face recognition is capable of recognizing uncooperative subjects in a non-intrusive manner. It can also be applied in surveillance, security, forensics, border control, and digital entertainment, where face recognition is used the most. In this paper an automatic face recognition system is discussed. The proposed recognition system is based on the Dual-Tree M-Band Wavelet Transform (DTMBWT) and on features obtained by varying the filters in the DTMBWT. The features from the different filters are then classified by means of a K-Nearest Neighbor (KNN) classifier to recognize the face correctly. The system is implemented using the ORL face image database, and the performance metrics are calculated.
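A minimal sketch of wavelet-feature extraction followed by KNN classification, using a standard 2-D discrete wavelet transform from PyWavelets as a stand-in for the DTMBWT (which has no off-the-shelf implementation); the wavelet family, decomposition level, subband-energy features, and neighbor count are illustrative:

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def wavelet_features(img, wavelet="db4", level=2):
    """Multi-level 2-D DWT; mean absolute value of each subband as a compact feature."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]                 # approximation subband
    for detail_level in coeffs[1:]:                      # (cH, cV, cD) per level
        feats.extend(np.mean(np.abs(band)) for band in detail_level)
    return np.array(feats)

# X = np.stack([wavelet_features(im) for im in train_images])
# knn = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
```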

