Face Recognition Using Completed Local Ternary Pattern (CLTP) Texture Descriptor

Author(s):  
Taha H. Rassem ◽  
Nasrin M. Makbol ◽  
Sam Yin Yee

Nowadays, face recognition has become one of the important topics in computer vision and image processing, owing to its usefulness in many applications. The key to face recognition is extracting distinguishable features from the image so that high recognition accuracy can be achieved. The local binary pattern (LBP) and many of its variants have been used as texture features in numerous face recognition systems. Although LBP performs well in many fields, it is sensitive to noise, and different LBP patterns may be classified into the same class, which reduces its discriminating power. The Completed Local Ternary Pattern (CLTP) is a recently proposed texture feature designed to overcome these drawbacks of LBP, and it has outperformed LBP and several of its variants in fields such as texture, scene, and event image classification. In this study, we investigate the performance of the CLTP operator for the face recognition task. The Japanese Female Facial Expression (JAFFE) and FEI face databases are used in the experiments. In the experimental results, CLTP outperforms several previous texture descriptors and achieves higher classification rates for face recognition, reaching up to 99.38% and 85.22% on JAFFE and FEI, respectively.
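
To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the ternary sign coding that underlies CLTP: each neighbour is compared with the centre pixel against a small threshold t, and the resulting +1/-1 states are split into separate upper and lower binary code maps whose histograms serve as texture features. The threshold value and the plain 3x3 neighbourhood are assumptions for illustration.

```python
# Hedged sketch of the LTP/CLTP sign component over a 3x3 neighbourhood.
# The threshold t and the histogram feature are illustrative assumptions.
import numpy as np

def ltp_sign_codes(img, t=5):
    """Return the upper and lower binary code maps of the ternary sign component."""
    img = img.astype(np.int32)
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    # 8 neighbours, ordered clockwise starting from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        diff = neigh - center
        upper |= (diff >= t).astype(np.int32) << bit   # "+1" ternary states
        lower |= (diff <= -t).astype(np.int32) << bit  # "-1" ternary states
    return upper, lower

def code_histogram(codes, bins=256):
    """Normalised histogram of code values, used as the texture feature."""
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

In the full CLTP, analogous upper and lower codes are also computed from the magnitudes of the local differences and from the centre intensity, and the resulting histograms are concatenated (or jointly binned) into the final descriptor.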

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xin Cheng ◽  
Hongfei Wang ◽  
Jingmei Zhou ◽  
Hui Chang ◽  
Xiangmo Zhao ◽  
...  

For face recognition systems, liveness detection can effectively prevent fraud and improve safety. Common face attacks include photo-printing and video-replay attacks. This paper studies the differences between photos, videos, and real faces in static texture and motion information and proposes a liveness detection architecture based on feature fusion and an attention mechanism, the Dynamic and Texture Fusion Attention Network (DTFA-Net). We propose a dynamic information fusion structure with an inter-channel attention block that fuses the magnitude and direction of optical flow to extract facial motion features. In addition, to address the failure of HOG-based face detection under complex illumination, we propose an improved gamma image preprocessing algorithm, which effectively improves face detection. We conducted experiments on the CASIA-MFSD and Replay-Attack databases. According to the experiments, the proposed DTFA-Net achieves 6.9% EER on CASIA and 2.2% HTER on Replay-Attack, which is comparable to other methods.
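
Since the abstract does not spell out the gamma rule, the snippet below is only a hedged sketch of an adaptive gamma pre-processing step of the kind described for stabilising HOG face detection under complex illumination; choosing the exponent from the image's mean brightness is an assumption for illustration, not the paper's exact algorithm.

```python
# Hedged sketch of adaptive gamma correction as an illumination pre-processing
# step before HOG face detection. The rule for picking gamma is an assumption.
import numpy as np

def adaptive_gamma(gray):
    """gray: uint8 grayscale image -> gamma-corrected uint8 image."""
    mean = gray.mean() / 255.0                  # normalised brightness in (0, 1)
    gamma = np.log(0.5) / np.log(mean + 1e-6)   # <1 brightens dark images, >1 darkens bright ones
    gamma = float(np.clip(gamma, 0.4, 2.5))     # keep the correction moderate
    lut = (np.power(np.arange(256) / 255.0, gamma) * 255).astype(np.uint8)
    return lut[gray]
```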


Author(s):  
ANDREA F. ABATE ◽  
MICHELE NAPPI ◽  
DANIEL RICCIO ◽  
GENOVEFFA TORTORA

During the last few years, many algorithms have been proposed for face recognition, in particular using classical 2-D images. However, it is necessary to deal with occlusions when the subject is wearing sunglasses, scarves, and the like. At the same time, ear recognition is emerging as a promising new biometric for people recognition, even though the related literature is still somewhat underdeveloped. In this paper, several hybrid face/ear recognition systems are investigated. The system is based on Iterated Function Systems (IFS) theory, applied to both the face and the ear, resulting in a bimodal architecture. One advantage is that the information used for the indexing and recognition of the face/ear can be made local, which makes the method more robust to possible occlusions. The distribution of similarities in the input images is exploited as a signature for the identity of the subject. The amount of information provided by each component of the face and the ear image has been assessed, first independently and then jointly. Finally, the results show that the system significantly outperforms existing state-of-the-art approaches.


Author(s):  
Thanh-Tam NGUYEN ◽  
Son-Thai LE ◽  
Van-Thuy LE

One of the most widely used biometric techniques for identity authentication is face recognition. It plays an essential role in many areas, such as daily life, public security, finance, the military, and the smart school. The face recognition task is to identify or verify the identity of a person based on their face. The first step is face detection, which detects and locates human faces in images and videos. The face-matching process then determines the identity of the detected face. In recent years, many face recognition systems have improved their performance using deep learning models. Deep learning learns representations of the face through multiple processing layers with multiple levels of feature extraction. This approach has produced substantial improvements in face recognition since 2014, launched by the breakthroughs of DeepFace and DeepID. However, how to choose the best hyperparameters remains an open question. In this paper, we introduce a method for adaptive hyperparameter selection to improve recognition accuracy. The proposed method achieves improvements on three datasets.


2021 ◽  
Vol 25 (01) ◽  
pp. 80-91
Author(s):  
Saba K. Naji ◽  
Muthana H. Hamd

Due to rapid electronic development, which has reinforced the need to establish people's identities, different methods and databases for identifying people have emerged. In this paper, we compare the results of two texture analysis methods: Local Binary Pattern (LBP) and Local Ternary Pattern (LTP). The comparison is based on extracting the facial texture features of 40 and 401 subjects taken from the ORL and UFI databases, respectively. The comparison also takes into account three distance measures: Manhattan Distance (MD), Euclidean Distance (ED), and Cosine Distance (CD). The maximum accuracy of the LBP method (99.23%) is obtained with Manhattan distance on the ORL database, while the LTP method attains 98.76% with the same distance and database. The UFI facial database, whose images are of lower quality, yields 75.98% and 73.82% recognition rates using LBP and LTP, respectively, with Manhattan distance.
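
As a reference for how such a comparison can be run, the sketch below matches a probe face to a gallery using the three distances the paper considers, assuming each face has already been reduced to an LBP or LTP histogram vector; the nearest-neighbour decision rule is an assumption for illustration.

```python
# Hedged sketch: nearest-neighbour matching of LBP/LTP histogram vectors with
# Manhattan, Euclidean, and cosine distances.
import numpy as np

def manhattan(a, b):
    return np.abs(a - b).sum()

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def cosine(a, b):
    return 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def classify(probe, gallery_feats, gallery_labels, dist=manhattan):
    """Return the label of the gallery descriptor closest to the probe."""
    distances = [dist(probe, g) for g in gallery_feats]
    return gallery_labels[int(np.argmin(distances))]
```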


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 307 ◽  
Author(s):  
Ngo Tung Son ◽  
Bui Ngoc Anh ◽  
Tran Quy Ban ◽  
Le Phuong Chi ◽  
Bui Dinh Chien ◽  
...  

Face recognition (FR) has received considerable attention in the field of security, especially in the use of closed-circuit television (CCTV) cameras for security monitoring. Although significant advances have been made in computer vision, advanced face recognition systems provide satisfactory performance only under controlled conditions. They deteriorate significantly in real-world scenarios involving poor lighting conditions, motion blur, low camera resolution, etc. This article shows how we design, implement, and empirically compare open machine learning libraries in building an attendance-taking (AT) support system, called ATSS, that uses indoor security cameras. Our trial system was deployed to record the appearances of 120 students in five classes studying on the third floor of the FPT Polytechnic College building. Our design allows for flexible system scaling, and it is usable not only for a school but as a generic attendance system with CCTV. The measurement results show that the accuracy is suitable for many different environments.


2015 ◽  
Vol 734 ◽  
pp. 562-567 ◽  
Author(s):  
En Zeng Dong ◽  
Yan Hong Fu ◽  
Ji Gang Tong

This paper proposes a theoretically efficient approach for face recognition based on principal component analysis (PCA) and rotation-invariant uniform local binary pattern (LBP) texture features, in order to weaken the effects of varying illumination conditions and facial expressions. First, the rotation-invariant uniform LBP operator is adopted to extract the local texture features of the face images. Then PCA is used to reduce the dimensionality of the extracted features and obtain the eigenfaces. Finally, nearest-distance classification is used to distinguish each face. The method has been assessed on the Yale and ATR-Jaffe face databases. Results demonstrate that the proposed method achieves a higher recognition rate than standard PCA and is strongly robust against changes in illumination, pose, rotation, and expression.
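
A compact sketch of this pipeline, under the assumption of scikit-image's rotation-invariant uniform LBP and scikit-learn's PCA and 1-nearest-neighbour classifier (library choices not stated in the paper), might look as follows.

```python
# Hedged sketch of the described pipeline: rotation-invariant uniform LBP
# features, PCA dimensionality reduction, and nearest-distance classification.
# scikit-image / scikit-learn are assumed here for illustration.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray, P=8, R=1):
    """Rotation-invariant uniform LBP codes summarised as a normalised histogram."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_and_predict(train_feats, train_labels, test_feats):
    pca = PCA(n_components=0.95).fit(train_feats)      # keep 95% of the variance
    knn = KNeighborsClassifier(n_neighbors=1)          # nearest-distance classifier
    knn.fit(pca.transform(train_feats), train_labels)
    return knn.predict(pca.transform(test_feats))
```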


Author(s):  
ZHENXUE CHEN ◽  
CHENGYUN LIU ◽  
FALIANG CHANG ◽  
XUZHEN HAN ◽  
KAIFANG WANG

Changes in light intensity and angle present a major challenge to the creation of reliable face recognition systems. The existence of bright regions and dark regions has been shown to have a serious negative impact on the performance of face recognition systems. This paper proposes a solution to this problem based on the self-quotient image (SQI) processing method, in which bright and dark areas are processed separately by SQI without changing the essential characteristics of the face image. Experimental results indicate that this single-light-region and single-dark-region SQI method removes the adverse effect of multiple bright and dark areas better than competing methods.
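
For orientation, a minimal sketch of the classic self-quotient image is given below: the face image is divided by a smoothed copy of itself, which suppresses slowly varying illumination. The region-wise variant shown here, which splits bright and dark areas with a simple intensity threshold and smooths them at different scales, is only an assumption meant to illustrate the idea of treating the two kinds of regions separately, not the paper's exact method.

```python
# Hedged sketch of the self-quotient image (SQI) and a region-wise variant.
# The threshold-based bright/dark split is an illustrative assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient(gray, sigma=3.0):
    """SQI: the image divided by a Gaussian-smoothed copy of itself."""
    g = gray.astype(np.float64)
    smooth = gaussian_filter(g, sigma)
    return g / (smooth + 1e-6)

def region_wise_sqi(gray, sigma_bright=5.0, sigma_dark=1.5):
    """Apply SQI with different smoothing scales to bright and dark regions."""
    g = gray.astype(np.float64)
    bright = g >= g.mean()                      # crude bright/dark split (assumption)
    out = np.empty_like(g)
    out[bright] = self_quotient(gray, sigma_bright)[bright]
    out[~bright] = self_quotient(gray, sigma_dark)[~bright]
    return out
```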


2021 ◽  
Vol 7 (1) ◽  
pp. 10-15
Author(s):  
Lama Akram Ibrahim ◽  
Nasser Nasser ◽  
Majd Ali

Facial recognition has attracted the attention of researchers and has been one of the most prominent topics in the fields of image processing and pattern recognition since 1990. This has resulted in a very large number of recognition methods and techniques aimed at increasing the accuracy and robustness of existing systems. Many techniques have been developed to address the challenges, and reliable recognition systems have been achieved, but they require considerable processing time, suffer from high memory consumption, and are relatively complex. The focus of this paper is on extracting a subset of descriptors (less correlated and requiring fewer calculations) from the co-occurrence matrix, with the goal of enhancing the performance of Haralick's descriptors. Improvements are achieved by adding image pre-processing, selecting the proper method according to the problems of each database, and extracting features from local image regions.
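
To indicate what such a region-wise, reduced descriptor set might look like in practice, here is a hedged sketch using scikit-image's grey-level co-occurrence matrix routines; the particular subset of Haralick properties and the 4x4 region grid are assumptions for illustration, not the subset selected in the paper.

```python
# Hedged sketch: a reduced set of Haralick-style GLCM descriptors extracted
# from local image regions. The chosen properties and grid are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def region_haralick(gray, grid=(4, 4), levels=32):
    gray = (gray // (256 // levels)).astype(np.uint8)   # quantise to fewer grey levels
    h, w = gray.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = gray[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            glcm = graycomatrix(block, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            for prop in ("contrast", "correlation", "energy", "homogeneity"):
                feats.extend(graycoprops(glcm, prop).ravel())
    return np.asarray(feats)
```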


2016 ◽  
Author(s):  
Anya Chakraborty ◽  
Bhismadev Chakrabarti

We live in an age of 'selfies'. Yet how we look at our own faces has seldom been systematically investigated. In this study we test whether visual processing of self-faces differs from that of other faces, using psychophysics and eye-tracking. Specifically, the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition was tested. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look at the lower part of the face for longer durations for self-faces compared to other faces. Participants with a reduced overlap between self and other face representations, as indexed by a steeper slope of the psychometric response curve for self-face recognition, spent a greater proportion of time looking at the upper regions of faces identified as self. Additionally, the association of autism-related traits with self-face processing metrics was tested, since autism has previously been associated with atypical self-processing, particularly in the psychological domain. Autistic traits were associated with reduced looking time to both self and other faces. However, no self-face-specific association was noted with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner.


2021 ◽  
Author(s):  
Susith Hemathilaka ◽  
Achala Aponso

The face mask has become an essential piece of sanitary wear in daily life during the pandemic period, and it poses a big threat to current face recognition systems. Masks destroy a great deal of detail over a large area of the face, making masked faces difficult to recognize even for humans; the evaluation report illustrates this difficulty well. Rapid development and breakthroughs in deep learning in the recent past have produced very promising results from face recognition algorithms. However, their performance remains far from satisfactory in unconstrained environments under challenges such as varying lighting conditions, low resolution, facial expressions, pose variation, and occlusions. Facial occlusion is considered one of the most intractable problems, especially when the occlusion occupies a large region of the face, because it destroys many facial features.

