An efficient deep learning model for classification of thermal face images

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Basma Abd El-Rahiem ◽  
Ahmed Sedik ◽  
Ghada M. El Banby ◽  
Hani M. Ibrahem ◽  
Mohamed Amin ◽  
...  

Purpose The objective of this paper is to perform infrared (IR) face recognition efficiently with convolutional neural networks (CNNs). The proposed model has several advantages, such as automatic feature extraction using convolutional and pooling layers and the ability to distinguish between faces without visual details. Design/methodology/approach A model comprising five convolutional layers and five max-pooling layers is introduced for the recognition of IR faces. Findings The experimental results and analysis reveal high recognition rates of IR faces with the proposed model. Originality/value A designed CNN model is presented for IR face recognition. Both the feature extraction and classification tasks are incorporated into this model. The problems of low contrast and absence of detail in IR images are overcome with the proposed model. The recognition accuracy reaches 100% in experiments on the Terravic Facial IR Database (TFIRDB).
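The convolution-plus-pooling feature extraction the abstract describes can be sketched at its smallest scale. This is a single illustrative stage in plain NumPy with an invented toy image and kernel, not the paper's five-layer model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling; trailing rows/columns that do not fill
    a window are dropped."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    fm = feature_map[:h, :w]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv + max-pool stage on a dummy 8x8 "thermal" image.
img = np.arange(64, dtype=float).reshape(8, 8)
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # simple vertical-edge kernel
features = max_pool(conv2d(img, edge))
print(features.shape)  # (3, 3)
```

Stacking five such stages, each shrinking the spatial resolution, is what lets the network learn its own features instead of relying on hand-crafted descriptors.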

Sensor Review ◽  
2018 ◽  
Vol 38 (3) ◽  
pp. 269-281 ◽  
Author(s):  
Hima Bindu ◽  
Manjunathachari K.

Purpose This paper aims to develop a hybrid feature descriptor and probabilistic neuro-fuzzy system for attaining high accuracy in face recognition. In recent years, facial recognition (FR) systems have played a vital part in several applications, such as surveillance, access control and image understanding. Accordingly, various face recognition methods have been developed in the literature, but the applicability of these algorithms is restricted because of unsatisfactory accuracy. So, improving face recognition accuracy is significantly important for the current trend. Design/methodology/approach This paper proposes a face recognition system based on feature extraction and classification. The proposed model extracts the local and global features of the image. The local features are extracted using the kernel-based scale invariant feature transform (K-SIFT) model, and the global features are extracted using the proposed m-Co-HOG model (Co-HOG: co-occurrence histograms of oriented gradients). The proposed m-Co-HOG model inherits the properties of the Co-HOG algorithm. The feature vector database contains combined local and global feature vectors derived using the K-SIFT model and the proposed m-Co-HOG algorithm. This paper proposes a probabilistic neuro-fuzzy classifier system for finding the identity of a person from the extracted feature vector database. Findings The face images required for the simulation of the proposed work are taken from the CVL database. The simulation considers a total of 114 persons from the CVL database. From the results, it is evident that the proposed model has outperformed the existing models with an improved accuracy of 0.98. The false acceptance rate (FAR) and false rejection rate (FRR) values of the proposed model have a low value of 0.01. Originality/value This paper proposes a face recognition system with the proposed m-Co-HOG vector and the hybrid neuro-fuzzy classifier.
Feature extraction was based on the proposed m-Co-HOG vector for extracting the global features and the existing K-SIFT model for extracting the local features from the face images. The proposed m-Co-HOG vector utilizes the existing Co-HOG model for feature extraction, along with a new color gradient decomposition method. The major advantage of the proposed m-Co-HOG vector is that it utilizes the color features of the image along with other features during the histogram operation.
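A plain Co-HOG-style descriptor counts how often pairs of quantized gradient orientations co-occur at a fixed pixel offset. The sketch below illustrates that core idea only; the offset, bin count and normalization are illustrative choices, and the paper's m-Co-HOG color gradient decomposition is not reproduced:

```python
import numpy as np

def co_hog(image, n_bins=8, offset=(0, 1)):
    """Co-occurrence histogram of quantized gradient orientations.

    Counts how often orientation bin p at a pixel co-occurs with
    orientation bin q at pixel + offset, yielding an n_bins x n_bins
    histogram, flattened and normalized into the feature vector.
    """
    gy, gx = np.gradient(image.astype(float))      # gradients per axis
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # orientations in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    dy, dx = offset
    h, w = bins.shape
    hist = np.zeros((n_bins, n_bins))
    for i in range(max(0, -dy), min(h, h - dy)):
        for j in range(max(0, -dx), min(w, w - dx)):
            hist[bins[i, j], bins[i + dy, j + dx]] += 1
    return hist.ravel() / hist.sum()
```

Using several offsets and concatenating the resulting histograms gives the richer spatial context that distinguishes Co-HOG from a plain HOG.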


Author(s):  
Sandip Joardar ◽  
Arnab Sanyal ◽  
Dwaipayan Sen ◽  
Diparnab Sen ◽  
Amitava Chatterjee

2021 ◽  
Vol 9 (2) ◽  
pp. 10-15
Author(s):  
Harendra Singh ◽  
Roop Singh Solanki

In this research paper, a new modified approach is proposed for brain tumor classification and feature extraction from Magnetic Resonance Imaging (MRI) after pre-processing of the images. The discrete wavelet transform (DWT) technique is used for feature extraction from MRI images, and an Artificial Neural Network (ANN) is used to classify the type of tumor according to the extracted features. Mean, standard deviation, variance, entropy, skewness, homogeneity, contrast and correlation are the main features used to classify the type of tumor. The proposed model gives better results than other available techniques, with less computational time and a high degree of accuracy. The training and testing accuracies of the proposed model are 100% and 98.20%, respectively, with a precision of 98.70%.
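The DWT-then-statistics pipeline can be sketched with one level of a Haar transform followed by a few of the listed features. This is a minimal NumPy illustration; the paper's exact wavelet family, decomposition depth and full feature set may differ:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform (image dims must be even).
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0   # row averages
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0   # row differences
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def stat_features(band):
    """Mean, standard deviation, variance, entropy and skewness of a sub-band."""
    x = band.ravel().astype(float)
    mean, std = x.mean(), x.std()
    var = std ** 2
    p, _ = np.histogram(x, bins=16)
    p = p[p > 0] / p.sum()
    entropy = -np.sum(p * np.log2(p))
    skew = np.mean(((x - mean) / std) ** 3) if std > 0 else 0.0
    return [mean, std, var, entropy, skew]
```

The feature vectors produced this way (one per sub-band, optionally per decomposition level) are what the ANN consumes for classification.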


2010 ◽  
Vol 139-141 ◽  
pp. 2024-2028
Author(s):  
Jin Qing Liu ◽  
Qun Zhen Fan ◽  
Dong Cao

As safety awareness has strengthened, identity authentication technology has attracted increasing concern. Face recognition is an attractive topic in the pattern recognition and artificial intelligence fields, and face feature extraction is a very important part of face recognition. This paper first introduces the preprocessing of face images and the PCA and ICA algorithms. Considering the respective strengths and weaknesses of PCA and ICA, a novel face feature extraction method based on both PCA and ICA is then proposed. An NN classifier is selected for face classification and recognition on the ORL face database. Starting from the actual requirements, the paper analyses hardware platforms based on the DM642 and finally uses the CCS software tool to optimize the program and implement it on the DM642 to meet real-time requirements. Experiments indicate that the modified method is superior to the PCA and ICA algorithms.
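The PCA-features-into-nearest-neighbour pipeline that this abstract builds on can be sketched as follows. This is illustrative NumPy code with invented toy data, not the paper's combined PCA/ICA method or its DM642 implementation:

```python
import numpy as np

def pca_fit(X, n_components):
    """Learn a PCA projection from row-vector samples X (n_samples x n_features)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal axes via SVD of the centred data (rows of Vt are components).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(X, mean, components):
    """Project samples onto the learned principal axes."""
    return (X - mean) @ components.T

def nn_classify(train_feats, train_labels, query_feat):
    """1-nearest-neighbour classification by Euclidean distance."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In the hybrid scheme the abstract describes, an ICA stage would follow the PCA projection before the NN comparison; the classifier itself is unchanged.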


Author(s):  
Guojun Lin ◽  
Meng Yang ◽  
Linlin Shen ◽  
Mingzhong Yang ◽  
Mei Xie

For face recognition, conventional dictionary learning (DL) methods have some disadvantages. First, face images of the same person vary with facial expression, pose, illumination and disguises, so it is hard to obtain a robust dictionary for face recognition. Second, they do not completely cover important components (e.g., particularity and disturbance), which limits their performance. In this paper, we propose a novel robust and discriminative DL (RDDL) model. The proposed model uses sample diversities of the same face image to learn a robust dictionary, which includes class-specific dictionary atoms and disturbance dictionary atoms. These atoms can represent the data from different classes well. Discriminative regularizations on the dictionary and the representation coefficients are used to exploit discriminative information, which effectively improves the classification capability of the dictionary. The proposed RDDL is extensively evaluated on benchmark face image databases, and it shows superior performance to many state-of-the-art dictionary learning methods for face recognition.
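The classification side of such models, assigning a query to the class whose atoms reconstruct it with the smallest residual, can be illustrated minimally. The toy dictionaries and ridge-regularized coding below are assumptions for illustration; RDDL's joint learning of class-specific and disturbance atoms and its discriminative regularizations are not shown:

```python
import numpy as np

def residual_classify(dictionaries, y, lam=0.01):
    """Classify y as the class whose class-specific atoms leave the smallest
    reconstruction residual under ridge-regularised coding."""
    best_class, best_res = None, np.inf
    for c, D in dictionaries.items():
        # x = argmin ||y - D x||^2 + lam ||x||^2  (closed form)
        x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
        res = np.linalg.norm(y - D @ x)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

A disturbance sub-dictionary shared across classes would be appended to every `D` so that nuisance variation is absorbed there rather than corrupting the class decision.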


Sensor Review ◽  
2019 ◽  
Vol 39 (1) ◽  
pp. 107-120 ◽  
Author(s):  
Deepika Kishor Nagthane ◽  
Archana M. Rajurkar

Purpose One of the main reasons for the increase in the mortality rate of women is breast cancer. Accurate early detection seems to be the only solution for diagnosis. In the field of breast cancer research, many new computer-aided diagnosis systems have been developed to reduce diagnostic-test false positives caused by the subtle appearance of breast cancer tissues. The purpose of this study is to develop a diagnosis technique for breast cancer using the LCFS and TreeHiCARe classifier model. Design/methodology/approach The proposed diagnosis methodology initiates with the pre-processing procedure. Subsequently, feature extraction is performed, in which the image features that preserve the characteristics of the breast tissues are extracted. Consequently, feature selection is performed by the proposed least-mean-square (LMS)-Cuckoo search feature selection (LCFS) algorithm. Selection from the vast range of features extracted from the images is performed with the help of the optimal cut point provided by the LCFS algorithm. Then, an image transaction database table is developed using the keywords of the training images and the feature vectors. The transactions resemble itemsets, and association rules are generated from the transaction representation based on the Apriori algorithm, keeping rules with high conviction ratio and lift. After association rule generation, the proposed TreeHiCARe classifier model is applied in the diagnosis methodology. In the TreeHiCARe classifier, a new feature index is developed for the selection of a central feature for the decision tree, centered on which the classification of images into normal or abnormal is performed. Findings The performance of the proposed method is validated against existing works using accuracy, sensitivity and specificity measures. Experimentation of the proposed method on the Mammographic Image Analysis Society database resulted in classification of normal and abnormal cancerous mammogram images with an accuracy of 0.8289, sensitivity of 0.9333 and specificity of 0.7273. Originality/value This paper proposes a new approach for a breast cancer diagnosis system using mammogram images. The proposed method uses two new algorithms: LCFS and TreeHiCARe. LCFS is used to select optimal feature split points, and TreeHiCARe is a decision tree classifier model based on association rule agreements.
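Conviction and lift, the rule-selection measures named in the abstract, have standard definitions that can be computed directly. The transactions and item names below are invented for illustration:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, lift and conviction of the rule
    antecedent -> consequent over a list of transaction sets."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions) / n            # supp(A)
    b = sum(consequent <= t for t in transactions) / n            # supp(B)
    ab = sum((antecedent | consequent) <= t for t in transactions) / n
    conf = ab / a                                                 # conf(A->B)
    lift = conf / b                          # > 1 means positive association
    conviction = (1 - b) / (1 - conf) if conf < 1 else float("inf")
    return {"support": ab, "confidence": conf,
            "lift": lift, "conviction": conviction}
```

Rules whose lift and conviction exceed chosen thresholds would be the ones retained to drive the decision-tree classification step.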


2020 ◽  
Vol 41 ◽  
pp. 106-112 ◽  
Author(s):  
Celia Cintas ◽  
Manuel Lucena ◽  
José Manuel Fuertes ◽  
Claudio Delrieux ◽  
Pablo Navarro ◽  
...  

2020 ◽  
Vol 8 (3) ◽  
pp. 234-238
Author(s):  
Nur Choiriyati ◽  
Yandra Arkeman ◽  
Wisnu Ananta Kusuma

An open challenge in bioinformatics is the analysis of metagenomes sequenced from various environments. Several studies have demonstrated bacteria classification at the genus level using k-mers for feature extraction, where a higher value of k gives better accuracy but is costly in terms of computational resources and computation time. The spaced k-mers method was used to extract features from the sequences using the pattern 111 1111 10001, where 1 marks a position that must match and 0 marks a position that may or may not match. Currently, deep learning provides the best solutions to many problems in image recognition, speech recognition and natural language processing. In this research, two different deep learning architectures, namely a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN), were trained for the taxonomic classification of metagenome data, with the spaced k-mers method used for feature extraction. The results showed that the DNN classifier reached 90.89% and the CNN classifier reached 88.89% accuracy at the genus taxonomy level.
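The spaced k-mer idea, match positions marked 1 and wildcard positions marked 0, can be sketched as follows. The short 4-wide mask here is purely illustrative and is not the paper's seed pattern:

```python
from collections import Counter

def spaced_kmers(sequence, mask):
    """Count spaced k-mers: characters under '1' positions of the mask form
    the key; characters under '0' positions are wildcards and are dropped."""
    keep = [i for i, m in enumerate(mask) if m == "1"]
    span = len(mask)
    counts = Counter()
    for start in range(len(sequence) - span + 1):
        window = sequence[start:start + span]
        counts["".join(window[i] for i in keep)] += 1
    return counts

counts = spaced_kmers("ACGTACGT", "1101")  # illustrative mask, toy sequence
```

Because wildcard positions are dropped from the key, spaced k-mers tolerate point mutations at those positions while keeping the feature space smaller than a contiguous k-mer of the same span.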


2020 ◽  
Vol 3 (2) ◽  
pp. 222-235
Author(s):  
Vivian Nwaocha ◽  
◽  
Ayodele Oloyede ◽  
Deborah Ogunlana ◽  
Michael Adegoke ◽  
...  

Face images undergo a considerable amount of variation in pose, facial expression and illumination conditions. This large variation in the facial appearance of the same individual makes most Existing Face Recognition Systems (E-FRS) lack strong discrimination ability and makes them time-inefficient for face representation, owing to the holistic feature extraction techniques used. In this paper, a novel face recognition framework is proposed, which extends standard Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to two-dimensional Principal Component Analysis (2D-PCA) and two-dimensional Independent Component Analysis (2D-ICA), respectively. The choice of 2D is advantageous because the image covariance matrix can be constructed directly from the original image matrices. The face images used in this study were acquired from the publicly available ORL and AR face databases. The features belonging to similar classes were grouped and their correlation calculated in the same order. Each technique was decomposed into different components by employing multi-dimensional grouped empirical mode decomposition using a Gaussian function. The nearest neighbor (NN) classifier is used for classification. The evaluation results showed that, on the ORL database, the 2D-PCA method produced a recognition accuracy (RA) of 92.5%, PCA produced an RA of 75.00%, ICA produced an RA of 77.5% and 2D-ICA produced an RA of 96.00%. On the AR database, the 2D-PCA method produced an RA of 73.56%, PCA produced an RA of 62.41%, ICA produced an RA of 66.20% and 2D-ICA produced an RA of 77.45%. This study revealed that the developed face recognition framework achieves improvements of 18.5% and 11.25% for the ORL and AR databases, respectively, over the PCA and ICA feature extraction techniques. Keywords: computer vision, dimensionality reduction techniques, face recognition, pattern recognition
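The defining step of 2D-PCA, building the image covariance matrix directly from image matrices rather than from flattened vectors, can be sketched as below. This is a minimal NumPy illustration of the standard 2D-PCA formulation; the paper's grouping and empirical mode decomposition steps are not shown:

```python
import numpy as np

def two_d_pca(images, n_components):
    """2D-PCA: form the image covariance matrix directly from the original
    h x w image matrices (no vectorisation), then return the top
    n_components projection axes (columns of X) and the mean image."""
    mean = np.mean(images, axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)       # ascending eigenvalues
    X = eigvecs[:, ::-1][:, :n_components]     # top axes first
    return X, mean

# A feature matrix for an h x w image A is then Y = A @ X  (h x n_components).
```

Because `G` is only w x w instead of (h*w) x (h*w), the eigen-decomposition is far cheaper than in vectorised PCA, which is the time-efficiency argument the abstract makes.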

