Thermal Face Verification through Identification

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3301
Author(s):  
Artur Grudzień ◽  
Marcin Kowalski ◽  
Norbert Pałka

This paper reports on a new approach to face verification in long-wavelength infrared radiation. Two face images were combined into one double image, which was then used as the input for a classification based on neural networks. For testing, we exploited two external and one homemade thermal face database, acquired under various conditions. The method achieves a true acceptance rate of about 83%. We showed that the proposed method outperforms the other baseline methods studied by about 20 percentage points. We also analyzed ways of further extending the performance of the algorithms. We believe that the proposed double image method can also be applied to other spectral ranges and to modalities other than the face.
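
The abstract does not specify how the two images are fused; as a minimal illustrative sketch, assuming NumPy and same-sized single-channel thermal crops, the double image could be formed by simple side-by-side concatenation before being fed to a binary same/different classifier:

```python
import numpy as np

def make_double_image(face_a: np.ndarray, face_b: np.ndarray) -> np.ndarray:
    """Concatenate two same-sized thermal face crops side by side.

    The resulting "double image" can be fed to a binary classifier
    that decides whether both halves show the same identity.
    """
    if face_a.shape != face_b.shape:
        raise ValueError("both faces must share the same resolution")
    return np.hstack([face_a, face_b])

# Example: two 128x128 thermal crops become one 128x256 input.
pair = make_double_image(np.zeros((128, 128)), np.ones((128, 128)))
print(pair.shape)  # (128, 256)
```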

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4096 ◽  
Author(s):  
Francisco J. Rodriguez-Lozano ◽  
Fernando León-García ◽  
M. Ruiz de Adana ◽  
Jose M. Palomares ◽  
J. Olivares

The temperature of the forehead is known to be highly correlated with internal body temperature, and this area is widely used in thermal comfort systems, lie-detection systems, and similar applications. However, there is a lack of non-intrusive tools for segmenting the forehead in thermographic images; in practice, it is usually segmented manually. This work proposes a simple and novel method to segment the forehead region and extract its average temperature, addressing this lack of tools that require no user interaction. Our method is invariant to the position of the face and to different morphologies, even in the presence of external objects. The results show an accuracy of 90% compared with manual segmentation, using the Jaccard coefficient as the similarity metric. Moreover, owing to its simplicity, the proposed method meets real-time constraints, running at 83 frames per second on embedded systems with low computational resources. Finally, a new dataset of thermal face images is presented, which includes features that are difficult to find in other sets, such as glasses, beards, moustaches, breathing masks, and different neck rotations and flexions.
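
The segmentation algorithm itself is not described in the abstract; the sketch below is a stand-in using simple percentile thresholding on a calibrated temperature map, shown together with the Jaccard coefficient the authors use as their similarity metric. All parameter values are illustrative assumptions:

```python
import numpy as np

def segment_forehead(temp_map: np.ndarray, pct: float = 95.0) -> np.ndarray:
    """Toy stand-in: keep the warmest pixels as the forehead mask."""
    return temp_map >= np.percentile(temp_map, pct)

def mean_forehead_temperature(temp_map: np.ndarray, mask: np.ndarray) -> float:
    """Average temperature over the segmented region."""
    return float(temp_map[mask].mean())

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard coefficient (intersection over union) of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0

# Example: compare the toy mask against a (here, synthetic) manual mask.
temps = 30.0 + 7.0 * np.random.default_rng(0).random((120, 160))
auto = segment_forehead(temps)
manual = segment_forehead(temps, pct=94.0)
print(mean_forehead_temperature(temps, auto), jaccard(auto, manual))
```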


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Gabriel Hermosilla ◽  
José Luis Verdugo ◽  
Gonzalo Farias ◽  
Esteban Vera ◽  
Francisco Pizarro ◽  
...  

The aim of this study is to propose a system capable of recognising the identity of a person and indicating whether that person is drunk, using only information extracted from thermal face images. The proposed system is divided into two stages: face recognition and classification. In the face recognition stage, test images are recognised using robust face recognition algorithms: the Weber local descriptor (WLD) and the local binary pattern (LBP). The classification stage uses the Fisher linear discriminant to reduce the dimensionality of the features, which are then classified using a classifier based on a Gaussian mixture model, creating a classification space for each person and extending the state-of-the-art concept of a “DrunkSpace Classifier.” The system was validated using a new drunk-person database, specially designed for this work. The main results show that the face recognition stage performed at 100% with both algorithms, while drunk identification reached 86.96%, a very promising result considering that our database comprises 46 individuals, in comparison with others found in the literature.
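
A minimal sketch of the pipeline shape described (LBP features, Fisher linear discriminant for dimensionality reduction, one Gaussian mixture per class), using scikit-image and scikit-learn with placeholder data; the feature settings and mixture sizes are assumptions, not the paper's configuration:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def lbp_histogram(image: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP histogram as a global texture descriptor."""
    codes = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Placeholder data: 40 thermal faces, two classes (sober = 0, drunk = 1).
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = np.repeat([0, 1], 20)
features = np.array([lbp_histogram(img) for img in images])

# Fisher LDA projects the features onto a 1-D discriminative axis.
lda = LinearDiscriminantAnalysis(n_components=1)
reduced = lda.fit_transform(features, labels)

# One Gaussian mixture per class; predict by the highest log-likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(reduced[labels == c])
        for c in (0, 1)}
scores = np.stack([gmms[c].score_samples(reduced) for c in (0, 1)], axis=1)
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```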


Author(s):  
Fadhlan Hafizhelmi Kamaru Zaman ◽  
Juliana Johari ◽  
Ahmad Ihsan Mohd Yassin

Face verification focuses on the task of determining whether two face images belong to the same identity. For unrestricted faces in the wild, this is a very challenging task: besides significant degradation from large variations in pose, illumination, expression, aging, and occlusion, it also suffers from the large-scale, ever-expanding data needed to perform one-to-many recognition. In this paper, we propose a face verification method that learns face similarities using a convolutional neural network (ConvNet). Instead of extracting features from each face image separately, our ConvNet model jointly extracts relational visual features from the two face images under comparison. We train four hybrid ConvNet models to distinguish similarities between face pairs of four different face portions and join them at the top-layer classifier level. As the top-layer binary classifier identifying the similarity of face pairs, we evaluate a conventional multi-layer perceptron (MLP), support vector machines (SVM), Naive Bayes, and another ConvNet. Three face-pairing configurations are discussed in this paper. Results from experiments using the Labeled Faces in the Wild (LFW) and CelebA datasets indicate that our hybrid ConvNet increases face verification accuracy by as much as 27% compared with the individual ConvNet approach. We also found that the lateral face-pair configuration with an MLP as the top-layer classifier yields the best LFW test accuracy, 87.89%, on a very strict test protocol without any face alignment, which is on par with the state of the art. Finally, we showed that our approach is more flexible in inference on out-of-sample data by testing LFW and CelebA on either model.
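
A minimal PyTorch sketch of the joint relational-feature idea, with two RGB faces stacked along the channel axis so the ConvNet sees the pair at once; the layer sizes and the single-branch layout are hypothetical, not the paper's four-portion hybrid architecture:

```python
import torch
import torch.nn as nn

class PairVerifier(nn.Module):
    """Jointly processes a face pair stacked along the channel axis."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1),  # 6 = two RGB faces
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # same identity / different identity
        )

    def forward(self, face_a: torch.Tensor, face_b: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([face_a, face_b], dim=1)  # relational joint input
        return self.classifier(self.features(pair))

logits = PairVerifier()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```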


2021 ◽  
Author(s):  
Kent Rosser ◽  
Tran Xuan Bach Nguyen ◽  
Philip Moss ◽  
Javaan Chahl

2021 ◽  
pp. 1-11
Author(s):  
Suphawimon Phawinee ◽  
Jing-Fang Cai ◽  
Zhe-Yu Guo ◽  
Hao-Ze Zheng ◽  
Guan-Chen Chen

The Internet of Things is considerably increasing the level of convenience at home, and the smart door lock is an entry product for smart homes. This work used a Raspberry Pi, chosen for its low cost, as the main control board to apply face recognition technology to a door lock. Installing a control sensing module through the GPIO expansion header of the Raspberry Pi also improved the antitheft mechanism of the lock. For ease of use, a mobile application (hereafter, app) was developed for users to upload their face images for processing. The app sends the images to Firebase; the program then downloads the images and crops the faces to build a training set. The face detection system was designed on the basis of machine learning and uses OpenCV's built-in Haar-cascade detector. The system was trained with four methods: a plain convolutional neural network, VGG-16, VGG-19, and ResNet50. After training, the program could recognize the user's face and open the door lock. A prototype was constructed that could control the door lock and the antitheft system and stream real-time images from the camera to the app.
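
A minimal sketch of the Haar-cascade detection step with OpenCV, as the abstract suggests; the camera index is an assumption, and the Firebase, CNN recognizer, and GPIO lock-control wiring are elided with comments:

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)  # assumes the Pi camera is device 0
ok, frame = cam.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = gray[y:y + h, x:x + w]  # face crop for the recognizer
        # here the trained CNN (e.g., the ResNet50 variant) would score
        # `crop`, and a GPIO pin would be toggled to release the lock
cam.release()
```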


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Takao Fukui ◽  
Mrinmoy Chakrabarty ◽  
Misako Sano ◽  
Ari Tanaka ◽  
Mayuko Suzuki ◽  
...  

Eye movements toward sequentially presented face images with or without gaze cues were recorded to investigate whether individuals with autism spectrum disorder (ASD), in comparison with their typically developing (TD) peers, could prospectively perform the task according to gaze cues. Line-drawn face images were presented sequentially for one second each on a laptop display, shifting from side to side and up and down. In the gaze-cue condition, the gaze of each face image was directed to the position where the next face would appear. Although the participants with ASD looked less at the eye area of the face images than their TD peers did, they performed smooth gaze shifts toward the gaze cue comparable to those of the TD group. This appropriate gaze shifting in the ASD group was more evident in the second half of the trials than in the first half, as revealed by the mean proportion of fixation time on the eye area relative to valid gaze data in the early phase (during face image presentation) and by the time to first fixation on the eye area. These results suggest that individuals with ASD may benefit from short-period trial experiments by increasingly making use of the gaze cue.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition under various real-world constraints, including uneven illumination, head deflection, and varying facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned using the positions of the facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the corresponding facial expressions, and the spatial features are passed through the hybrid attention module to obtain fused expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yield recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves accuracy on the AFEW dataset by more than 2%, a significant gain for facial expression recognition in natural environments.
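
A minimal PyTorch sketch of the cascade's shape, using torchvision's resnet18 as a stand-in residual backbone, a simple learned frame weighting as a stand-in for the hybrid attention module, and a GRU for the temporal stage; all dimensions are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CascadeFER(nn.Module):
    """Spatial CNN -> attention weighting -> GRU -> expression classifier."""

    def __init__(self, n_classes: int = 7, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d spatial features per frame
        self.backbone = backbone
        self.attn = nn.Linear(512, 1)        # stand-in for the hybrid attention
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, 512)
        weights = torch.softmax(self.attn(feats), dim=1)
        feats = feats * weights              # attention-weighted spatial features
        out, _ = self.gru(feats)             # temporal features per frame
        return self.head(out[:, -1])         # classify from the last time step

logits = CascadeFER()(torch.rand(2, 8, 3, 112, 112))  # 2 clips of 8 frames
print(logits.shape)  # torch.Size([2, 7])
```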


2001 ◽  
Vol 30 (6) ◽  
pp. 723-727 ◽  
Author(s):  
C. D. Maxey ◽  
M. U. Ahmed ◽  
C. L. Jones ◽  
R. A. Catchpole ◽  
P. Capper ◽  
...  
