Ensemble of Deep Convolutional Neural Networks With Gabor Face Representations for Face Recognition

2020 · Vol 29 · pp. 3270-3281
Author(s): Jae Young Choi, Bumshik Lee
Author(s): Ridha Ilyas Bendjillali, Mohammed Beladgham, Khaled Merit, Abdelmalik Taleb-Ahmed

In the last decade, facial recognition has become one of the most important fields of research in biometric technology. In this paper, we present a Face Recognition (FR) system divided into three steps: the Viola-Jones face detection algorithm, facial image enhancement using the Modified Contrast Limited Adaptive Histogram Equalization (M-CLAHE) algorithm, and feature learning for classification. For feature learning followed by classification, we used the VGG16, ResNet50, and Inception-v3 Convolutional Neural Network (CNN) architectures. Our experiments were performed on the Extended Yale B and CMU PIE face databases. Finally, comparison with other methods on both databases shows the robustness and effectiveness of the proposed approach, with the Inception-v3 architecture achieving recognition rates of 99.44% and 99.89%, respectively.
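A minimal sketch of the three-step pipeline described in this abstract, assuming OpenCV's Haar-cascade detector as the Viola-Jones implementation, standard CLAHE in place of the paper's M-CLAHE variant, and Keras' pretrained Inception-v3 as the feature-learning backbone. The helper names (detect_face, enhance, build_classifier, preprocess) and parameter values are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np
import tensorflow as tf

def detect_face(gray_img):
    """Step 1: Viola-Jones face detection via OpenCV's Haar cascade."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray_img[y:y + h, x:x + w]

def enhance(face_img):
    """Step 2: contrast enhancement (plain CLAHE here, not the paper's M-CLAHE)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(face_img)

def build_classifier(num_classes):
    """Step 3: Inception-v3 backbone with a new classification head."""
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # train only the new head first
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def preprocess(face_img):
    """Resize, replicate to 3 channels, and scale for Inception-v3."""
    resized = cv2.resize(face_img, (299, 299))
    rgb = cv2.cvtColor(resized, cv2.COLOR_GRAY2RGB).astype(np.float32)
    return tf.keras.applications.inception_v3.preprocess_input(rgb)
```

The same head-swapping pattern applies to the VGG16 and ResNet50 backbones mentioned in the abstract; only the base model constructor and input size change.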


2021 · Vol 9
Author(s): Hui Liu, Zi-Hua Mo, Hang Yang, Zheng-Fu Zhang, Dian Hong, ...
Background: Williams-Beuren syndrome (WBS) is a rare genetic syndrome with a characteristic "elfin" facial gestalt. The "elfin" facial characteristics include a broad forehead, periorbital puffiness, flat nasal bridge, short upturned nose, wide mouth, thick lips, and pointed chin. Recently, deep convolutional neural networks (CNNs) have been successfully applied to facial recognition for diagnosing genetic syndromes. However, there is little research on WBS facial recognition using deep CNNs.
Objective: The purpose of this study was to construct an automatic facial recognition model for WBS diagnosis based on deep CNNs.
Methods: The study enrolled 104 WBS children, 91 cases with other genetic syndromes, and 145 healthy children. The photo dataset used only one frontal facial photo from each participant. Five face recognition frameworks for WBS were constructed by adopting the VGG-16, VGG-19, ResNet-18, ResNet-34, and MobileNet-V2 architectures, respectively. ImageNet transfer learning was used to avoid over-fitting. The classification performance of the facial recognition models was assessed by five-fold cross-validation, and a comparison with human experts was performed.
Results: The five face recognition frameworks for WBS were constructed, and the VGG-19 model achieved the best performance. The accuracy, precision, recall, F1 score, and area under the curve (AUC) of the VGG-19 model were 92.7 ± 1.3%, 94.0 ± 5.6%, 81.7 ± 3.6%, 87.2 ± 2.0%, and 89.6 ± 1.3%, respectively. The highest accuracy, precision, recall, F1 score, and AUC of the human experts were 82.1, 65.9, 85.6, 74.5, and 83.0%, respectively. The AUCs of each human expert were inferior to the AUCs of the VGG-16 (88.6 ± 3.5%), VGG-19 (89.6 ± 1.3%), ResNet-18 (83.6 ± 8.2%), and ResNet-34 (86.3 ± 4.9%) models.
Conclusions: This study highlighted the possibility of using deep CNNs for diagnosing WBS in clinical practice. The facial recognition framework based on VGG-19 could play a prominent role in WBS diagnosis. Transfer learning can help to construct facial recognition models for genetic syndromes with small-scale datasets.
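A minimal sketch of ImageNet transfer learning with VGG-19 plus stratified five-fold cross-validation, the two ingredients named in the Methods section. The three-way label encoding (WBS / other syndrome / healthy) follows the study design, but the helper names (build_vgg19_classifier, cross_validate), dense-head sizes, and training hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_vgg19_classifier(num_classes=3):
    """VGG-19 pretrained on ImageNet with a new dense classification head."""
    base = tf.keras.applications.VGG19(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # transfer learning: freeze convolutional features
    x = tf.keras.layers.Flatten()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(images, labels, n_splits=5):
    """Stratified five-fold CV; returns mean and std of validation accuracy."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    scores = []
    for train_idx, val_idx in skf.split(images, labels):
        model = build_vgg19_classifier()
        model.fit(images[train_idx], labels[train_idx],
                  validation_data=(images[val_idx], labels[val_idx]),
                  epochs=20, batch_size=16, verbose=0)
        _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
        scores.append(acc)
    return np.mean(scores), np.std(scores)
```

Freezing the pretrained convolutional layers and training only the new head is the standard way transfer learning guards against over-fitting on a dataset of a few hundred images, which matches the abstract's rationale.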


2020 · Vol 2020 (10) · pp. 28-1-28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image and an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. A degradation-parameter estimation network is also incorporated for cases where the degradation parameters of the input images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
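A minimal sketch of a two-input classifier of the kind described above: one branch takes the degraded image, the other takes a scalar degradation parameter (e.g., a noise level or JPEG quality factor). The function name, layer sizes, and fusion by concatenation are illustrative assumptions, not the authors' exact network.

```python
import tensorflow as tf

def build_two_input_classifier(num_classes=10, image_shape=(32, 32, 3)):
    # Image branch: a small convolutional feature extractor.
    img_in = tf.keras.Input(shape=image_shape, name="degraded_image")
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)

    # Parameter branch: embeds the scalar degradation level.
    param_in = tf.keras.Input(shape=(1,), name="degradation_parameter")
    p = tf.keras.layers.Dense(16, activation="relu")(param_in)

    # Fuse the two representations and classify.
    fused = tf.keras.layers.Concatenate()([x, p])
    fused = tf.keras.layers.Dense(128, activation="relu")(fused)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(fused)

    model = tf.keras.Model(inputs=[img_in, param_in], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

When the degradation parameter is unknown at test time, the abstract's estimation network would supply the second input instead of a ground-truth value; the classifier itself is unchanged.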

