Unconstrained Ear Recognition through Domain Adaptive Deep Learning Models of Convolutional Neural Network

2019 ◽  
Vol 8 (2) ◽  
pp. 3143-3150 ◽  

The scarcity of ear datasets motivates the adoption of domain-adaptive deep learning, or transfer learning, in the development of ear biometric recognition. Ear recognition is a variation of biometrics that is becoming popular in various areas of research because of the advantages ears offer for human identity recognition. In this paper, nine handpicked CNN architectures (AlexNet, GoogLeNet, Inception-v3, Inception-ResNet-v2, ResNet-18, ResNet-50, SqueezeNet, ShuffleNet, and MobileNet-v2) are explored and compared for use in unconstrained ear biometric recognition. A set of 250 unconstrained ear images is collected from the web through web crawlers and preprocessed with basic image processing methods, including contrast-limited adaptive histogram equalization (CLAHE), to improve ear image quality. Each CNN architecture is analyzed structurally and fine-tuned to satisfy the requirements of ear recognition. The earlier layers of each architecture are used as feature extractors, while the last two to three layers are replaced with layers of the same kind so that the resulting ear recognition models classify 10 ear classes instead of 1,000. Eighty percent of the acquired unconstrained ear images are used for training, and the remaining 20 percent is reserved for testing and validation. The architectures are compared in terms of training time, training and validation outputs such as learned features and losses, and test results at an above-95% accuracy confidence. Among all the architectures used, ResNet, AlexNet, and GoogLeNet achieved an accuracy confidence of 97-100% and are best suited for unconstrained ear biometric recognition, while ShuffleNet, despite achieving only approximately 90%, shows promising results for a mobile version of unconstrained ear biometric recognition.
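A minimal sketch of this fine-tuning strategy, assuming PyTorch/torchvision and OpenCV, is shown below; the choice of ResNet-18, the CLAHE parameters, the placeholder image, and the learning rate are illustrative, not the paper's exact settings.

```python
# Minimal sketch: CLAHE preprocessing plus last-layer replacement for
# 10-class ear recognition. All parameters and data are illustrative.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

# CLAHE for ear image quality improvement (clip limit / tile size assumed).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # placeholder crawled ear image
enhanced = clahe.apply(gray)

# Earlier layers of a pretrained CNN act as the feature extractor.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer so the model classifies 10 ear classes instead of 1000.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the replaced head is updated during fine-tuning.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```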

Author(s):  
A. ALPASLAN ALTUN ◽  
H. ERDINC KOCER ◽  
NOVRUZ ALLAHVERDI

The accuracy of a unimodal biometric recognition system is limited by problems such as noisy data, limited degrees of freedom, and spoof attacks. A multimodal biometric system that uses two or more biometric traits of an individual can overcome such problems. We propose a multimodal biometric recognition system that fuses fingerprint and iris features at the feature extraction level. A feed-forward artificial neural network (ANN) model is used for person recognition. To shorten the training time, a feature selection step is performed: a genetic algorithm (GA) approach is used to select features from the combined data. As an experiment, a database of 60 users, with 10 fingerprint images and 10 iris images taken from each person, is used. Test results are presented in the final stage of this research.
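A rough sketch of this fusion pipeline is given below, assuming pre-extracted feature vectors for each modality; the feature dimensions, the fixed mask standing in for the GA output, and the use of scikit-learn's MLPClassifier are illustrative rather than the authors' exact implementation.

```python
# Minimal sketch: feature-level fusion of fingerprint and iris features,
# GA-style feature selection, and a feed-forward ANN (all sizes assumed).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fingerprint = rng.random((600, 128))          # 60 users x 10 samples, 128-D (assumed)
iris        = rng.random((600, 256))          # matching iris vectors, 256-D (assumed)
labels      = np.repeat(np.arange(60), 10)

# Fusion at the feature extraction level: concatenate the two modalities.
fused = np.concatenate([fingerprint, iris], axis=1)

# A genetic algorithm would evolve this binary mask; a random mask stands in here.
mask = rng.random(fused.shape[1]) > 0.5
selected = fused[:, mask]

# Feed-forward ANN trained on the selected, fused features.
ann = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)
ann.fit(selected, labels)
```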


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Anfu Zhu ◽  
Shuaihao Chen ◽  
Fangfang Lu ◽  
Congxiao Ma ◽  
Fengrui Zhang

Identifying defects in tunnel lining is a labor-intensive and time-consuming task that currently relies mainly on manual operation. This paper takes ground-penetrating radar images of internal lining defects as the research object, builds automatic recognition models with the popular VGG16 and ResNet34 convolutional neural networks (CNNs) for a comparative study, and proposes an improved ResNet34 defect-recognition model. The SGD and Adam training algorithms are used to update the network parameters, and the PyTorch deep learning framework is used to train the networks. The test results show that the ResNet34 network has a faster convergence speed, higher accuracy, and shorter training time than the VGG16 network. The ResNet34 network trained with the Adam algorithm achieves 99.08% accuracy. The improved ResNet34 network achieves an accuracy of 99.25% while reducing the parameter count by 4.22% compared with the original ResNet34, allowing it to better identify defects in the lining. The research in this paper shows that deep learning methods can provide new approaches for identifying tunnel lining defects.
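The comparative setup can be sketched as follows in PyTorch; the number of defect classes and the learning rates are placeholders, and the modifications that make up the "improved ResNet34" are not reproduced here.

```python
# Minimal sketch: VGG16 vs. ResNet34 heads for defect recognition, trained
# with either SGD or Adam in PyTorch (class count and learning rates assumed).
import torch
import torch.nn as nn
from torchvision import models

num_defect_classes = 4  # placeholder for the lining-defect categories

resnet34 = models.resnet34(pretrained=True)
resnet34.fc = nn.Linear(resnet34.fc.in_features, num_defect_classes)

vgg16 = models.vgg16(pretrained=True)
vgg16.classifier[6] = nn.Linear(vgg16.classifier[6].in_features, num_defect_classes)

criterion = nn.CrossEntropyLoss()
# The paper compares SGD and Adam for updating the network parameters.
sgd_optimizer  = torch.optim.SGD(resnet34.parameters(), lr=1e-3, momentum=0.9)
adam_optimizer = torch.optim.Adam(resnet34.parameters(), lr=1e-4)
```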


Author(s):  
Nashat Alrefai ◽  
Othman Ibrahim

Coronavirus disease 2019 (COVID-19) is a recent global pandemic that has affected many countries around the world, causing serious health problems, especially in the lungs. Although temperature testing is suggested as a first-line screen for COVID-19, it is not reliable because many diseases share the same symptoms. Thus, we propose a deep learning method based on X-ray images that uses a convolutional neural network (CNN) and transfer learning (TL) for COVID-19 diagnosis, with the gradient-weighted class activation mapping (Grad-CAM) technique used to produce visual explanations of the COVID-19 infection area in the lung. The small number of coronavirus samples was a challenge; this issue was addressed using data augmentation techniques. The study found that the proposed CNN and the modified pre-trained networks VGG16 and InceptionV3 achieved promising results for COVID-19 diagnosis using chest X-ray images. The proposed CNN was able to differentiate between 284 COVID-19 and normal patients with 98.2 percent training accuracy, 96.66 percent test accuracy, and 100.0 percent sensitivity. The modified VGG16 achieved the best classification result of all, with 100.0 percent training accuracy, 98.33 percent test accuracy, and 100.0 percent sensitivity, but the proposed CNN outperformed the others by significantly reducing computational complexity and training time.
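A rough sketch of the data augmentation and a modified pre-trained VGG16 head for the two-class problem is shown below; torchvision is assumed (the study's framework is not stated here), and the augmentation settings are illustrative.

```python
# Minimal sketch: augmentation to offset the small COVID-19 sample size and a
# modified pretrained VGG16 for a two-class (COVID-19 vs. normal) problem.
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomRotation(10),           # illustrative augmentation settings
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

vgg16 = models.vgg16(pretrained=True)
for p in vgg16.features.parameters():
    p.requires_grad = False                   # reuse ImageNet features
vgg16.classifier[6] = nn.Linear(vgg16.classifier[6].in_features, 2)
```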


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm’s performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results obtained using 120,000 frames exhibited 93% accuracy. The separate CE case exhibited substantial agreement between the deep learning algorithm scores and clinicians’ assessments (Cohen’s kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively, p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
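The case-level scoring reduces to averaging per-frame predictions on the 5-point scale and applying the reported cut-off; a minimal sketch with made-up frame scores follows.

```python
# Minimal sketch: per-frame 5-point scores averaged into a case-level
# cleansing score, then compared to the 2.95 cut-off (frame values made up).
import numpy as np

frame_scores = np.array([4, 3, 3, 5, 2, 3, 4])   # per-frame predictions, 1-5 scale
case_score = frame_scores.mean()                  # average cleansing score for the case

adequate = case_score >= 2.95                     # cut-off from the ROC analysis
print(f"cleansing score = {case_score:.2f}, adequate preparation = {adequate}")
```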


2021 ◽  
Vol 11 (13) ◽  
pp. 5880
Author(s):  
Paloma Tirado-Martin ◽  
Raul Sanchez-Reillo

Nowadays, deep learning tools are widely applied in biometrics, and electrocardiogram (ECG) biometrics is no exception. However, algorithm performance relies heavily on a representative training dataset. ECGs are subject to constant temporal variations, which makes it even more important to collect databases that represent these conditions. Nonetheless, restrictions on database publication hinder further research on this topic. This work was developed using a database that represents realistic biometric recognition scenarios, as data were acquired on different days and under different physical activities and positions. Classification was implemented with a deep learning network, BioECG, avoiding complex and time-consuming signal transformations. Exhaustive tuning was performed, including variations in enrollment length, improving ECG verification under more complex and realistic biometric conditions. Finally, this work studied one-day and two-day enrollments and their effects. Two-day enrollments resulted in large overall improvements, even when verification was performed with more unstable signals. The EER improved by 63% when a change of position was included, by up to almost 99% when visits occurred on a different day, and by up to 91% when the user experienced an increased heart rate after exercise.
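The verification improvements are reported in terms of the Equal Error Rate (EER); a minimal sketch of how an EER is computed from genuine and impostor comparison scores is shown below, with synthetic scores standing in for BioECG outputs.

```python
# Minimal sketch: EER computed from genuine/impostor comparison scores
# (synthetic scores; the BioECG network itself is not reproduced here).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine  = rng.normal(0.8, 0.1, 500)   # same-person comparison scores
impostor = rng.normal(0.4, 0.1, 500)   # different-person comparison scores

y_true  = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
y_score = np.concatenate([genuine, impostor])

fpr, tpr, _ = roc_curve(y_true, y_score)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]   # operating point where FAR ~= FRR
print(f"EER = {eer:.3f}")
```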


2021 ◽  
Vol 7 (5) ◽  
pp. 89
Author(s):  
George K. Sidiropoulos ◽  
Polixeni Kiratsa ◽  
Petros Chatzipetrou ◽  
George A. Papakostas

This paper provides a brief review of the feature extraction methods applied for finger vein recognition. The study is designed in a systematic way to shed light on the scientific interest in biometric systems based on finger vein features. The analysis spans a period of 13 years (from 2008 to 2020). The examined feature extraction algorithms are clustered into five categories and presented in a qualitative manner, focusing mainly on the techniques applied to represent the features of the finger veins that uniquely prove a human's identity. In addition, the case of non-handcrafted features learned in a deep learning framework is also examined. The conducted literature analysis revealed increasing interest in finger vein biometric systems as well as a high diversity of feature extraction methods proposed over the past several years. In the last year, however, this interest shifted to the application of Convolutional Neural Networks, following the general trend of applying deep learning models across a range of disciplines. Last but not least, this work highlights the limitations of the existing feature extraction methods and describes the research actions needed to address the identified challenges.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Junichi Tsuchiya ◽  
Kota Yokoyama ◽  
Ken Yamagiwa ◽  
Ryosuke Watanabe ◽  
Koichiro Kimura ◽  
...  

Abstract Background Deep learning (DL)-based image quality improvement is a novel technique based on convolutional neural networks. The aim of this study was to compare the clinical value of 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images obtained with the DL method with that of images obtained using a Gaussian filter. Methods Fifty patients with a mean age of 64.4 (range, 19–88) years who underwent 18F-FDG PET/CT between April 2019 and May 2019 were included in the study. PET images were obtained with the DL method in addition to conventional images reconstructed with three-dimensional time-of-flight ordered-subset expectation maximization and filtered with a Gaussian filter as a baseline for comparison. The reconstructed images were reviewed by two nuclear medicine physicians and scored from 1 (poor) to 5 (excellent) for tumor delineation, overall image quality, and image noise. For the semi-quantitative analysis, standardized uptake values in tumors and healthy tissues were compared between images obtained using the DL method and those obtained with a Gaussian filter. Results Images acquired using the DL method scored significantly higher for tumor delineation, overall image quality, and image noise compared to baseline (P < 0.001). The Fleiss' kappa value for overall inter-reader agreement was 0.78. The standardized uptake values in tumors obtained by the DL method were significantly higher than those acquired using a Gaussian filter (P < 0.001). Conclusions The DL method improves the quality of PET images.
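For the baseline arm, the conventional reconstruction is post-filtered with a Gaussian; a minimal sketch of such filtering is shown below, with an assumed filter width and voxel size (these values are not specified in the abstract).

```python
# Minimal sketch: Gaussian post-filtering of a reconstructed PET volume as the
# conventional baseline (filter FWHM and voxel size are assumed values).
import numpy as np
from scipy.ndimage import gaussian_filter

pet_volume = np.random.rand(128, 128, 200)     # placeholder reconstructed PET volume

fwhm_mm, voxel_mm = 5.0, 2.0                    # assumed FWHM and isotropic voxel size
sigma_voxels = fwhm_mm / (2.355 * voxel_mm)     # convert FWHM to sigma in voxel units
smoothed = gaussian_filter(pet_volume, sigma=sigma_voxels)
```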


Author(s):  
Meng-Chieh Lee ◽  
Yu Huang ◽  
Josh Jia-Ching Ying ◽  
Chien Chen ◽  
Vincent S. Tseng
