An Image Pre-processing on Iris, Mouth and Palm print using Deep Learning for Biometric Recognition

Author(s):  
J. Vasavi ◽  
M.S. Abirami

2021 ◽
Vol 11 (13) ◽  
pp. 5880
Author(s):  
Paloma Tirado-Martin ◽  
Raul Sanchez-Reillo

Nowadays, Deep Learning tools are widely applied in biometrics, and electrocardiogram (ECG) biometrics is no exception. However, algorithm performance relies heavily on a representative training dataset. ECGs undergo constant temporal variations, so it is especially important to collect databases that represent these conditions; nonetheless, restrictions on database publication obstruct further research on this topic. This work was developed with the help of a database that represents realistic scenarios in biometric recognition, as data were acquired on different days, during different physical activities and in different positions. Classification was implemented with a Deep Learning network, BioECG, avoiding complex and time-consuming signal transformations. Exhaustive tuning was performed, including variations in enrollment length, improving ECG verification under more complex and realistic biometric conditions. Finally, this work studied one-day and two-day enrollments and their effects. Two-day enrollments yielded large overall improvements even when verification was performed with more unstable signals: the EER improved by 63% when a change of position was included, by almost 99% when visits took place on a different day, and by up to 91% when the user's heart rate increased after exercise.
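For illustration, the sketch below (PyTorch) shows one way a template-based ECG verification step of this kind can be organized: a small 1D convolutional network maps raw, fixed-length ECG segments to embeddings, and a probe is accepted if it is close enough to the mean enrollment embedding. This is not the BioECG architecture itself, which is not described in the abstract; the segment length, layer sizes and decision threshold are illustrative assumptions.

```python
# Illustrative sketch only: a generic 1D-CNN embedding network for ECG
# verification, NOT the BioECG architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECGEmbeddingNet(nn.Module):
    """Maps a fixed-length, single-lead ECG segment to a unit-norm embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):                       # x: (batch, 1, segment_length)
        z = self.features(x).squeeze(-1)        # (batch, 64)
        return F.normalize(self.fc(z), dim=1)   # (batch, embed_dim), unit norm

def verify(model, enrollment_segments, probe_segment, threshold=0.7):
    """Accept the probe if its cosine similarity to the mean enrollment
    embedding exceeds a threshold tuned to the target EER (assumed value)."""
    model.eval()
    with torch.no_grad():
        template = model(enrollment_segments).mean(dim=0, keepdim=True)
        template = F.normalize(template, dim=1)
        probe = model(probe_segment.unsqueeze(0))    # (1, embed_dim)
        score = (template * probe).sum().item()      # cosine similarity
    return score >= threshold, score
```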


Author(s):  
Dr. I. Jeena Jacob

Biometric recognition plays a significant and unique part in applications based on personal identification because of the stability, irreplaceability and uniqueness of human biometric traits. Currently, deep learning techniques, which are capable of strong generalization and automatic learning with enhanced accuracy, are utilized in biometric recognition to develop efficient biometric systems. However, poor noise-removal abilities and accuracy degradation caused by very small disturbances make the conventional deep learning approach based on the convolutional neural network (CNN) ill-suited for biometric recognition. The capsule neural network therefore replaces the CNN, owing to its high recognition and classification accuracy, its learning capacity, and its ability to be trained with a limited number of samples compared to the CNN. The framework put forward in this paper utilizes a capsule network with fuzzified image enhancement for retina-based biometric recognition, a highly secure and reliable basis for person identification because the retina is layered behind the eye and cannot be counterfeited. The method was tested on the Face 95 database and the CASIA-Iris-Thousand dataset, and was found to be 99% accurate with an error-rate convergence of 0.3% to 0.5%.
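As a rough illustration of the pre-processing side, the sketch below implements a classic fuzzy intensification enhancement of the kind the abstract calls "fuzzified image enhancement" (fuzzification, the INT operator, defuzzification). The membership function, the parameters fe and fd, and the number of INT iterations are illustrative assumptions rather than the paper's settings, and the capsule network classifier is not reproduced here.

```python
# Illustrative sketch of fuzzy intensification-based image enhancement;
# parameters (fe, fd, iterations) are assumed, not taken from the paper.
import numpy as np

def fuzzy_enhance(gray, fe=1.0, fd=128.0, iterations=1):
    """Enhance contrast of an 8-bit grayscale image via fuzzification,
    repeated application of the INT intensification operator, and
    defuzzification back to pixel intensities."""
    g = gray.astype(np.float64)
    gmax = g.max()
    # Fuzzification: map intensities to membership values in (0, 1]
    mu = (1.0 + (gmax - g) / fd) ** (-fe)
    # Intensification (INT operator), applied one or more times
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # Defuzzification: invert the membership function
    enhanced = gmax - fd * (mu ** (-1.0 / fe) - 1.0)
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```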


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5523 ◽  
Author(s):  
Nada Alay ◽  
Heyam H. Al-Baity

With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely used in everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the biometric modalities of iris, face, and finger vein. The structure of the system is based on convolutional neural networks (CNNs), which extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for iris, one for face, and one for finger vein. To build each CNN model, the well-known pre-trained VGG-16 model was used, the Adam optimization method was applied, and categorical cross-entropy was used as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. For fusing the CNN models, different fusion approaches were employed to explore their influence on recognition performance; therefore, feature-level and score-level fusion approaches were applied. The performance of the proposed system was empirically evaluated by conducting several experiments on the SDUMLA-HMT multimodal biometrics dataset. The obtained results demonstrate that using three biometric traits in a biometric identification system yields better results than using one or two traits. The results also show that the proposed approach comfortably outperforms other state-of-the-art methods, achieving an accuracy of 99.39% with the feature-level fusion approach and an accuracy of 100% with different methods of score-level fusion.
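A minimal sketch of the feature-level fusion variant described above is given below, using PyTorch and torchvision's ImageNet pre-trained VGG-16 for the three branches (the paper does not specify this framework). The fused hidden size, dropout rate and learning rate are illustrative assumptions; num_classes is set to 106, the number of SDUMLA-HMT subjects.

```python
# Illustrative sketch: feature-level fusion of three VGG-16 branches
# (iris, face, finger vein) followed by a softmax classifier.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalVGGFusion(nn.Module):
    """Three VGG-16 feature extractors whose pooled features are
    concatenated (feature-level fusion) before classification."""
    def __init__(self, num_classes=106):
        super().__init__()
        def branch():
            vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
            return nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())
        self.iris, self.face, self.vein = branch(), branch(), branch()
        feat_dim = 512 * 7 * 7                      # VGG-16 conv output after avgpool
        self.classifier = nn.Sequential(
            nn.Linear(3 * feat_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, num_classes),            # logits; softmax applied in the loss
        )

    def forward(self, iris_img, face_img, vein_img):
        fused = torch.cat([self.iris(iris_img),
                           self.face(face_img),
                           self.vein(vein_img)], dim=1)   # feature-level fusion
        return self.classifier(fused)

# Training setup, as in the abstract: Adam optimizer and categorical
# cross-entropy (nn.CrossEntropyLoss applies log-softmax internally).
model = MultimodalVGGFusion(num_classes=106)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # lr is an assumption
criterion = nn.CrossEntropyLoss()
```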


Author(s):  
Lubab H. Albak ◽  
Raid Rafi Omar Al-Nima ◽  
Arwa Hamid Salih

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Raul Garcia-Martin ◽  
Raul Sanchez-Reillo

2019 ◽  
Vol 8 (2) ◽  
pp. 3143-3150 ◽  

The limited availability of ear datasets leads to the adoption of domain-adaptive deep learning, or transfer learning, in the development of ear biometric recognition. Ear recognition is a variant of biometrics that is becoming popular in various areas of research due to the advantages of ears for human identity recognition. In this paper, handpicked CNN architectures: AlexNet, GoogLeNet, Inception-v3, Inception-ResNet-v2, ResNet-18, ResNet-50, SqueezeNet, ShuffleNet, and MobileNet-v2, are explored and compared for use in unconstrained ear biometric recognition. 250 unconstrained ear images are collected from the web through web crawlers and preprocessed with basic image processing methods, including contrast limited adaptive histogram equalization for ear image quality improvement. Each CNN architecture is analyzed structurally and fine-tuned to satisfy the requirements of ear recognition. The earlier layers of each architecture are used as feature extractors, while the last 2-3 layers are replaced with layers of the same kind so that the ear recognition models classify 10 classes of ears instead of 1000. 80 percent of the acquired unconstrained ear images are used for training and the remaining 20 percent are reserved for testing and validation. The architectures are compared in terms of training time, training and validation outputs such as learned features and losses, and test results at above-95% accuracy confidence. Among all the architectures used, ResNet, AlexNet, and GoogLeNet achieved an accuracy confidence of 97-100% and are best suited for unconstrained ear biometric recognition, while ShuffleNet, despite achieving approximately 90%, shows promising results for a mobile version of unconstrained ear biometric recognition.
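The sketch below illustrates the transfer-learning recipe described above, using OpenCV for the CLAHE pre-processing and a pre-trained ResNet-18 (one of the listed architectures) whose final layer is replaced to classify the 10 ear classes. The CLAHE clip limit and tile size, the choice of which layers stay frozen, and the input size are illustrative assumptions.

```python
# Illustrative sketch: CLAHE pre-processing plus fine-tuning of a pre-trained
# ResNet-18 head for 10 ear classes (parameters are assumptions).
import cv2
import torch
import torch.nn as nn
from torchvision import models

def clahe_preprocess(path, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply contrast limited adaptive histogram equalization (CLAHE) on the
    L channel of the ear image, then return a 3-channel float tensor."""
    bgr = cv2.imread(path)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    lab = cv2.merge((clahe.apply(l), a, b))
    rgb = cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)
    rgb = cv2.resize(rgb, (224, 224))
    return torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

def build_ear_model(num_classes=10):
    """Keep the pre-trained layers as feature extractors and replace only the
    final fully connected layer for the 10 ear classes."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                               # freeze earlier layers
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head
    return model
```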

