Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits

Sensors, 2020, Vol. 20 (19), pp. 5523
Author(s): Nada Alay, Heyam H. Al-Baity

With the increasing demand for information security and security regulations all over the world, biometric recognition technology is now widely used in everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the biometric modalities of iris, face, and finger vein. The system is built on convolutional neural networks (CNNs) that extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for iris, one for face, and one for finger vein. Each CNN model was built on the well-known pretrained VGG-16 model, trained with the Adam optimizer and categorical cross-entropy as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. To fuse the CNN models, different fusion approaches were employed to explore their influence on recognition performance; specifically, feature level and score level fusion were applied. The performance of the proposed system was empirically evaluated through several experiments on the SDUMLA-HMT multimodal biometrics dataset. The results demonstrate that using three biometric traits in a biometric identification system yields better results than using one or two traits. The results also show that the proposed approach comfortably outperforms other state-of-the-art methods, achieving an accuracy of 99.39% with feature level fusion and an accuracy of 100% with different methods of score level fusion.
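
The abstract does not include code, but the described pipeline (three pretrained VGG-16 branches, feature level fusion, softmax classification, Adam, categorical cross-entropy, dropout) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation; the embedding size, classifier width, learning rate, and number of enrolled identities are assumptions.

```python
# Minimal sketch, not the authors' code: three pretrained VGG-16 branches
# (iris, face, finger vein) fused at the feature level and classified with
# softmax. Embedding size, classifier width, learning rate, and the number of
# enrolled identities are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 106  # assumed number of enrolled subjects

class MultimodalNet(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # One VGG-16 convolutional feature extractor per modality.
        self.iris = models.vgg16(weights="IMAGENET1K_V1").features
        self.face = models.vgg16(weights="IMAGENET1K_V1").features
        self.vein = models.vgg16(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Feature-level fusion: concatenate the three 512-d descriptors.
        self.classifier = nn.Sequential(
            nn.Linear(3 * 512, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),  # dropout against overfitting
            nn.Linear(512, num_classes),
        )

    def forward(self, iris_img, face_img, vein_img):
        feats = [self.pool(net(x)).flatten(1)
                 for net, x in ((self.iris, iris_img),
                                (self.face, face_img),
                                (self.vein, vein_img))]
        return self.classifier(torch.cat(feats, dim=1))  # logits

model = MultimodalNet()
# Adam and categorical cross-entropy, as described in the abstract
# (CrossEntropyLoss applies softmax internally).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```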

2020, Vol. 185, pp. 03035
Author(s): Jian Peng, Jingyi Wu, Yun Chen

In this paper, we present a score level fusion method for fingerprint and finger vein. Each unimodal identification system carries out image preprocessing, feature extraction, and feature matching to generate a score vector. We then apply cluster analysis to split the score range into zones of interest, and a decision tree combined with a weighted-sum approach is used to make the final decision. We test the proposed method on a standard biometric database. Three metrics, namely False Accept Rate, False Reject Rate, and Recognition Rate, are used to evaluate the experimental results, which show that the fusion system performs better than either unimodal identification system.
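
As a rough illustration of the weighted-sum step of such score level fusion (the clustering and decision-tree stages are omitted here), the sketch below min-max normalizes each unimodal score vector and combines them with assumed weights and an assumed decision threshold.

```python
# Illustrative sketch (assumed weights and threshold, not the authors'
# implementation): min-max normalize each unimodal score vector, fuse with a
# weighted sum, and accept/reject against a threshold.
import numpy as np

def min_max_normalize(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def weighted_sum_fusion(fp_scores, fv_scores, w_fp=0.6, w_fv=0.4):
    """Fuse fingerprint and finger-vein matching scores at the score level."""
    fp = min_max_normalize(fp_scores)
    fv = min_max_normalize(fv_scores)
    return w_fp * fp + w_fv * fv

def decide(fused_scores, threshold=0.5):
    """Return True for 'genuine' decisions, False for 'impostor'."""
    return fused_scores >= threshold

# Example: fused decision for a small batch of candidate matches.
fused = weighted_sum_fusion([0.82, 0.31, 0.65], [0.77, 0.25, 0.70])
print(decide(fused))
```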


Author(s): Dipti Verma, Sipi Dubey

Nowadays, the conventional security approach of using passwords can easily be forged by an unauthorized person. Hence, biometric cues such as fingerprints, voice, palm print, and face are preferable for recognition. Another important biometric trait, which also preserves liveness, is the vein pattern formed by the subcutaneous blood vessels, which contains all the properties needed for recognition. Accordingly, in this paper, we propose a multibiometric system using palm vein, hand vein, and finger vein. A holoentropy-based thresholding mechanism is newly developed for extracting the vein patterns, and a Fuzzy Brain Storm Optimization (FBSO) method is proposed for score level fusion to achieve better recognition performance. These two contributions are incorporated into the biometric recognition system, and the performance of the proposed method is analyzed using benchmark datasets of palm vein, finger vein, and hand vein images. The quantitative results are analyzed in terms of FAR, FRR, and accuracy. The outcomes show that the proposed FBSO approach attains a higher accuracy, 81.3%, than the existing methods.
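
The FBSO optimizer itself is beyond a short example, but the quantity it tunes, the per-modality weights of a score level fusion rule, can be illustrated with the sketch below, where a plain random search stands in for FBSO and all data, weights, and thresholds are placeholders.

```python
# Hedged sketch: FBSO is replaced here by a simple random search purely to
# illustrate what is being optimized (per-modality fusion weights). Scores,
# labels, and the decision threshold are placeholders.
import numpy as np

def fuse(weights, palm, hand, finger):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w[0] * palm + w[1] * hand + w[2] * finger

def accuracy(weights, palm, hand, finger, labels, thr=0.5):
    pred = fuse(weights, palm, hand, finger) >= thr
    return np.mean(pred == labels)

def search_weights(palm, hand, finger, labels, iters=1000, seed=0):
    """Stand-in for FBSO: keep the weight vector with the best accuracy."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(iters):
        w = rng.random(3)
        acc = accuracy(w, palm, hand, finger, labels)
        if acc > best_acc:
            best_w, best_acc = w / w.sum(), acc
    return best_w, best_acc

# Tiny demo with random placeholder scores and labels.
rng = np.random.default_rng(1)
palm, hand, finger = rng.random((3, 50))
labels = rng.integers(0, 2, size=50).astype(bool)
print(search_weights(palm, hand, finger, labels))
```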


Author(s): Milind E. Rane, Umesh S. Bhadade

The paper proposes a t-norm-based matching score fusion approach for a multimodal heterogeneous biometric recognition system. A two-trait multimodal recognition system is developed using the biometric traits of palmprint and face. First, the palmprint and face images are pre-processed, features are extracted, and a matching score is calculated for each trait using the correlation coefficient; the matching scores are then combined using t-norm-based score level fusion. Face databases (Face 94, Face 95, Face 96, FERET, FRGC) and the IITD palmprint database are used for training and testing the algorithm. The experimental results show that the proposed algorithm provides a Genuine Acceptance Rate (GAR) of 99.7% at a False Acceptance Rate (FAR) of 0.1% and a GAR of 99.2% at a FAR of 0.01%, significantly improving the accuracy of the biometric recognition system. Compared to existing works, the proposed algorithm provides 0.53% more accuracy at a FAR of 0.1% and 2.77% more accuracy at a FAR of 0.01%.
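
To illustrate what t-norm-based score fusion looks like in practice (the paper's exact t-norm is not stated in the abstract), the sketch below combines normalized palmprint and face matching scores with the product t-norm and a Hamacher-family t-norm; the parameter value is an assumption.

```python
# Illustrative sketch of t-norm-based score fusion (not necessarily the
# authors' t-norm): normalized palmprint and face matching scores combined
# with the product t-norm and a Hamacher-family t-norm.
import numpy as np

def product_tnorm(a, b):
    return a * b

def hamacher_tnorm(a, b, p=0.5):
    # Hamacher family t-norm; p > 0 is a tunable parameter (assumed value).
    return (a * b) / (p + (1 - p) * (a + b - a * b) + 1e-12)

def fuse_tnorm(palm_scores, face_scores, tnorm=hamacher_tnorm):
    palm = np.clip(np.asarray(palm_scores, dtype=float), 0.0, 1.0)
    face = np.clip(np.asarray(face_scores, dtype=float), 0.0, 1.0)
    return tnorm(palm, face)

# Example: fused scores for two candidate matches.
print(fuse_tnorm([0.9, 0.4], [0.8, 0.3]))
```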


2021
Author(s): Santham Bharathy Alagarsamy, Kalpana Murugan

Abstract A multimodal biometric system uses more than one biometric modality of an individual to mitigate some of the limitations of a unimodal biometric system and to improve its accuracy, security, and so forth. In this paper, an integrated multimodal biometric system is proposed for the identification of people using ear and face images as input; it employs pre-processing, ring projection, data normalization, AARK threshold segmentation, DWT feature extraction, and classifiers. Individual matches obtained from the different modalities then produce individual scores, and these final results are used to classify the person as genuine or an impostor. In experiments, the proposed framework achieved better results than the individual ear and face biometrics tested. Evaluated on the IIT Delhi ear database and the ORL face database, the proposed framework achieved an identification accuracy of 96.24%.
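
As an illustration of the DWT feature extraction step only (the ring projection, AARK segmentation, and classifier stages are omitted), the sketch below extracts approximation coefficients from a 2-D discrete wavelet transform using PyWavelets; the wavelet name and decomposition level are assumptions.

```python
# Hedged sketch (assumed wavelet and level): 2-D DWT approximation
# coefficients as a feature vector for an ear or face image.
import numpy as np
import pywt

def dwt_features(image, wavelet="haar", level=2):
    """Return a flattened vector of level-`level` approximation coefficients."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    approx = coeffs[0]  # low-frequency sub-band carries most of the structure
    return approx.flatten()

# Example with a dummy 64x64 grayscale image.
feat = dwt_features(np.random.rand(64, 64))
print(feat.shape)
```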


Author(s): Norah Abdullah Al-johani, Lamiaa A. Elrefaei

Advancements in biometrics have attained relatively high recognition rates. However, the need for a biometric system that is reliable, robust, and convenient remains. Systems that use palmprints (PP) for verification have a number of benefits, including stable line features, reduced distortion, and simple self-positioning. Dorsal hand veins (DHVs) are distinctive for every person, such that even identical twins have different DHVs, and they appear to remain stable over time. In the past, palmprint (PP) and dorsal hand vein (DHV) systems were implemented with handcrafted feature algorithms. Advances in deep learning (DL), in which features are learned by a convolutional neural network (CNN), have led to its application in PP and DHV recognition systems. In this article, a multimodal biometric system based on PP and DHV using CNN models (VGG16, VGG19, and AlexNet) is proposed. The proposed system uses two approaches: feature level fusion (FLF) and score level fusion (SLF). In the first approach, the features from PP and DHV are extracted with CNN models, fused using serial or parallel fusion, and used to train error-correcting output codes (ECOC) with a support vector machine (SVM) for classification. In the second approach, fusion at the score level is done with sum, max, and product methods under two strategies: in the first, transfer learning uses CNN models for feature extraction and classification for PP and DHV, followed by score level fusion; in the second, features are extracted with CNN models for PP and DHV and used to train ECOC with an SVM for classification, followed by score level fusion. The system was tested using two DHV databases and one PP database, with the multimodal system tested twice by pairing the PP database with each DHV database. The system achieved a very high accuracy rate.
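
The serial feature level fusion with ECOC/SVM classification can be sketched roughly as below, assuming the CNN embeddings for PP and DHV have already been extracted; the feature dimensions, labels, and the scikit-learn ECOC wrapper are placeholders rather than the authors' exact setup.

```python
# Minimal sketch (placeholder data, not the paper's pipeline): serial fusion
# of palmprint (PP) and dorsal hand vein (DHV) CNN features, classified by an
# error-correcting output codes (ECOC) wrapper around a linear SVM.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_pp = rng.normal(size=(200, 512))   # placeholder PP CNN embeddings
X_dhv = rng.normal(size=(200, 512))  # placeholder DHV CNN embeddings
y = rng.integers(0, 10, size=200)    # placeholder identity labels

# Serial fusion: concatenate the two modality feature vectors per sample.
X = np.hstack([X_pp, X_dhv])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ecoc_svm = OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)
ecoc_svm.fit(X_tr, y_tr)
print("accuracy:", ecoc_svm.score(X_te, y_te))
```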


Author(s): Yina Wu, Mohamed Abdel-Aty, Ou Zheng, Qing Cai, Shile Zhang

This paper presents an automated traffic safety diagnostics solution named the Automated Roadway Conflict Identification System (ARCIS) that uses deep learning techniques to process traffic videos collected by unmanned aerial vehicles (UAVs). A Mask region-based convolutional neural network (Mask R-CNN) is employed to improve the detection of vehicles in UAV videos. The detected vehicles are tracked by a channel and spatial reliability tracking algorithm, and vehicle trajectories are generated from the tracking results. Missing vehicles can be identified and tracked by identifying stationary vehicles and comparing the intersection over union (IoU) between the detection results and the tracking results. Rotated bounding rectangles based on the pixel-level masks generated by Mask R-CNN detection are introduced to obtain precise vehicle size and location data. Based on the vehicle trajectories, the post-encroachment time (PET) is calculated for each conflict event at the pixel level. By comparing the PET values against a threshold, conflicts, together with the pixels in which they occurred, can be reported, and various conflict types (rear-end, head-on, sideswipe, and angle) can be determined. A case study at a typical signalized intersection is presented; the results indicate that the proposed framework can significantly improve the accuracy of the output data. Moreover, safety diagnostics for the studied intersection are conducted by calculating the PET values for each conflict event. It is expected that the proposed detection and tracking method with UAVs can help diagnose road safety problems efficiently, so that appropriate countermeasures can then be proposed.
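
A simplified view of the pixel-level PET computation is sketched below: for each pixel, the time gap between successive occupations by different vehicles is compared against a threshold. The trajectory format and threshold value are assumptions, not the paper's exact data structures.

```python
# Hedged sketch of pixel-level post-encroachment time (PET). The trajectory
# format {vehicle_id: [(time_s, set_of_occupied_pixels), ...]} and the
# threshold are assumptions for illustration only.
def pixel_level_pet(trajectories, pet_threshold=1.5):
    """Return conflicts as tuples (pixel, first_vehicle, second_vehicle, pet_s)."""
    events = []
    for vid, track in trajectories.items():
        for t, pixels in track:
            for px in pixels:
                events.append((t, vid, px))
    last_seen = {}   # pixel -> (time the pixel was last occupied, vehicle id)
    conflicts = []
    for t, vid, px in sorted(events):
        if px in last_seen:
            t_prev, vid_prev = last_seen[px]
            pet = t - t_prev
            if vid_prev != vid and pet <= pet_threshold:
                conflicts.append((px, vid_prev, vid, pet))
        last_seen[px] = (t, vid)
    return conflicts

# Tiny example: vehicle 2 enters pixel (10, 12) one second after vehicle 1.
tracks = {1: [(0.0, {(10, 12)})], 2: [(1.0, {(10, 12)})]}
print(pixel_level_pet(tracks))
```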


2013, Vol. 2013, pp. 1-11
Author(s): Ujwalla Gawande, Mukesh Zaveri, Avichal Kapur

Recent times have witnessed many advancements in the fields of biometrics and multimodal biometrics, typically in the areas of security, privacy, and forensics. Even the best unimodal biometric systems often cannot achieve a high recognition rate. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as nonuniversality, and offer lower false acceptance and higher genuine acceptance rates. More reliable recognition performance is achievable because multiple pieces of evidence of the same identity are available. The work presented in this paper focuses on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using a Haar wavelet-based technique. A novel feature level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system on the extracted features. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. The simulation results show that our algorithm has a higher recognition rate and a much lower false rejection rate than existing approaches.
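
A rough sketch of the fusion-and-classification stage is given below, where Mahalanobis-style whitening of each modality's wavelet features approximates the Mahalanobis distance technique before concatenation and SVM training; the feature matrices and labels are random placeholders.

```python
# Illustrative sketch (assumed details, not the authors' algorithm):
# Haar-wavelet features from iris and fingerprint are whitened per modality
# (a Mahalanobis-style normalization), concatenated, and used to train an SVM.
import numpy as np
from sklearn.svm import SVC

def whiten(X, eps=1e-6):
    """Whitening transform of feature matrix X using its covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False) + eps * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)          # inverse square root of covariance
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return (X - mu) @ W

rng = np.random.default_rng(0)
X_iris = rng.normal(size=(120, 64))   # placeholder Haar-wavelet iris features
X_fp = rng.normal(size=(120, 64))     # placeholder fingerprint features
y = rng.integers(0, 6, size=120)      # placeholder identities

X_fused = np.hstack([whiten(X_iris), whiten(X_fp)])  # feature-level fusion
clf = SVC(kernel="rbf").fit(X_fused, y)
print("training accuracy:", clf.score(X_fused, y))
```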

