Robust Multimodal Biometric System Based on Feature Level Fusion of Optimised Deepnet Features

Author(s):  
Haider Mehraj ◽  
Ajaz Hussain Mir
2021 ◽  
Vol 5 (4) ◽  
pp. 229-250
Author(s):  
Chetana Kamlaskar ◽  
Aditya Abhyankar
<abstract><p>Reliable and accurate multimodal biometric person verification demands an effective discriminant feature representation and fusion of the relevant information extracted across multiple biometric modalities. In this paper, we propose feature level fusion that adopts canonical correlation analysis (CCA) to fuse the iris and fingerprint feature sets of the same person. The strength of this approach is that it extracts maximally correlated features from the feature sets of both modalities as effective discriminant information. CCA is therefore suitable for analysing the underlying relationship between the two feature spaces, and it generates more powerful feature vectors by removing redundant information. We demonstrate that efficient multimodal recognition can be achieved with a significant reduction in feature dimensions, lower computational complexity, and a recognition time of less than one second by exploiting CCA based joint feature fusion and optimization. To evaluate the performance of the proposed system, the left and right irises, and the thumb fingerprints of both hands, from the SDUMLA-HMT multimodal dataset are considered in this experiment. We show that our proposed approach significantly outperforms unimodal recognition in terms of equal error rate (EER). We also demonstrate that CCA based feature fusion outperforms match score level fusion. Further, an exploration of the correlation between right iris and left fingerprint images (EER of 0.1050%), and left iris and right fingerprint images (EER of 1.4286%), is presented to assess the effect of feature dominance and laterality of the selected modalities on a robust multimodal biometric system.</p></abstract>
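A minimal NumPy sketch of the CCA based feature level fusion the abstract describes (dimensions, regularisation, and the summation fusion rule are illustrative assumptions, not the paper's implementation): the canonical projections are obtained from the SVD of the whitened cross-covariance between the two mean-centred feature sets, and the fused vector is formed from the canonical variates.

```python
import numpy as np

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca_fuse(X, Y, d):
    """Fuse two feature sets (rows = samples) into d canonical dimensions."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # regularised covariance blocks (small ridge keeps them invertible)
    Sxx = Xc.T @ Xc / (n - 1) + 1e-6 * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + 1e-6 * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # SVD of the whitened cross-covariance yields the canonical directions
    U, s, Vt = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
    Wx = inv_sqrt(Sxx) @ U[:, :d]
    Wy = inv_sqrt(Syy) @ Vt[:d].T
    # feature level fusion by summation of the canonical variates
    return Xc @ Wx + Yc @ Wy

# toy example: 40 samples, 12-dim "iris" and 10-dim "fingerprint" features
rng = np.random.default_rng(0)
iris_feats = rng.standard_normal((40, 12))
finger_feats = rng.standard_normal((40, 10))
fused = cca_fuse(iris_feats, finger_feats, d=5)
print(fused.shape)  # (40, 5)
```

Concatenating `Xc @ Wx` and `Yc @ Wy` instead of summing them is the other common fusion rule; both operate on the same reduced canonical space, which is where the dimensionality reduction claimed in the abstract comes from.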


2020 ◽  
Vol 8 (5) ◽  
pp. 2522-2527

In this paper, we design a method for recognition of fingerprint and iris using feature level fusion and decision level fusion in a children's multimodal biometric system. Initially, Histogram of Oriented Gradients (HOG), Gabor, and maximum filter response features are extracted from both the fingerprint and iris domains and considered for identification accuracy. Combining the feature vectors of all possible features is recommended for the fusion of biometric traits. Principal Component Analysis (PCA) is used to select features from the fused vector. The reduced features are fed into the fusion classifiers K-Nearest Neighbour (KNN), Support Vector Machine (SVM), and Naive Bayes (NB). A suitable combination of features and fusion classifiers is identified for the children's multimodal biometric system. Experiments conducted on a children's fingerprint and iris database reveal that the fused combination outperforms the individual modalities. In addition, the proposed model advances over unimodal biometric systems.
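The pipeline above (concatenated per-modality features, PCA selection, then a committee of KNN/SVM/NB) can be sketched with scikit-learn; the synthetic feature matrices, class counts, and majority-vote decision rule here are illustrative assumptions standing in for the HOG/Gabor/max-filter vectors of the actual database.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# synthetic stand-ins for the fingerprint and iris feature vectors
rng = np.random.default_rng(1)
n_per_class, n_classes = 20, 5
labels = np.repeat(np.arange(n_classes), n_per_class)
centres_fp = rng.standard_normal((n_classes, 30)) * 4.0
centres_ir = rng.standard_normal((n_classes, 40)) * 4.0
finger = centres_fp[labels] + rng.standard_normal((len(labels), 30))
iris = centres_ir[labels] + rng.standard_normal((len(labels), 40))

# feature level fusion: concatenate the per-modality feature vectors
fused = np.hstack([finger, iris])

# PCA reduces the fused vector; decision level fusion by majority vote
model = make_pipeline(
    PCA(n_components=10),
    VotingClassifier(
        [("knn", KNeighborsClassifier(3)),
         ("svm", SVC()),
         ("nb", GaussianNB())],
        voting="hard",
    ),
)
model.fit(fused, labels)
print(model.score(fused, labels))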


2021 ◽  
Author(s):  
SANTHAM BHARATHY ALAGARSAMY ◽  
Kalpana Murugan

Abstract A multimodal biometric system uses more than one biometric modality of an individual to mitigate some of the limitations of a unimodal biometric system and to improve its accuracy, security, and so forth. In this paper, an integrated multimodal biometric system is proposed for the identification of people using ear and face as inputs; pre-processing, ring projection, data normalization, AARK threshold segmentation, extraction of DWT features, and classifiers are employed. Individual matches gathered from the different modalities then produce the individual scores, which are finally used to certify the individual as genuine or an impostor. In the experiments, the proposed framework showed better results than the individual ear and face biometrics tested. Verified on the IIT Delhi ear database and the ORL face database, the proposed framework achieved an identification accuracy of 96.24%.
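The final genuine/impostor decision from per-modality scores can be sketched with a standard normalise-and-combine rule; the min-max normalisation, equal weights, threshold, and toy score values below are illustrative assumptions, as the abstract does not specify the fusion rule used.

```python
import numpy as np

def min_max_norm(scores):
    """Map raw match scores to the [0, 1] range."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_and_decide(ear_scores, face_scores, threshold=0.5, w=0.5):
    """Weighted sum rule over normalised per-modality scores; accept
    as genuine when the fused score clears the threshold."""
    fused = w * min_max_norm(ear_scores) + (1 - w) * min_max_norm(face_scores)
    return fused, fused >= threshold

# toy match scores: higher means a better match (illustrative values only)
ear = np.array([0.9, 0.2, 0.6, 0.1])
face = np.array([0.8, 0.3, 0.7, 0.2])
fused, genuine = fuse_and_decide(ear, face)
print(genuine)  # [ True False  True False]
```

Min-max normalisation is only one option; z-score or tanh normalisation are common alternatives when the per-modality score distributions differ widely.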

