Biometric recognition using fuzzy score level fusion

Author(s):  
S. Bharathi ◽  
R. Sudhakar ◽  
Valentina E. Balas
Author(s):  
Milind E Rane ◽  
Umesh S Bhadade

The paper proposes a t-norm-based matching-score fusion approach for a multimodal heterogeneous biometric recognition system. A two-trait multimodal recognition system is developed using the palmprint and face biometric traits. First, the palmprint and face images are pre-processed, features are extracted, and a matching score is computed for each trait using the correlation coefficient; the matching scores are then combined using t-norm-based score-level fusion. Face databases (Face 94, Face 95, Face 96, FERET, FRGC) and the IITD palmprint database are used for training and testing the algorithm. The experimental results show that the proposed algorithm achieves a Genuine Acceptance Rate (GAR) of 99.7% at a False Acceptance Rate (FAR) of 0.1% and a GAR of 99.2% at a FAR of 0.01%, significantly improving the accuracy of the biometric recognition system. Compared to existing works, the proposed algorithm provides 0.53% higher accuracy at a FAR of 0.1% and 2.77% higher accuracy at a FAR of 0.01%.
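
As a rough illustration of the pipeline described above, the sketch below combines two correlation-coefficient match scores with a t-norm. The abstract does not specify which t-norm, normalization, or threshold is used, so the Hamacher t-norm, the clipping to [0, 1], and the decision threshold here are illustrative assumptions rather than the authors' exact method.

```python
import numpy as np

def correlation_score(template, query):
    """Matching score as the Pearson correlation coefficient between a stored
    template feature vector and a query feature vector, clipped to [0, 1]."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    q = (query - query.mean()) / (query.std() + 1e-12)
    return float(np.clip(np.mean(t * q), 0.0, 1.0))

def hamacher_tnorm(a, b, p=0.5):
    """Hamacher t-norm T(a, b); reduces to the product t-norm when p = 1.
    The choice of this particular t-norm is an assumption for illustration."""
    denom = p + (1 - p) * (a + b - a * b)
    return (a * b) / denom if denom > 0 else 0.0

def fuse_scores(palm_score, face_score):
    """t-norm score-level fusion of two normalized match scores in [0, 1]."""
    return hamacher_tnorm(palm_score, face_score)

# Toy usage: fuse a palmprint and a face matching score and apply a decision threshold.
palm = correlation_score(np.random.rand(256), np.random.rand(256))
face = correlation_score(np.random.rand(256), np.random.rand(256))
fused = fuse_scores(palm, face)
decision = "genuine" if fused >= 0.35 else "impostor"  # threshold would be tuned on validation data
```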


2021 ◽  
Author(s):  
SANTHAM BHARATHY ALAGARSAMY ◽  
Kalpana Murugan

Abstract A multimodal biometric system uses more than one biometric modality of an individual to mitigate some of the limitations of a unimodal biometric system and to improve its accuracy, security, and so on. In this paper, an integrated multimodal biometric system is proposed for identifying people using ear and face as inputs, with pre-processing, ring projection, data normalization, AARK threshold segmentation, DWT feature extraction, and classifiers. Individual matching scores are then obtained from the different modalities. In the experiments, the proposed framework produced better results than the individual ear and face biometrics tested. The final results are then used to certify a person as genuine or an impostor. The proposed framework was verified on the IIT Delhi ear database and the ORL face database and showed an identification accuracy of 96.24%.
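
A minimal sketch of the matching and score-fusion part of such a pipeline is shown below, assuming DWT approximation coefficients as features and a weighted-sum fusion of the ear and face scores; the ring projection, AARK threshold segmentation, and classifier stages described above are omitted, and the distance measure, weights, and threshold are hypothetical.

```python
import numpy as np
import pywt

def dwt_features(image, wavelet="haar", level=2):
    """Approximation coefficients of a 2-D DWT, flattened into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

def match_score(feat_a, feat_b):
    """Similarity in [0, 1] derived from a normalized Euclidean distance."""
    d = np.linalg.norm(feat_a - feat_b) / (np.linalg.norm(feat_a) + np.linalg.norm(feat_b) + 1e-12)
    return 1.0 - d

def fuse(ear_score, face_score, w_ear=0.5, w_face=0.5):
    """Weighted-sum score-level fusion; equal weights are an illustrative assumption."""
    return w_ear * ear_score + w_face * face_score

# Toy usage with random 64x64 "images"; real inputs would be segmented ear and face regions.
ear_s = match_score(dwt_features(np.random.rand(64, 64)), dwt_features(np.random.rand(64, 64)))
face_s = match_score(dwt_features(np.random.rand(64, 64)), dwt_features(np.random.rand(64, 64)))
is_genuine = fuse(ear_s, face_s) >= 0.6  # threshold chosen on validation data
```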


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5523 ◽  
Author(s):  
Nada Alay ◽  
Heyam H. Al-Baity

With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely used in our everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, which is based on a deep learning algorithm for recognizing humans using the biometric modalities of iris, face, and finger vein. The structure of the system is based on convolutional neural networks (CNNs), which extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for iris, one for face, and one for finger vein. To build each CNN model, the well-known pre-trained VGG-16 model was used, the Adam optimization method was applied, and categorical cross-entropy was used as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. To fuse the CNN models, different fusion approaches (feature-level and score-level fusion) were employed to explore their influence on recognition performance. The performance of the proposed system was empirically evaluated by conducting several experiments on the SDUMLA-HMT dataset, which is a multimodal biometrics dataset. The obtained results demonstrated that using three biometric traits in biometric identification systems yields better results than using two or one. The results also showed that our approach comfortably outperformed other state-of-the-art methods, achieving an accuracy of 99.39% with a feature-level fusion approach and an accuracy of 100% with different methods of score-level fusion.
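
The sketch below illustrates the general structure described above: one VGG-16-based branch per modality trained with the Adam optimizer and categorical cross-entropy, dropout to reduce overfitting, and score-level fusion by averaging the three softmax outputs. The head sizes, learning rate, class count, and averaging rule are assumptions for illustration and not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 106  # assumed number of subjects; adjust to the actual dataset split

def build_branch(input_shape=(224, 224, 3)):
    """One modality branch: frozen VGG-16 base plus a small softmax head."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # dropout to reduce overfitting
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

iris_net, face_net, vein_net = build_branch(), build_branch(), build_branch()

def score_fusion(x_iris, x_face, x_vein):
    """Score-level fusion by averaging the three softmax score vectors."""
    scores = (iris_net.predict(x_iris) + face_net.predict(x_face) + vein_net.predict(x_vein)) / 3.0
    return np.argmax(scores, axis=1)
```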


2011 ◽  
Vol 48-49 ◽  
pp. 1010-1013 ◽  
Author(s):  
Yong Li ◽  
Jian Ping Yin ◽  
En Zhu

The performance of biometric systems can be improved by combining multiple units through score-level fusion. In this paper, different fusion rules based on match scores are comparatively studied for multi-unit fingerprint recognition. A novel fusion model for multi-unit systems is presented first. Based on this model, we analyze five common score fusion rules: sum, max, min, median, and product. Further, we propose a new rule: square. Noting that the performance of these strategies can be complementary, we introduce a mixed rule: square-sum. We prove that the square-sum rule outperforms the square and sum rules. The experimental results show the good performance of the proposed methods.
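
A small sketch of these fusion rules is given below. The sum, max, min, median, and product rules follow their standard fixed-rule definitions; the abstract does not define the square and square-sum rules precisely, so the forms used here are assumptions.

```python
import numpy as np

def fuse(scores, rule="sum"):
    """Combine per-finger match scores (values in [0, 1]) with a fixed fusion rule.

    The 'square' and 'square-sum' entries follow the paper's naming only; their
    exact definitions are not given in the abstract, so squaring each score and
    mixing it with the sum rule are assumptions made for illustration.
    """
    s = np.asarray(scores, dtype=float)
    rules = {
        "sum":        s.mean(),          # equivalent to the sum rule up to scaling
        "max":        s.max(),
        "min":        s.min(),
        "median":     np.median(s),
        "product":    s.prod(),
        "square":     np.mean(s ** 2),                        # assumed form of the square rule
        "square-sum": 0.5 * (np.mean(s ** 2) + s.mean()),     # assumed mix of square and sum
    }
    return rules[rule]

# Example: two fingerprint units from the same subject.
print(fuse([0.82, 0.91], rule="square-sum"))
```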


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yanping Zhang ◽  
Jing Peng ◽  
Xiaohui Yuan ◽  
Lisi Zhang ◽  
Dongzi Zhu ◽  
...  

Abstract Recognizing plant cultivars reliably and efficiently can benefit plant breeders in terms of property rights protection and innovation of germplasm resources. Although leaf image-based methods have been widely adopted in plant species identification, they seldom have been applied in cultivar identification due to the high similarity of leaves among cultivars. Here, we propose an automatic leaf image-based cultivar identification pipeline called MFCIS (Multi-feature Combined Cultivar Identification System), which combines multiple leaf morphological features collected by persistent homology and a convolutional neural network (CNN). Persistent homology, a multiscale and robust method, was employed to extract the topological signatures of leaf shape, texture, and venation details. A CNN-based algorithm, the Xception network, was fine-tuned for extracting high-level leaf image features. For fruit species, we benchmarked the MFCIS pipeline on a sweet cherry (Prunus avium L.) leaf dataset with >5000 leaf images from 88 varieties or unreleased selections and achieved a mean accuracy of 83.52%. For annual crop species, we applied the MFCIS pipeline to a soybean (Glycine max L. Merr.) leaf dataset with 5000 leaf images of 100 cultivars or elite breeding lines collected at five growth periods. The identification models for each growth period were trained independently, and their results were combined using a score-level fusion strategy. The classification accuracy after score-level fusion was 91.4%, which is much higher than the accuracy when utilizing each growth period independently or mixing all growth periods. To facilitate the adoption of the proposed pipelines, we constructed a user-friendly web service, which is freely available at http://www.mfcis.online.
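
As an illustration of the score-level fusion across growth periods, the sketch below averages the per-period softmax score vectors before taking the arg-max. Equal period weights are an assumption, since the abstract does not specify the fusion weights, and the example values are hypothetical.

```python
import numpy as np

def fuse_period_scores(period_scores, weights=None):
    """Score-level fusion of per-growth-period softmax outputs.

    period_scores: list of arrays, each of shape (num_cultivars,), one per period.
    weights: optional per-period weights; equal weighting is assumed here because
    the fusion weights are not specified in the abstract.
    """
    scores = np.stack(period_scores)                # shape (num_periods, num_cultivars)
    if weights is None:
        weights = np.ones(len(period_scores)) / len(period_scores)
    fused = np.average(scores, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# Example with 3 growth periods and 5 hypothetical cultivars.
p1 = np.array([0.10, 0.60, 0.10, 0.10, 0.10])
p2 = np.array([0.05, 0.55, 0.20, 0.10, 0.10])
p3 = np.array([0.15, 0.50, 0.15, 0.10, 0.10])
predicted_cultivar, fused_scores = fuse_period_scores([p1, p2, p3])
```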


Author(s):  
Saliha Artabaz ◽  
Layth Sliman ◽  
Hachemi Nabil Dellys ◽  
Karima Benatchba ◽  
Mouloud Koudil
