Ear Recognition Based on Fusion of Ear and Tragus Under Different Challenges

Author(s):  
Esraa Alqaralleh ◽  
Önsen Toygar

This paper proposes a 2D ear recognition approach based on the fusion of ear and tragus using a score-level fusion strategy. An attempt is made to overcome the effects of partial occlusion, pose variation and weak illumination, since the accuracy of ear recognition may be reduced when one or more of these challenges is present. In this study, the effect of each of the aforementioned challenges is estimated separately, and many ear samples affected by two different challenges concurrently are also considered. The tragus is used as a biometric trait because it is often free from occlusion; it also provides discriminative features even under different poses and illuminations. The features are extracted using local binary patterns, and the evaluation is performed on three datasets of the USTB database. It is observed that the fusion of ear and tragus can improve recognition performance compared to unimodal systems. Experimental results show that the proposed method enhances recognition rates by fusing the non-occluded parts of the ear with the tragus in the cases of partial occlusion, pose variation and weak illumination. The proposed method performs better than feature-level fusion methods and most state-of-the-art ear recognition systems.
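The LBP-and-score-fusion pipeline described above can be sketched roughly as follows (a minimal illustration, not the authors' implementation; the 3x3 LBP variant, the 256-bin histogram and the equal fusion weight are assumptions):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern: each pixel's 8 neighbours are
    thresholded against the centre pixel and packed into one byte."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def lbp_histogram(gray, bins=256):
    """Normalised LBP-code histogram used as the feature vector."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)

def fuse_scores(score_ear, score_tragus, w=0.5):
    """Score-level fusion: weighted sum of the two match scores."""
    return w * score_ear + (1 - w) * score_tragus
```

Match scores for ear and tragus would be computed separately (e.g. by histogram distance to gallery templates) and only then combined, which is what makes the fusion score-level rather than feature-level.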

Author(s):  
Mina Farmanbar ◽  
Önsen Toygar

This paper proposes hybrid approaches based on both feature-level and score-level fusion strategies to provide a recognition system that is robust against the distortions of individual modalities. In order to compare the proposed schemes, a virtual multimodal database is formed from the FERET face and PolyU palmprint databases. The proposed hybrid systems concatenate features extracted by local and global feature extraction methods such as Local Binary Patterns, Log Gabor, Principal Component Analysis and Linear Discriminant Analysis. Match-score-level fusion is performed in order to show the effectiveness and accuracy of the proposed schemes. The experimental results on these databases show a significant improvement of the proposed schemes over unimodal systems and other multimodal face–palmprint fusion methods.
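As a contrast to score-level fusion, the feature-level (concatenation) strategy combined in the hybrid schemes above can be sketched as follows (an illustrative example; the z-score normalisation step is an assumption, not necessarily the authors' choice):

```python
import numpy as np

def zscore(v):
    """Normalise a feature vector so both modalities contribute comparably."""
    v = np.asarray(v, dtype=float)
    s = v.std()
    return (v - v.mean()) / s if s > 0 else v - v.mean()

def feature_level_fusion(face_features, palm_features):
    """Feature-level fusion: concatenate the normalised feature vectors
    of the two modalities into one joint representation."""
    return np.concatenate([zscore(face_features), zscore(palm_features)])
```

The fused vector would then be fed to a single classifier, whereas score-level fusion keeps one classifier per modality and combines their outputs.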


Author(s):  
MARYAM ESKANDARI ◽  
ÖNSEN TOYGAR ◽  
HASAN DEMIREL

In this paper, a new approach based on score level fusion is presented to obtain a robust recognition system by concatenating face and iris scores of several standard classifiers. The proposed method concatenates face and iris match scores instead of concatenating features as in feature-level fusion. The features from face and iris are extracted using local and global feature extraction methods such as PCA, subspace LDA, spPCA, mPCA and LBP. Transformation-based score fusion and classifier-based score fusion are then involved in the process to obtain, concatenate and classify the matching scores. Different fusion techniques at matching score level, feature level and decision level are compared with the proposed method to emphasize improvement and effectiveness of the proposed method. In order to validate the proposed scheme, a combined database is formed using ORL and BANCA face databases together with CASIA and UBIRIS iris databases. The results based on recognition performance and ROC analysis demonstrate that the proposed score level fusion achieves a significant improvement over unimodal methods and other multimodal face-iris fusion methods.
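The transformation-based score fusion mentioned above, i.e. normalising each modality's match scores and combining them, can be sketched with min-max normalisation and the sum rule (a minimal example; these are common choices, not necessarily the exact ones used in the paper):

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw match scores into [0, 1] so modalities are comparable
    (the transformation step of transformation-based score fusion)."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def sum_rule_fusion(face_scores, iris_scores):
    """Normalise each modality's scores against the gallery,
    then fuse them with the sum rule."""
    return min_max_normalize(face_scores) + min_max_normalize(iris_scores)
```

The identity with the highest fused score is reported; a classifier-based variant would instead feed the concatenated score vectors to a trained classifier.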


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5523 ◽  
Author(s):  
Nada Alay ◽  
Heyam H. Al-Baity

With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely used in our everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, which is based on a deep learning algorithm for recognizing humans using the biometric modalities of iris, face, and finger vein. The structure of the system is based on convolutional neural networks (CNNs), which extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for iris, one for face, and one for finger vein. To build each CNN model, the well-known pretrained model VGG-16 was used, the Adam optimization method was applied, and categorical cross-entropy was used as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. For fusing the CNN models, different fusion approaches were employed to explore their influence on recognition performance; both feature-level and score-level fusion were applied. The performance of the proposed system was empirically evaluated by conducting several experiments on the SDUMLA-HMT dataset, which is a multimodal biometrics dataset. The obtained results demonstrated that using three biometric traits in biometric identification systems yields better results than using two or one. The results also showed that our approach comfortably outperformed other state-of-the-art methods, achieving an accuracy of 99.39% with feature-level fusion and an accuracy of 100% with different methods of score-level fusion.
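Score-level fusion of the three CNN branches can be sketched as combining their softmax outputs with a sum or product rule (an illustrative example; the actual fusion rules evaluated in the paper may differ):

```python
import numpy as np

def fuse_softmax(prob_list, rule="sum"):
    """Score-level fusion of per-modality softmax probability vectors."""
    p = np.stack([np.asarray(v, dtype=float) for v in prob_list])
    if rule == "sum":
        fused = p.mean(axis=0)       # average the class posteriors
    elif rule == "product":
        fused = p.prod(axis=0)       # multiply the class posteriors
    else:
        raise ValueError(f"unknown rule: {rule}")
    return fused / fused.sum()

def identify(prob_list, rule="sum"):
    """Return the class index with the highest fused score."""
    return int(np.argmax(fuse_softmax(prob_list, rule)))
```

Note that the two rules can disagree: the product rule penalises any modality that assigns a class a low probability, while the sum rule is more forgiving of a single weak modality.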


Author(s):  
S. M. PRASAD ◽  
V. K. GOVINDAN ◽  
P. S. SATHIDEVI

This paper proposes a wavelet-based palmprint verification approach which is efficient in terms of accuracy and speed. The prominent wavelet domain features such as subband energy distribution, histogram, and co-occurrence features fail to characterize the palmprints sufficiently due to coefficient perturbations caused by translational and/or rotational variations in palmprints. In this work, firstly, a novel approach, termed as adaptive tessellation of subbands, is proposed to effectively capture the spatially localized energy distribution based on the spread of principal lines. Secondly, a set of discriminating features, termed as high scale codes (HSCODEs), and a translation and rotation invariant matching technique are proposed. HSCODEs effectively characterize the palmprints by capturing the spatial patterns corresponding to the low frequency components. Energy features and selected HSCODEs are fused at score and decision levels. Particularly, score level fusion enhances the verification accuracy significantly. Effectiveness of the proposed approach is examined on PolyU-ONLINE-Palmprint-II (PolyU) database. The experimental results show an overall equal error rate (EER) of 0.22%, which is better than the existing wavelet-based palmprint recognition systems and comparable to the computationally complex state-of-the-art approaches. The speed of the approach is high as all the features are extracted from the same wavelet decomposition of palmprint. Further, it is shown that the proposed feature extraction technique can be extended for speech signals as well and such features can be fused with palmprint features for accuracy enhancement.
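The subband energy distribution feature mentioned above can be illustrated with a one-level Haar decomposition (a simplified sketch; the paper's wavelet basis, decomposition depth and adaptive tessellation of subbands are not reproduced here):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = np.asarray(img, dtype=float)
    # pairwise averages/differences along columns, then rows
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo_r[0::2] + lo_r[1::2]) / 2
    lh = (lo_r[0::2] - lo_r[1::2]) / 2
    hl = (hi_r[0::2] + hi_r[1::2]) / 2
    hh = (hi_r[0::2] - hi_r[1::2]) / 2
    return ll, lh, hl, hh

def subband_energies(img):
    """Normalised energy of each subband: a coarse texture descriptor
    of how signal energy is distributed across frequency bands."""
    e = np.array([np.sum(b ** 2) for b in haar_dwt2(img)])
    return e / e.sum()
```

A smooth palmprint region concentrates its energy in the LL band, while principal lines and wrinkles push energy into the detail bands, which is what makes the energy distribution discriminative.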


Author(s):  
Surinder Kaur ◽  
Gopal Chaudhary ◽  
Javalkar Dinesh Kumar

Nowadays, biometric systems are prevalent for personal recognition. However, due to the COVID-19 pandemic, it has become difficult to use touch-based biometric systems. To encourage touchless biometrics, a less constrained multimodal personal identification system using palmprint and dorsal hand vein is presented. A touchless hand-based recognition system is more user-friendly and avoids spreading the coronavirus. A method using Convolutional Neural Networks (CNNs) to extract discriminative features from the data samples is proposed. A pretrained PCANet model is used in the experiments to show the performance of the system under the fusion scheme. Unlike most other approaches, this method does not require keeping the palm in a specific position or at a certain distance. Different patches of the ROI are used at two different layers of the CNN. Palmprint and dorsal hand vein features are fused for final matching. Both feature-level and score-level fusion methods are compared. Results show accuracies of up to 98.55% and 98.86% and equal error rates (EER) as low as 1.22% and 0.93% for score-level and feature-level fusion, respectively. Our method gives more accurate results in a less constrained environment.
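Feeding different ROI patches to the network, as described above, starts from splitting the ROI into a grid of sub-images; a minimal sketch (the grid size is an assumption for illustration):

```python
import numpy as np

def extract_patches(roi, rows=2, cols=2):
    """Split an ROI image into a rows x cols grid of non-overlapping
    patches, e.g. to feed as separate inputs at different CNN layers."""
    h, w = roi.shape[:2]
    ph, pw = h // rows, w // cols
    return [roi[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(rows) for c in range(cols)]
```

Per-patch features can then be matched independently and their scores fused, or concatenated for feature-level fusion, matching the two strategies the paper compares.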


2019 ◽  
Vol 1 (3) ◽  
pp. 1-16
Author(s):  
Musab T. Al-Kaltakchi ◽  
Raid R. Omar ◽  
Hikmat N. Abdullah ◽  
Tingting Han ◽  
Jonathon A. Chambers

Finger Texture (FT) is one of the most recently introduced attractive biometric characteristics. It refers to the finger skin area between the fingerprint and the palm print (just after including the lower knuckle). Different specifications of the FT can be obtained by employing images captured under multiple light spectra. This inspired the idea of combining FT features acquired under two different spectral lightings in order to attain high personal recognition performance. Four types of fusion are listed and explained here: Sensor Level Fusion (SLF), Feature Level Fusion (FLF), Score Level Fusion (ScLF) and Decision Level Fusion (DLF). Each fusion method is employed and examined for an FT verification system. FT images were collected from the Multiple Spectrum CASIA (MSCASIA) database. Two types of spectral light were exploited: a wavelength of 460 nm, which represents Blue (BLU) light, and White (WHT) light. Supporting comparisons were performed, including with the state of the art. The best recognition performance was recorded for the FLF based on the concatenation rule, which improved the Equal Error Rate (EER) from 5% for BLU and 7% for WHT to 2%.
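The Equal Error Rate used above to compare the fusion methods can be estimated from genuine and impostor score sets as follows (a simple threshold sweep; finer interpolation between thresholds is often used in practice):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the operating point where the false accept
    rate (FAR) equals the false reject rate (FRR). We sweep every
    observed score as a threshold and take the smallest max(FAR, FRR)."""
    gen = np.asarray(genuine, dtype=float)
    imp = np.asarray(impostor, dtype=float)
    best = 1.0
    for t in np.sort(np.concatenate([gen, imp])):
        far = np.mean(imp >= t)   # impostors wrongly accepted
        frr = np.mean(gen < t)    # genuine users wrongly rejected
        best = min(best, max(far, frr))
    return best
```

Perfectly separable score distributions give an EER of 0; overlapping distributions, as in the 5% and 7% single-spectrum results above, give a nonzero EER that fusion aims to reduce.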


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yanping Zhang ◽  
Jing Peng ◽  
Xiaohui Yuan ◽  
Lisi Zhang ◽  
Dongzi Zhu ◽  
...  

Recognizing plant cultivars reliably and efficiently can benefit plant breeders in terms of property rights protection and innovation of germplasm resources. Although leaf image-based methods have been widely adopted in plant species identification, they seldom have been applied in cultivar identification due to the high similarity of leaves among cultivars. Here, we propose an automatic leaf image-based cultivar identification pipeline called MFCIS (Multi-feature Combined Cultivar Identification System), which combines multiple leaf morphological features collected by persistent homology and a convolutional neural network (CNN). Persistent homology, a multiscale and robust method, was employed to extract the topological signatures of leaf shape, texture, and venation details. A CNN-based algorithm, the Xception network, was fine-tuned for extracting high-level leaf image features. For fruit species, we benchmarked the MFCIS pipeline on a sweet cherry (Prunus avium L.) leaf dataset with >5000 leaf images from 88 varieties or unreleased selections and achieved a mean accuracy of 83.52%. For annual crop species, we applied the MFCIS pipeline to a soybean (Glycine max L. Merr.) leaf dataset with 5000 leaf images of 100 cultivars or elite breeding lines collected at five growth periods. The identification models for each growth period were trained independently, and their results were combined using a score-level fusion strategy. The classification accuracy after score-level fusion was 91.4%, which is much higher than the accuracy when utilizing each growth period independently or mixing all growth periods. To facilitate the adoption of the proposed pipelines, we constructed a user-friendly web service, which is freely available at http://www.mfcis.online.
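Combining the independently trained per-growth-period models by score-level fusion can be sketched as a weighted average of their class-probability vectors (an illustrative example; the accuracy-based weighting is an assumption, and the paper may use a different combination rule):

```python
import numpy as np

def fuse_periods(period_probs, period_weights):
    """Score-level fusion across growth periods: weight each period's
    class-probability vector (e.g. by validation accuracy), average,
    and return the winning class index."""
    w = np.asarray(period_weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * np.asarray(p, dtype=float)
                for wi, p in zip(w, period_probs))
    return int(np.argmax(fused))
```

Weighting lets a reliable growth period dominate the decision without discarding the evidence from the others, which is the intuition behind fusing periods rather than mixing all periods into one training set.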

