ODROID XU4 based implementation of decision level fusion approach for matching computer generated sketches

2016 ◽  
Vol 16 ◽  
pp. 217-224 ◽  
Author(s):  
Steven Lawrence Fernandes ◽  
G. Josemin Bala
Author(s):  
Priti Shivaji Sanjekar ◽  
Jayantrao B. Patil

Multimodal biometrics extends unimodal biometrics by integrating information obtained from multiple biometric sources at various fusion levels, i.e. sensor level, feature extraction level, match score level, or decision level. In this article, fingerprint, palmprint, and iris are used for verification of an individual. The wavelet transform is used to extract features from the fingerprint, palmprint, and iris, and PCA is then used for dimensionality reduction. The fusion of traits is employed at three levels: feature level; feature level combined with match score level; and feature level combined with decision level. The main objective of this research is to observe the effect of combined fusion levels on verification of an individual. The performance of the three fusion cases is measured in terms of EER and represented with ROC curves. Experiments performed on 100 different subjects from publicly available databases demonstrate that combining feature level with match score level fusion and feature level with decision level fusion both outperform fusion at the feature level alone.
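The feature-level fusion pipeline described above (per-modality feature vectors concatenated, then reduced with PCA, then compared by a match score) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random arrays stand in for wavelet-derived feature vectors, and the dimensions, component count, and distance-based score are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors for 20 subjects
# (stand-ins for wavelet-transform coefficients of each trait)
n_subjects = 20
fingerprint = rng.normal(size=(n_subjects, 64))
palmprint = rng.normal(size=(n_subjects, 64))
iris = rng.normal(size=(n_subjects, 64))

# Feature-level fusion: concatenate the modality features per subject
fused = np.hstack([fingerprint, palmprint, iris])  # shape (20, 192)

# Dimensionality reduction with PCA, as in the article
pca = PCA(n_components=10)
reduced = pca.fit_transform(fused)  # shape (20, 10)

def match_score(probe, gallery):
    # Illustrative match score: negative Euclidean distance,
    # so a higher score means a closer match
    return -np.linalg.norm(probe - gallery)

# Verification would threshold this score against an enrolled template
score = match_score(reduced[0], reduced[1])
```

Combining fusion levels, as the article does, would then mean feeding such PCA-reduced fused features into a further match-score or decision stage rather than classifying them directly.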


2020 ◽  
Vol 4 (3) ◽  
pp. 46
Author(s):  
Mohammad Faridul Haque Siddiqui ◽  
Ahmad Y. Javaid

The exigency of emotion recognition is pushing the envelope for meticulous strategies of discerning actual emotions through the use of superior multimodal techniques. This work presents a multimodal automatic emotion recognition (AER) framework capable of differentiating between expressed emotions with high accuracy. The contribution involves implementing an ensemble-based approach for AER through the fusion of visible images and infrared (IR) images with speech. The framework is implemented in two layers: the first layer detects emotions using single modalities, while the second layer combines the modalities and classifies emotions. Convolutional Neural Networks (CNNs) are used for feature extraction and classification. A hybrid fusion approach, comprising early (feature-level) and late (decision-level) fusion, was applied to combine the features and the decisions at different stages. The output of the CNN trained on voice samples from the RAVDESS database was combined with the image classifier’s output using decision-level fusion to obtain the final decision. An accuracy of 86.36% and similar recall (0.86), precision (0.88), and f-measure (0.87) scores were obtained. A comparison with contemporary work endorsed the competitiveness of the framework, which is distinguished by attaining this accuracy in wild backgrounds and under light-invariant conditions.
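The decision-level (late) fusion step in the second layer, where per-modality classifier outputs are combined into a final decision, can be sketched as below. This is a simplified illustration under assumed values: the probability vectors stand in for the softmax outputs of the image and speech CNNs, the four-class layout and the equal weights are illustrative, and the sum rule is one common choice for combining class posteriors, not necessarily the paper's exact rule.

```python
import numpy as np

# Hypothetical softmax outputs from two unimodal classifiers
# (stand-ins for the image-branch and speech-branch CNNs),
# over 4 illustrative emotion classes
p_image = np.array([0.10, 0.60, 0.20, 0.10])
p_speech = np.array([0.05, 0.30, 0.55, 0.10])

# Decision-level fusion via a weighted sum rule over class posteriors;
# the equal weights are an assumption, not taken from the paper
w_image, w_speech = 0.5, 0.5
p_fused = w_image * p_image + w_speech * p_speech

# Final decision: the class with the highest fused probability
emotion = int(np.argmax(p_fused))
```

Because the fused vector remains a valid probability distribution when the weights sum to one, the same thresholding or rejection logic used on a single classifier's output carries over unchanged.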


2020 ◽  
Vol 8 (5) ◽  
pp. 2522-2527

In this paper, we design a method for recognition of fingerprint and iris using feature-level fusion and decision-level fusion in a children's multimodal biometric system. Initially, Histogram of Oriented Gradients (HOG), Gabor, and maximum filter response features are extracted from both the fingerprint and iris domains and evaluated for identification accuracy. Fusion of the biometric traits is performed by combining the feature vectors of all possible features. Principal Component Analysis (PCA) is applied to the fused vector to select features. The reduced features are fed into fusion classifiers: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Naive Bayes (NB). The suitable combination of features and fusion classifiers for a children's multimodal biometric system is identified. Experiments conducted on a children's fingerprint and iris database reveal that the fusion combination outperforms the individual modalities. In addition, the proposed model improves on the unimodal biometric system.
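The classification stage described above (PCA on the fused feature vector, then KNN, SVM, and NB as fusion classifiers) can be sketched as follows. This is an illustrative mock-up, not the paper's experiment: the random arrays stand in for fused HOG/Gabor/filter-response vectors of two enrolled children, and the dimensions, class separation, and classifier hyperparameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical fused feature vectors (stand-ins for concatenated
# HOG + Gabor + maximum-filter-response features) for two subjects,
# drawn well-separated so the sketch trains cleanly
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 50)),
               rng.normal(3.0, 1.0, size=(30, 50))])
y = np.array([0] * 30 + [1] * 30)

# PCA-based feature selection followed by each fusion classifier,
# mirroring the KNN / SVM / NB comparison in the paper
accs = []
for clf in (KNeighborsClassifier(n_neighbors=3), SVC(), GaussianNB()):
    model = make_pipeline(PCA(n_components=5), clf)
    model.fit(X, y)
    accs.append(model.score(X, y))  # training accuracy, illustration only
```

In a real evaluation the accuracies would of course be measured on held-out children's data rather than the training set; the pipeline structure is the point of the sketch.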

