Two Feature-Level Fusion Methods with Feature Scaling and Hashing for Multimodal Biometrics

2016 ◽  
Vol 34 (1) ◽  
pp. 91-101 ◽  
Author(s):  
Ren-He Jeng ◽  
Wen-Shiung Chen

Author(s):  
Zibo Meng ◽  
Shizhong Han ◽  
Min Chen ◽  
Yan Tong

Recognizing facial actions is challenging, especially when they are accompanied by speech. Instead of employing information solely from the visual channel, this work aims to exploit information from both the visual and audio channels to recognize speech-related facial action units (AUs). Two feature-level fusion methods are proposed. The first is based on hand-crafted visual features; the second uses visual features learned by a deep convolutional neural network (CNN). For both methods, features are extracted independently from the visual and audio channels and aligned to handle the difference in time scales and the time shift between the two signals. These temporally aligned features are integrated via feature-level fusion for AU recognition. Experimental results on a new audiovisual AU-coded dataset demonstrate that both fusion methods outperform their visual-only counterparts in recognizing speech-related AUs. The improvement is even more pronounced when the facial images are occluded, since occlusion does not affect the audio channel.
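
A minimal sketch of the kind of feature-level fusion with temporal alignment described above, assuming pre-extracted per-frame visual features and audio features sampled at a different rate; the feature dimensions, frame rates, time shift, and logistic-regression classifier are illustrative assumptions rather than the authors' exact pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

def align_audio_to_video(audio_feats, audio_hz, video_hz, n_frames, shift_s=0.0):
    """Resample audio features to the video frame rate, with an optional
    time shift (in seconds) to compensate for lag between the two signals."""
    video_times = np.arange(n_frames) / video_hz + shift_s
    audio_idx = np.clip((video_times * audio_hz).astype(int), 0, len(audio_feats) - 1)
    return audio_feats[audio_idx]

# Hypothetical pre-extracted features: one row per time step.
visual = np.random.randn(300, 128)      # e.g. CNN or hand-crafted features at 30 fps
audio = np.random.randn(1000, 13)       # e.g. MFCCs at 100 Hz
labels = np.random.randint(0, 2, 300)   # per-frame AU present / absent

audio_aligned = align_audio_to_video(audio, audio_hz=100, video_hz=30,
                                      n_frames=len(visual), shift_s=0.05)
fused = np.concatenate([visual, audio_aligned], axis=1)  # feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(fused, labels)

Because fusion happens at the feature level, the audio features still contribute when parts of the face are occluded, which is consistent with the improvement the abstract reports under occlusion.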


In biometric systems, multimodal biometrics provides stronger security than unimodal biometrics. Although multimodal biometrics improves the accuracy and reliability of a system, it requires large memory storage and long execution times because of the high dimensionality of the datasets. Searching is an NP-hard problem in biometrics, which has attracted considerable research attention. Because of its NP-hard nature, exact solutions cannot be found in limited time; researchers therefore use heuristic or randomized search methods such as PSO, GA, ACO and Cuckoo search to obtain optimal or near-optimal solutions. This paper proposes a hybrid approach that combines feature-level fusion in a biometric system with an Ant Colony Optimization (ACO) based feature subset selection method, aiming to improve performance. A median filter and morphological operations are used for pre-processing the finger-vein and fingerprint images, respectively. The confusion matrix, equal error rate, and accuracy are used as evaluation metrics.
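
A minimal sketch of ACO-style feature subset selection over a fused feature matrix, using pheromone-weighted sampling of feature subsets and a k-NN wrapper as the fitness measure; the ant count, subset size, evaporation rate, and evaluation classifier are illustrative assumptions, not the paper's configuration.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def aco_feature_selection(X, y, n_ants=10, n_iters=20, subset_size=20,
                          evaporation=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    pheromone = np.ones(n_features)
    best_subset, best_score = None, -np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Each ant samples a feature subset with probability
            # proportional to the current pheromone levels.
            probs = pheromone / pheromone.sum()
            subset = rng.choice(n_features, size=subset_size, replace=False, p=probs)
            score = cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            pheromone[subset] += score       # reinforce the features this ant used
        pheromone *= (1.0 - evaporation)     # evaporation step
    return best_subset, best_score

# Example on synthetic stand-ins for fused finger-vein + fingerprint features.
X = np.random.randn(200, 64)
y = np.random.randint(0, 2, 200)
subset, acc = aco_feature_selection(X, y)

The wrapper evaluation makes each iteration expensive, which is exactly why such searches are treated as NP-hard and tackled with heuristics rather than exhaustive subset enumeration.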


2020 ◽  
Vol 5 (2) ◽  
pp. 9-15
Author(s):  
Shweta Singh ◽  
Ravi Jaiswal ◽  
Siddharth Srivastava

Computers ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 21
Author(s):  
Mehwish Leghari ◽  
Shahzad Memon ◽  
Lachhman Das Dhomeja ◽  
Akhtar Hussain Jalbani ◽  
Asghar Ali Chandio

Extensive research in the field of multimodal biometrics, together with the advent of modern technology, has driven the adoption of multimodal biometrics in real-life applications. Biometric systems based on a single modality suffer from constraints such as noise, limited universality, intra-class variation, and spoof attacks. Multimodal biometric systems, on the other hand, are gaining greater attention because of their high accuracy, increased reliability and enhanced security. This paper proposes and develops a Convolutional Neural Network (CNN) based model for the feature-level fusion of fingerprint and online signature. Two feature-level fusion schemes are implemented. The first scheme, early fusion, combines the fingerprint and online-signature features before the fully connected layers, while the second scheme, late fusion, combines the features after the fully connected layers. To train and test the proposed model, a new multimodal dataset consisting of 1400 fingerprint samples and 1400 online-signature samples from 280 subjects was collected. To train the model more effectively, the training data was further enlarged using augmentation techniques. The experimental results show an accuracy of 99.10% with the early feature fusion scheme and 98.35% with the late feature fusion scheme.
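
The sketch below contrasts the two schemes in PyTorch (an assumed framework): early fusion concatenates the two branches' convolutional features before the fully connected layers, while late fusion passes each branch through its own fully connected layer first and concatenates afterwards. The layer sizes and the 64x64 grayscale input resolution are illustrative assumptions; only the 280-subject output size comes from the abstract.

import torch
import torch.nn as nn

def conv_branch():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),                       # 32 * 16 * 16 = 8192 features for 64x64 input
    )

class FusionNet(nn.Module):
    def __init__(self, n_subjects=280, mode="early"):
        super().__init__()
        self.mode = mode
        self.fp_branch = conv_branch()      # fingerprint branch
        self.sig_branch = conv_branch()     # online-signature branch
        if mode == "early":
            self.head = nn.Sequential(nn.Linear(2 * 8192, 256), nn.ReLU(),
                                      nn.Linear(256, n_subjects))
        else:                               # late fusion: per-branch FC layers, fused after
            self.fp_fc = nn.Sequential(nn.Linear(8192, 128), nn.ReLU())
            self.sig_fc = nn.Sequential(nn.Linear(8192, 128), nn.ReLU())
            self.head = nn.Linear(2 * 128, n_subjects)

    def forward(self, fingerprint, signature):
        f = self.fp_branch(fingerprint)
        s = self.sig_branch(signature)
        if self.mode == "early":
            return self.head(torch.cat([f, s], dim=1))
        return self.head(torch.cat([self.fp_fc(f), self.sig_fc(s)], dim=1))

# Usage with dummy 64x64 grayscale inputs for both modalities:
model = FusionNet(mode="late")
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))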

