Feature fusion of palmprint and face via tensor analysis and curvelet transform

2012 ◽  
Vol 20 (2) ◽  
Author(s):  
X. Xu ◽  
X. Guan ◽  
D. Zhang ◽  
X. Zhang ◽  
W. Deng ◽  
...  

Abstract: In order to improve the recognition accuracy of unimodal biometric systems and to address the small-sample recognition problem, a multimodal biometric recognition approach based on feature-level fusion and the curve tensor is proposed in this paper. The curve tensor approach is an extension of tensor analysis to the curvelet coefficient space. We use two kinds of biometrics: palmprint recognition and face recognition. All image features are extracted using the curve tensor algorithm, and the normalized features are then combined at the feature level using several fusion strategies. A k-nearest neighbour (KNN) classifier determines the final biometric classification. The experimental results demonstrate that the proposed approach outperforms the unimodal solutions and that the proposed nearly Gaussian fusion (NGF) strategy performs better than the other fusion rules.
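The fusion pipeline the abstract describes, normalizing each modality's features, concatenating them at the feature level, and classifying with KNN, can be sketched in plain Python. The feature vectors and gallery below are illustrative toy data, not the paper's curvelet/tensor features, and plain min-max normalization with concatenation stands in for the NGF strategy, which the abstract does not specify:

```python
import math

def minmax_normalize(v):
    """Scale a feature vector to [0, 1] so modalities are comparable."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]

def fuse(palm_feat, face_feat):
    """Feature-level fusion: normalize each modality, then concatenate."""
    return minmax_normalize(palm_feat) + minmax_normalize(face_feat)

def knn_classify(train, query, k=3):
    """train: list of (fused_vector, label) pairs; query: a fused vector."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy gallery of two subjects (illustrative numbers only).
gallery = [
    (fuse([0.9, 0.1, 0.4], [0.2, 0.8]), "subject_A"),
    (fuse([0.8, 0.2, 0.5], [0.3, 0.7]), "subject_A"),
    (fuse([0.1, 0.9, 0.6], [0.9, 0.1]), "subject_B"),
    (fuse([0.2, 0.8, 0.7], [0.8, 0.2]), "subject_B"),
]
probe = fuse([0.85, 0.15, 0.45], [0.25, 0.75])
print(knn_classify(gallery, probe, k=3))  # "subject_A"
```

Normalizing before concatenation matters here: without it, the modality with the larger numeric range would dominate the Euclidean distances inside KNN.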

2021 ◽  
Vol 3 (2) ◽  
pp. 131-143
Author(s):  
Vijayakumar T.

Biometric identification technology is widely used in everyday life as a result of the rising demand for information security and safety regulations throughout the world. In this regard, multimodal biometric recognition (MBR) has gained significant research attention for its ability to overcome several important limitations of unimodal biometric systems. This research article therefore utilizes multiple traits (iris, face, finger vein, and palm print) to identify a person with the highest possible accuracy. Using multiple traits from the same person improves the accuracy of the biometric system. In many developed countries, palm print features are employed to identify an individual as accurately and quickly as possible. The proposed system is well suited to users who dislike answering many security-authentication questions, and it can reduce such extra questionnaires by achieving higher accuracy than other existing multimodal biometric systems. Finally, the results are computed and tabulated in this research article.


Recent research in texture-based ear and palm print recognition shows that both ear identification and palm print identification are robust against signal corruption and encoding artifacts. Building on these findings, this work compares texture descriptors for ear and palm print recognition and investigates how texture descriptors can be supplemented with depth data. The proposed multimodal ear and palm print biometric recognition method is based on feature-level fusion. Texture features are extracted from complete contour images of ears and palm prints captured as both visible-light and depth records. This paper compares the recognition performance of selected strategies for describing texture structure: the Local Binary Pattern (LBP), the Weber Local Descriptor (WLD), the Histogram of Oriented Gradients (HOG), and Binarised Statistical Image Features (BSIF). Extensive experiments on the IIT Delhi-2 ear and IIT Delhi palm print databases confirm that the proposed multimodal biometric framework improves recognition rates compared with single-modality (unimodal) biometrics, with the Histogram of Oriented Gradients (HOG) achieving a recognition rate of 124%
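Of the descriptors compared, the Local Binary Pattern is the simplest to illustrate: each pixel is encoded by thresholding its eight neighbours against the centre value and packing the results into a byte, and the codes are then histogrammed. A minimal sketch on a toy grayscale grid follows; the patch values and the 256-bin histogram size are illustrative, not the paper's setup:

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code for pixel (r, c), clockwise from top-left."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Toy 4x4 grayscale patch (values are illustrative).
patch = [
    [10, 20, 30, 40],
    [15, 25, 35, 45],
    [12, 22, 32, 42],
    [11, 21, 31, 41],
]
h = lbp_histogram(patch)
print(sum(h))  # 4: one code per interior pixel of a 4x4 patch
```

In a full system one histogram per image block would be concatenated into the feature vector, and feature-level fusion would then concatenate the ear and palm print vectors.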


2019 ◽  
Vol 28 (07) ◽  
pp. 1950107 ◽  
Author(s):  
Yassir Aberni ◽  
Larbi Boubchir ◽  
Boubaker Daachi

Multispectral palmprint recognition has been investigated for many problems and applications over the last decade and has become one of the most well-known biometric recognition systems. Its success is due to the rich features that can be extracted and exploited from multispectral palmprint images captured within specific wavelength ranges across the electromagnetic spectrum. This paper provides an overview of recent state-of-the-art multispectral palmprint approaches for person recognition. The surveyed approaches are discussed by describing, in particular, their feature extraction, feature fusion, matching, and decision algorithms. Finally, a comparative study evaluating their performance in both verification and identification modes is presented.
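The two evaluation modes the survey distinguishes can be made concrete: verification compares a probe against a single claimed template and thresholds the match distance (1:1), while identification searches the whole gallery for the closest identity (1:N). A minimal sketch with Euclidean matching follows; the feature vectors, names, and threshold value are illustrative assumptions, not from any surveyed system:

```python
import math

def verify(probe, claimed_template, threshold=0.5):
    """1:1 verification: accept if the match distance is below a threshold."""
    return math.dist(probe, claimed_template) < threshold

def identify(probe, gallery):
    """1:N identification: return the identity of the closest template."""
    return min(gallery, key=lambda entry: math.dist(probe, entry[0]))[1]

# Toy gallery of (template, identity) pairs.
gallery = [([0.1, 0.9], "alice"), ([0.8, 0.2], "bob")]
probe = [0.15, 0.85]
print(verify(probe, [0.1, 0.9]))  # True: probe is close to the claimed template
print(identify(probe, gallery))   # "alice"
```

The threshold in verification is what trades false accepts against false rejects, which is why verification and identification performance are reported separately.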


2021 ◽  
Vol 11 (3) ◽  
pp. 1064
Author(s):  
Jenq-Haur Wang ◽  
Yen-Tsang Wu ◽  
Long Wang

In social networks, users can easily share information and express their opinions. Given the huge amount of data posted by many users, it is difficult to search for relevant information. In addition to individual posts, it would be useful to recommend groups of people with similar interests. Past studies on user preference learning focused on single-modal features such as review contents or demographic information of users. However, such information is usually not easy to obtain in most social media without explicit user feedback. In this paper, we propose a multimodal feature fusion approach to implicit user preference prediction which combines text and image features from user posts for recommending similar users in social media. First, we use a convolutional neural network (CNN) and a TextCNN model to extract image and text features, respectively. Then, these features are combined using early and late fusion methods as a representation of user preferences. Lastly, a list of users with the most similar preferences is recommended. Experimental results on real-world Instagram data show that the best performance is achieved by applying late fusion to the individual classification results for images and texts, with a best average top-k accuracy of 0.491. This validates the effectiveness of fusing multimodal features with deep learning methods to represent social user preferences. Further investigation is needed to verify the performance on different types of social media.
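The early/late fusion distinction in the abstract can be sketched without the CNN parts: early fusion concatenates the modality feature vectors before a single classifier, while late fusion combines the per-modality classifier scores. The feature vectors and per-class scores below are illustrative stand-ins for the CNN and TextCNN outputs:

```python
def early_fusion(image_feat, text_feat):
    """Early fusion: concatenate modality features into one joint vector."""
    return image_feat + text_feat

def late_fusion(image_scores, text_scores):
    """Late fusion: average per-class scores from the two classifiers."""
    return [(i + t) / 2 for i, t in zip(image_scores, text_scores)]

# Illustrative per-class scores from separate image and text classifiers.
image_scores = [0.7, 0.2, 0.1]
text_scores = [0.4, 0.5, 0.1]
fused = late_fusion(image_scores, text_scores)
best_class = max(range(len(fused)), key=fused.__getitem__)
print([round(x, 2) for x in fused])  # [0.55, 0.35, 0.1]
print(best_class)                    # 0
```

Note how late fusion lets the image classifier override the text classifier here: the text model preferred class 1, but the averaged scores pick class 0, which mirrors why the paper evaluates both strategies rather than assuming one.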


Author(s):  
Young Ho Park ◽  
Dat Nguyen Tien ◽  
Hyeon Chang Lee ◽  
Kang Ryoung Park ◽  
Eui Chul Lee ◽  
...  

Author(s):  
Siyuan Lu ◽  
Di Wu ◽  
Zheng Zhang ◽  
Shui-Hua Wang

The new coronavirus COVID-19 has been spreading all over the world in the last six months, and the death toll is still rising. Accurate diagnosis of COVID-19 is an urgent task for stopping the spread of the virus. In this paper, we propose to leverage image feature fusion for the diagnosis of COVID-19 in lung-window computed tomography (CT). Initially, ResNet-18 and ResNet-50 were selected as the backbone deep networks to generate corresponding image representations from the CT images. Second, the representative information extracted from the two networks was fused by discriminant correlation analysis to obtain refined image features. Third, three randomized neural networks (RNNs), an extreme learning machine, a Schmidt neural network, and a random vector functional-link net, were trained on the refined features, and the predictions of the three RNNs were ensembled for more robust classification performance. Experimental results based on five-fold cross-validation suggest that our method outperforms state-of-the-art algorithms in the diagnosis of COVID-19.
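The final ensembling step, combining the predictions of the three randomized networks, can be sketched independently of the networks themselves. Here three illustrative per-class score vectors stand in for the ELM, Schmidt network, and RVFL outputs, and soft voting (averaging scores, then taking the argmax) stands in for the paper's ensembling rule, which the abstract does not detail:

```python
def soft_vote(score_vectors):
    """Average per-class scores from several models; return (argmax, averages)."""
    n = len(score_vectors)
    n_classes = len(score_vectors[0])
    avg = [sum(sv[c] for sv in score_vectors) / n for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Illustrative scores for classes [COVID, non-COVID] from three models.
elm_scores = [0.8, 0.2]
schmidt_scores = [0.6, 0.4]
rvfl_scores = [0.7, 0.3]
pred, avg = soft_vote([elm_scores, schmidt_scores, rvfl_scores])
print(pred)  # 0: the ensemble predicts the first class
```

Averaging smooths out a single model's miscalibrated score, which is the usual motivation for ensembling several cheap randomized networks instead of trusting one.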


2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Chao Tang ◽  
Huosheng Hu ◽  
Wenjian Wang ◽  
Wei Li ◽  
Hua Peng ◽  
...  

The representation and selection of action features directly affect the performance of human action recognition methods. A single feature is often affected by human appearance, the environment, camera settings, and other factors. To address the problem that existing multimodal feature fusion methods cannot effectively measure the contribution of different features, this paper proposes a human action recognition method based on RGB-D image features, which makes full use of the multimodal information provided by RGB-D sensors to extract effective human action features. Three human action features carrying different modal information are proposed: the RGB-HOG feature based on RGB image information, which has good geometric scale invariance; the D-STIP feature based on the depth image, which preserves the dynamic characteristics of human motion and has local invariance; and the S-JRPF feature based on skeleton information, which describes the spatial structure of motion well. In addition, multiple K-nearest neighbour classifiers with good generalization ability are fused at the decision level for classification. The experimental results show that the algorithm achieves good recognition results on the public G3D and CAD60 datasets.
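The decision-level step, several K-nearest-neighbour classifiers (one per feature modality) voting on the final label, can be sketched as a simple majority vote. The per-modality predictions and action labels below are illustrative placeholders for the RGB-HOG, D-STIP, and S-JRPF classifiers' outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-modality classifier decisions by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Illustrative decisions from the three per-modality KNN classifiers.
votes = {"rgb_hog": "wave", "d_stip": "wave", "s_jrpf": "punch"}
print(majority_vote(list(votes.values())))  # "wave"
```

With three classifiers, a majority vote tolerates one modality being fooled (here the skeleton feature disagrees), which is the practical payoff of fusing at the decision level rather than trusting a single feature.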


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1999 ◽  
Author(s):  
Donghang Yu ◽  
Qing Xu ◽  
Haitao Guo ◽  
Chuan Zhao ◽  
Yuzhun Lin ◽  
...  

Classifying remote sensing images is vital for interpreting image content. Present remote sensing image scene classification methods using convolutional neural networks have drawbacks, including excessive parameters and heavy computation costs. More efficient and lightweight CNNs have fewer parameters and computations, but their classification performance is generally weaker. We propose a more efficient and lightweight convolutional neural network method to improve classification accuracy with a small training dataset. Inspired by fine-grained visual recognition, this study introduces a bilinear convolutional neural network model for scene classification. First, the lightweight convolutional neural network MobileNetv2 is used to extract deep and abstract image features. Each feature is then transformed into two features with two different convolutional layers. The transformed features are combined by a Hadamard product operation to obtain an enhanced bilinear feature. Finally, the bilinear feature, after pooling and normalization, is used for classification. Experiments are performed on three widely used datasets: UC Merced, AID, and NWPU-RESISC45. Compared with other state-of-the-art methods, the proposed method has fewer parameters and computations while achieving higher accuracy. Including feature fusion with bilinear pooling greatly improves performance and accuracy for remote scene classification, and the approach can be applied to other remote sensing image classification tasks.
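The bilinear-feature step can be sketched at the vector level: the backbone feature is projected twice, the two projections are combined element-wise (Hadamard product), and the result is passed through the signed square root and L2 normalization commonly used with bilinear pooling. The projection matrices and feature values below are illustrative, not MobileNetv2 weights, and plain linear projections stand in for the paper's convolutional layers:

```python
import math

def project(feat, weights):
    """Linear projection standing in for a 1x1 convolutional layer."""
    return [sum(w * x for w, x in zip(row, feat)) for row in weights]

def bilinear_feature(feat, w1, w2):
    """Hadamard product of two projections, then signed sqrt + L2 norm."""
    a, b = project(feat, w1), project(feat, w2)
    had = [x * y for x, y in zip(a, b)]
    signed = [math.copysign(math.sqrt(abs(x)), x) for x in had]
    norm = math.sqrt(sum(x * x for x in signed)) or 1.0
    return [x / norm for x in signed]

# Toy backbone feature and two small projection matrices.
feat = [0.5, -0.2, 0.8]
w1 = [[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]]
w2 = [[0.5, 0.5, 0.0], [-0.5, 0.0, 1.0]]
bf = bilinear_feature(feat, w1, w2)
print(len(bf))  # 2: one value per projected channel
```

Using the Hadamard product of two projections of the same backbone feature is what keeps the model lightweight: it captures second-order feature interactions without forming the full outer-product matrix of classic bilinear CNNs.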

