Score Level Fusion
Recently Published Documents


TOTAL DOCUMENTS: 238 (five years: 63)
H-INDEX: 16 (five years: 3)

2021, Vol. 30 (1), pp. 161-183
Author(s): Annie Anak Joseph, Alex Ng Ho Lian, Kuryati Kipli, Kho Lee Chin, Dayang Azra Awang Mat, et al.

Person recognition has recently received significant attention due to its broad applications in security systems. However, most person recognition systems are based on unimodal biometrics such as face recognition or voice recognition. Unimodal biometric systems have limitations, particularly when the data contain outliers or corrupted samples. Multimodal biometric systems attract researchers' attention because of their advantages, such as better security than unimodal systems and outstanding recognition efficiency. Therefore, a multimodal biometric system based on face and fingerprint recognition is developed in this paper. First, the multimodal person recognition system is built on a Convolutional Neural Network (CNN) and the ORB (Oriented FAST and Rotated BRIEF) algorithm. Next, the two feature matchers are combined using match-score-level fusion based on a weighted sum rule, and verification succeeds if the fused score is greater than a pre-set threshold. The algorithm is extensively evaluated against state-of-the-art approaches on UCI Machine Learning Repository datasets, including one real dataset, and achieves promising person recognition results.
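
As a concrete illustration of the weighted sum-rule fusion described in this abstract, the following Python sketch fuses a face match score and a fingerprint match score and applies a verification threshold. The weights, the min-max normalization helper, and the threshold value are illustrative assumptions, not the paper's actual parameters.

```python
# Hedged sketch of weighted sum-rule score-level fusion for verification.
# Weights, normalization ranges, and the threshold are illustrative assumptions.

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its expected score range."""
    return (score - lo) / (hi - lo)

def fuse_scores(face_score, fingerprint_score, w_face=0.6, w_finger=0.4):
    """Weighted sum rule over two normalized match scores."""
    return w_face * face_score + w_finger * fingerprint_score

def verify(face_score, fingerprint_score, threshold=0.5):
    """Accept the claimed identity if the fused score exceeds the threshold."""
    return fuse_scores(face_score, fingerprint_score) > threshold

# Example: a strong CNN-based face score and a moderate ORB-based fingerprint score.
print(verify(face_score=0.82, fingerprint_score=0.55))  # True with these example weights
```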


Author(s): Md Kamal Uddin, Amran Bhuiyan, Mahmudul Hasan, et al.

Person re-identification (Re-id) is an important component of video surveillance systems; it aims to recognize an individual across the multiple disjoint sensors of a camera network. Despite recent advances in RGB camera-based person re-identification under normal lighting conditions, Re-id researchers have largely not exploited the additional information provided by modern RGB-D sensors (e.g., depth and skeleton information). When traditional RGB cameras fail to capture video under poor illumination, this RGB-D information can help overcome such constraints. This work proposes a person re-identification method that uses depth images and skeleton joint points as additional information alongside RGB appearance cues. We combine 4-channel RGB-D image features with skeleton information using a score-level fusion strategy in dissimilarity space to increase re-identification accuracy. Moreover, the proposed method mitigates the illumination problem because it uses illumination-invariant depth images and skeleton information. Rigorous experiments on two publicly available RGBD-ID re-identification datasets show that combining the features of 4-channel RGB-D images with skeleton information boosts rank-1 recognition accuracy.
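
A minimal sketch of score-level fusion in dissimilarity space, along the lines described above: per-gallery distances from an RGB-D appearance branch and a skeleton branch are normalized and combined, and the gallery is re-ranked by the fused distance. The z-score normalization and equal weights are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def z_normalize(dists):
    """Z-score normalize a vector of dissimilarities so the two branches are comparable."""
    d = np.asarray(dists, dtype=float)
    return (d - d.mean()) / (d.std() + 1e-12)

def fuse_and_rank(rgbd_dists, skeleton_dists, w_rgbd=0.5, w_skel=0.5):
    """Fuse per-gallery dissimilarities and return gallery indices, best match first."""
    fused = w_rgbd * z_normalize(rgbd_dists) + w_skel * z_normalize(skeleton_dists)
    return np.argsort(fused)  # smaller fused dissimilarity = better match

# Example probe compared against a gallery of three identities.
print(fuse_and_rank([0.9, 0.3, 0.7], [1.2, 0.4, 1.0]))  # gallery index 1 ranks first
```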


2021, pp. 767-781
Author(s): S. Deepa, A. Bhagyalakshmi, V. Vijaya Chamundeeswari, S. Godfrey Winster

Mathematics, 2021, Vol. 9 (16), pp. 1934
Author(s): Ja Hyung Koo, Se Woon Cho, Na Rae Baek, Kang Ryoung Park

Human recognition in indoor environments occurs both during the day and at night. During the day, recognition performance degrades owing to the blur introduced when a camera captures a person's image. At night, it is difficult to obtain clear images of a person without light, and the input images are very noisy owing to the properties of camera sensors in low-illumination environments. Past studies have addressed face recognition in low-illumination environments; however, there is a lack of research on face- and body-based human recognition in very low illumination environments. To solve these problems, this study proposes a modified enlighten generative adversarial network (modified EnlightenGAN) in which a very low illumination image is converted to a normal illumination image, and the matching scores of deep convolutional neural network (CNN) features of the face and body in the converted image are combined by score-level fusion for recognition. The two databases used in this study are the Dongguk face and body database version 3 (DFB-DB3) and the ChokePoint open dataset. The experiments conducted on the two databases show that the verification accuracy (equal error rate, EER) and identification accuracy (rank-1 genuine acceptance rate, GAR) of the proposed method were 7.291% and 92.67% for DFB-DB3 and 10.59% and 87.78% for the ChokePoint dataset, respectively. Accordingly, the proposed method outperforms previous methods.
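
The score-level fusion step described above can be sketched as a weighted combination of face and body matching scores computed from deep CNN features. Cosine similarity as the matcher and equal weights are illustrative assumptions; the paper's actual matcher and weighting may differ.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two deep feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_match_score(face_probe, face_gallery, body_probe, body_gallery,
                      w_face=0.5, w_body=0.5):
    """Score-level fusion of face and body matching scores (weights are assumptions)."""
    return (w_face * cosine_similarity(face_probe, face_gallery)
            + w_body * cosine_similarity(body_probe, body_gallery))

# Example with short toy vectors standing in for CNN embeddings.
score = fused_match_score([0.1, 0.9, 0.2], [0.2, 0.8, 0.1],
                          [0.5, 0.5, 0.3], [0.4, 0.6, 0.2])
print(round(score, 3))
```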


2021, Vol. 8 (1)
Author(s): Yanping Zhang, Jing Peng, Xiaohui Yuan, Lisi Zhang, Dongzi Zhu, et al.

Recognizing plant cultivars reliably and efficiently can benefit plant breeders in terms of property rights protection and innovation of germplasm resources. Although leaf image-based methods have been widely adopted in plant species identification, they have seldom been applied in cultivar identification due to the high similarity of leaves among cultivars. Here, we propose an automatic leaf image-based cultivar identification pipeline called MFCIS (Multi-feature Combined Cultivar Identification System), which combines multiple leaf morphological features collected by persistent homology and a convolutional neural network (CNN). Persistent homology, a multiscale and robust method, was employed to extract the topological signatures of leaf shape, texture, and venation details. A CNN-based algorithm, the Xception network, was fine-tuned for extracting high-level leaf image features. For fruit species, we benchmarked the MFCIS pipeline on a sweet cherry (Prunus avium L.) leaf dataset with >5000 leaf images from 88 varieties or unreleased selections and achieved a mean accuracy of 83.52%. For annual crop species, we applied the MFCIS pipeline to a soybean (Glycine max L. Merr.) leaf dataset with 5000 leaf images of 100 cultivars or elite breeding lines collected at five growth periods. The identification models for each growth period were trained independently, and their results were combined using a score-level fusion strategy. The classification accuracy after score-level fusion was 91.4%, which is much higher than the accuracy when utilizing each growth period independently or mixing all growth periods. To facilitate the adoption of the proposed pipelines, we constructed a user-friendly web service, which is freely available at http://www.mfcis.online.
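
A minimal sketch of the score-level fusion across growth periods described above: each period's model produces a per-cultivar score vector for the same sample, and the vectors are averaged before the final decision. Equal period weights and softmax-style scores are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def fuse_period_scores(period_scores):
    """period_scores: list of per-cultivar score vectors, one per growth period."""
    fused = np.mean(np.stack(period_scores, axis=0), axis=0)
    return int(np.argmax(fused))  # index of the predicted cultivar

# Three hypothetical growth periods scoring four cultivars for one leaf sample.
prediction = fuse_period_scores([
    np.array([0.10, 0.60, 0.20, 0.10]),
    np.array([0.05, 0.70, 0.15, 0.10]),
    np.array([0.20, 0.40, 0.30, 0.10]),
])
print(prediction)  # cultivar index 1
```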


Sensors, 2021, Vol. 21 (14), pp. 4896
Author(s): Lian Wu, Yong Xu, Zhongwei Cui, Yu Zuo, Shuping Zhao, et al.

Palmprint recognition has received tremendous research interest due to its outstanding user-friendliness, such as its non-invasive nature and good hygiene properties. Most recent palmprint recognition studies, such as deep-learning methods, learn discriminative features from palmprint images and usually require a large number of labeled samples to achieve reasonably good recognition performance. However, palmprint images are often limited because it is relatively difficult to collect enough samples, making most existing deep-learning-based methods ineffective. In this paper, we propose a heuristic palmprint recognition method that extracts triple types of palmprint features without requiring any training samples. We first extract the most important inherent features of a palmprint, including texture, gradient, and direction features, and encode them into triple-type feature codes. Then, we use the block-wise histograms of the triple-type feature codes to form triple feature descriptors for palmprint representation. Finally, we employ weighted matching-score-level fusion of the triple-type feature descriptors to calculate the similarity between two compared palmprint images. Extensive experimental results on three widely used palmprint databases clearly show the promising effectiveness of the proposed method.
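
The weighted matching-score fusion over the triple-type descriptors could look like the following sketch, where each descriptor pair (texture, gradient, direction) is compared with a block-wise histogram distance and the three distances are combined with weights. The chi-square distance and the weight values are assumptions for illustration, not the paper's exact matcher.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two block-wise histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def fused_palmprint_distance(desc_a, desc_b, weights=(0.4, 0.3, 0.3)):
    """desc_a, desc_b: (texture_hist, gradient_hist, direction_hist) tuples."""
    return sum(w * chi_square_distance(a, b)
               for w, a, b in zip(weights, desc_a, desc_b))

# Toy example: three short histograms per palmprint image; smaller = more similar.
palm_a = ([4, 1, 3], [2, 2, 2], [1, 5, 0])
palm_b = ([3, 2, 3], [2, 1, 3], [1, 4, 1])
print(round(fused_palmprint_distance(palm_a, palm_b), 3))
```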

