MEMF: Multi-level-attention Embedding and Multi-layer-feature Fusion Model for Person Re-identification

2021 ◽  
pp. 107937
Author(s):  
Jia Sun ◽  
Yanfeng Li ◽  
Houjin Chen ◽  
Bin Zhang ◽  
Jinlei Zhu
2019 ◽  
Vol 17 (1) ◽  
pp. 73-81 ◽  
Author(s):  
Shiqin Wang ◽  
Xin Xu ◽  
Lei Liu ◽  
Jing Tian

Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 111
Author(s):  
Shaojun Wu ◽  
Ling Gao

In person re-identification, extracting image features is a key step in retrieving pedestrian images. Most current methods extract only global features or only local features, so inconspicuous details are easily ignored during feature learning, which is neither efficient nor robust in scenarios with large appearance differences. In this paper, we propose a Multi-level Feature Fusion model that combines both global and local image features through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with a Part-based Multi-level Net to fuse low-to-high-level local features of pedestrian images, while Global-Local Branches extract the local and global features at the highest level. Experiments demonstrate that our deep learning model based on multi-level feature fusion works well in person re-identification, outperforming the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, exceeding existing works by a large margin (more than 6%).
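The descriptor construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors are random stand-ins for CNN branch outputs, and all dimensions and the per-branch L2 normalization are assumptions.

```python
import numpy as np

def fuse_descriptor(global_feat, local_feats):
    """Concatenate one global feature with part-based local features
    taken from several network depths into a single descriptor."""
    parts = [global_feat] + list(local_feats)
    # L2-normalize each branch so no single feature dominates the distance
    normed = [f / (np.linalg.norm(f) + 1e-12) for f in parts]
    return np.concatenate(normed)

# Toy stand-ins for branch outputs (dimensions are illustrative assumptions)
global_feat = np.random.rand(256)                      # highest-level global branch
local_feats = [np.random.rand(128) for _ in range(3)]  # low-to-high-level local parts
descriptor = fuse_descriptor(global_feat, local_feats)
print(descriptor.shape)  # (640,)
```

Retrieval then ranks gallery images by the distance between such descriptors, which is why concatenating complementary levels can surface details a single global vector misses.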


2019 ◽  
Vol 55 (13) ◽  
pp. 742-745 ◽  
Author(s):  
Kang Yang ◽  
Huihui Song ◽  
Kaihua Zhang ◽  
Jiaqing Fan

2021 ◽  
Vol 191 ◽  
pp. 106479
Author(s):  
Qixin Sun ◽  
Xiujuan Chai ◽  
Zhikang Zeng ◽  
Guomin Zhou ◽  
Tan Sun

Author(s):  
Ying-Xiang Hu ◽  
Rui-Sheng Jia ◽  
Yong-Chao Li ◽  
Qi Zhang ◽  
Hong-Mei Sun

Author(s):  
Arjun Benagatte Channegowda ◽  
H N Prakash

Providing security in biometrics is a major challenge at present, and considerable research is devoted to this area. Security can be tightened by using more complex systems, such as recognition based on more than one biometric trait. In this paper, multimodal biometric models are developed to improve the recognition rate of a person, combining physiological and behavioral biometric characteristics: fingerprint and signature traits are used to build a multimodal recognition system. Histogram of oriented gradients (HOG) features are extracted from the biometric traits, and feature fusion is applied to them at two levels. Fingerprint and signature features are fused using concatenation, sum, max, min, and product rules at multilevel stages, and the fused features are used to train a deep learning neural network classifier. In the proposed work, results for this multi-level feature fusion are analyzed while varying the number of hidden neurons and hidden layers. Experiments carried out on the SDUMLA-HMT fingerprint datasets (Machine Learning and Data Mining Lab, Shandong University) and the MCYT signature datasets (Biometric Recognition Group) yield encouraging results.
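The five fusion rules named above can be sketched as simple element-wise or concatenation operations on feature vectors. This is an illustrative assumption of how the rules combine two modalities, not the paper's code: real HOG vectors would come from an extractor such as `skimage.feature.hog`, and random vectors stand in for them here. Note that the sum, max, min, and product rules require equal-length inputs, while concatenation does not.

```python
import numpy as np

def fuse(a, b, rule="concat"):
    """Combine two feature vectors with one of the named fusion rules."""
    rules = {
        "concat":  lambda: np.concatenate([a, b]),  # doubles the dimension
        "sum":     lambda: a + b,                   # element-wise rules below
        "max":     lambda: np.maximum(a, b),        # keep the same dimension
        "min":     lambda: np.minimum(a, b),
        "product": lambda: a * b,
    }
    return rules[rule]()

fp  = np.random.rand(64)  # stand-in for fingerprint HOG features
sig = np.random.rand(64)  # stand-in for signature HOG features
print(fuse(fp, sig, "concat").shape)   # (128,)
print(fuse(fp, sig, "product").shape)  # (64,)
```

The fused vector would then be fed to the neural network classifier; applying the rules at multiple stages simply repeats this combination on features from different levels.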
