Multi-level feature fusion model-based real-time person re-identification for forensics

2019 · Vol 17 (1) · pp. 73-81
Author(s): Shiqin Wang, Xin Xu, Lei Liu, Jing Tian

Algorithms · 2020 · Vol 13 (5) · pp. 111
Author(s): Shaojun Wu, Ling Gao

In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods extract only global features or only local features of pedestrian images, so inconspicuous details are easily ignored during feature learning, which is neither efficient nor robust for scenarios with large appearance differences. In this paper, we propose a Multi-level Feature Fusion model that combines the global and local features of images through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with a Part-based Multi-level Net, which fuses low-to-high-level local features of pedestrian images, while Global-Local Branches extract the local and global features at the highest level. Experiments show that our deep learning model based on multi-level feature fusion works well for person re-identification, and the overall results outperform the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming existing works by a large margin (more than 6%).
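The abstract does not include code, so the following is a minimal PyTorch sketch of the general idea it describes: tapping a backbone at several depths, pooling each level both globally and as horizontal body-part stripes, and concatenating everything into one pedestrian descriptor. The ResNet-50 backbone, the tapped stages, the number of parts, and the embedding size are illustrative assumptions, not the authors' exact design.

```python
# Sketch of multi-level global + part-based local feature fusion for person re-ID.
# Assumptions (not from the paper): ResNet-50 backbone, stages layer2-layer4 tapped,
# 4 horizontal parts, 256-d per-level embedding.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class MultiLevelFusionNet(nn.Module):
    def __init__(self, num_parts: int = 4, embed_dim: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)  # pretrained weights omitted to keep the example self-contained
        # Stem plus the three stages whose outputs are fused (low-, mid-, high-level).
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1)
        self.layer2, self.layer3, self.layer4 = backbone.layer2, backbone.layer3, backbone.layer4
        # One 1x1 projection per tapped stage so every level contributes embed_dim channels.
        self.reduce = nn.ModuleList(nn.Conv2d(c, embed_dim, kernel_size=1)
                                    for c in (512, 1024, 2048))
        self.global_pool = nn.AdaptiveAvgPool2d(1)             # global branch
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))  # horizontal part stripes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f2 = self.layer2(self.stem(x))   # low-level features
        f3 = self.layer3(f2)             # mid-level features
        f4 = self.layer4(f3)             # high-level features
        descriptors = []
        for feat, proj in zip((f2, f3, f4), self.reduce):
            feat = proj(feat)
            g = self.global_pool(feat).flatten(1)  # (B, embed_dim) global feature
            p = self.part_pool(feat).flatten(1)    # (B, embed_dim * num_parts) local features
            descriptors.extend([g, p])
        # Final descriptor: concatenation of global + local features from every level;
        # retrieval would compare these vectors with cosine or Euclidean distance.
        return torch.cat(descriptors, dim=1)


if __name__ == "__main__":
    model = MultiLevelFusionNet().eval()
    with torch.no_grad():
        desc = model(torch.randn(2, 3, 256, 128))  # a common re-ID input resolution
    print(desc.shape)  # torch.Size([2, 3840]) with the defaults above
```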

