Crowd Counting from a Still Image Using Multi-scale Fully Convolutional Network with Adaptive Human-Shaped Kernel

Author(s):  
Jinmeng Cao ◽  
Biao Yang ◽  
Yuyu Zhang ◽  
Ling Zou
Author(s):  
Peiyu Yang ◽  
Guofeng Zhang ◽  
Lu Wang ◽  
Lisheng Xu ◽  
Qingxu Deng ◽  
...  

2019 ◽  
Vol 9 (10) ◽  
pp. 2042 ◽  
Author(s):  
Rachida Tobji ◽  
Wu Di ◽  
Naeem Ayoub

Recent deep learning work shows that neural networks have high potential in the field of biometric security. The advantage of this type of architecture, beyond its robustness, is that the network learns feature vectors automatically, building its own filters through the convolution layers. In this paper, we propose an algorithm, “FMnet”, for iris recognition using a Fully Convolutional Network (FCN) and a Multi-scale Convolutional Neural Network (MCNN). By exploiting the ability of convolutional neural networks to learn and operate at different resolutions, the proposed method performs feature extraction and classification jointly, overcoming the limitations of classical approaches that rely only on handcrafted feature extraction. The proposed algorithm shows better classification results than other state-of-the-art iris recognition approaches.
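The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of a multi-scale, fully convolutional classifier in the spirit of the description; the input size, branch kernel sizes, channel counts, and class count are all hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions with different kernel sizes approximate
    the idea of learning features at several resolutions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1))

class FMnetSketch(nn.Module):
    """Fully convolutional iris classifier: feature extraction and
    classification are learned end to end (no handcrafted features).
    This is an illustrative sketch, not the authors' FMnet."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            MultiScaleBlock(1, 16),       # grayscale iris image in
            nn.MaxPool2d(2),
            MultiScaleBlock(48, 32),
            nn.MaxPool2d(2),
        )
        # A 1x1 convolution acts as the classifier, keeping the network
        # fully convolutional; global pooling yields per-class scores.
        self.classifier = nn.Conv2d(96, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.features(x)
        x = self.classifier(x)
        return x.mean(dim=(2, 3))         # global average pooling -> logits

# Example: a batch of four 64x64 normalized iris images, 100 identities (assumed).
logits = FMnetSketch(num_classes=100)(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 100])
```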


Author(s):  
Yancheng Bai ◽  
Wenjing Ma ◽  
Yucheng Li ◽  
Liangliang Cao ◽  
Wen Guo ◽  
...  

2020 ◽  
Vol 14 (7) ◽  
pp. 443-451
Author(s):  
Suyu Wang ◽  
Bin Yang ◽  
Bo Liu ◽  
Guanghui Zheng

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jingfan Tang ◽  
Meijia Zhou ◽  
Pengfei Li ◽  
Min Zhang ◽  
Ming Jiang

Current crowd counting methods rely on a fully convolutional network to generate a density map and can achieve good performance. However, because of crowd occlusion and perspective distortion in the image, the directly generated density map usually neglects scale information and spatial contact information. To address this, we propose MDPDNet (Multiresolution Density maps and Parallel Dilated convolutions’ Network) to reduce the influence of occlusion and distortion on crowd estimation. The network is composed of two modules: (1) the parallel dilated convolution module (PDM), which combines three dilated convolutions in parallel to obtain deep features over a larger receptive field with fewer parameters while reducing the loss of multi-scale information; (2) the multiresolution density map module (MDM), which contains a three-branch network that extracts spatial contact information from three low-resolution density maps and feeds it into the final crowd density map. Experiments show that MDPDNet achieves excellent results on three mainstream datasets (ShanghaiTech, UCF_CC_50, and UCF-QNRF).
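The abstract does not specify the exact layer configuration, so the sketch below only illustrates one plausible way to wire the PDM (three parallel dilated convolutions) and the MDM (three density-map branches at decreasing resolution) together in PyTorch; the backbone depth, channel counts, and pooling factors are assumptions for illustration, not the published MDPDNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PDM(nn.Module):
    """Parallel dilated convolution module: three dilated convolutions
    applied in parallel enlarge the receptive field with few parameters."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dil1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, dilation=1)
        self.dil2 = nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2)
        self.dil3 = nn.Conv2d(in_ch, out_ch, 3, padding=3, dilation=3)

    def forward(self, x):
        # Sum the parallel branches so the channel count stays fixed.
        return F.relu(self.dil1(x) + self.dil2(x) + self.dil3(x))

class MDM(nn.Module):
    """Multiresolution density map module: three branches predict density
    maps at decreasing resolutions; their upsampled outputs are fused
    into the final density estimate."""
    def __init__(self, in_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, 1, kernel_size=1) for _ in range(3)])
        self.fuse = nn.Conv2d(3, 1, kernel_size=1)

    def forward(self, x):
        maps = []
        for i, branch in enumerate(self.branches):
            # Predict at full, half, and quarter feature resolution (assumed).
            feat = F.avg_pool2d(x, 2 ** i) if i > 0 else x
            m = branch(feat)
            maps.append(F.interpolate(m, size=x.shape[2:], mode='bilinear',
                                      align_corners=False))
        return self.fuse(torch.cat(maps, dim=1))

class MDPDNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            PDM(32, 32),
            PDM(32, 32),
        )
        self.head = MDM(32)

    def forward(self, x):
        return self.head(self.backbone(x))   # predicted crowd density map

# The crowd count is the integral (sum) of the predicted density map.
density = MDPDNetSketch()(torch.randn(1, 3, 256, 256))
print(density.shape, density.sum().item())
```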

