dense crowd
Recently Published Documents


TOTAL DOCUMENTS

68
(FIVE YEARS 33)

H-INDEX

11
(FIVE YEARS 3)

2022 ◽  
Vol 355 ◽  
pp. 03020
Author(s):  
Yitong Mao

Real-time pedestrian detection requires a model that is both lightweight and robust. At the same time, pedestrian detection scenes often involve aerial (bird's-eye) viewing angles, overlapping objects, and weak lighting. To design a more robust real-time detection model for weak-light and crowded scenes, this paper proposes a more efficient convolutional network based on YOLO. The experimental results show that, compared with the YOLOX network, the improved YOLO network achieves better detection in low-light and dense crowd scenes, with a 5.0% advantage over YOLOX-s on the pedestrian AP index and a 44.2% advantage over YOLOX-s on the FPS index.
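As a reference for the AP figure quoted above, the following is a minimal sketch (not the paper's evaluation code) of single-class average precision at an IoU threshold of 0.5; all function names and the toy boxes are illustrative assumptions.

```python
# Minimal sketch of pedestrian AP at IoU 0.5 (VOC-style all-point interpolation).
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """detections: list of (score, box); gt_boxes: list of boxes for one image."""
    detections = sorted(detections, key=lambda d: -d[0])
    matched = [False] * len(gt_boxes)
    tp, fp = [], []
    for score, box in detections:
        ious = [iou(box, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[best]:
            matched[best] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gt_boxes), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # Integrate the precision envelope over recall.
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    idx = np.where(recall[1:] != recall[:-1])[0]
    return float(np.sum((recall[idx + 1] - recall[idx]) * precision[idx + 1]))

# Example: two detections against one ground-truth pedestrian box.
gts = [[10, 10, 50, 100]]
dets = [(0.9, [12, 11, 49, 98]), (0.4, [200, 200, 240, 280])]
print(average_precision(dets, gts))  # 1.0: the high-score detection matches the GT
```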


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Rui Zhu ◽  
Kangning Yin ◽  
Hang Xiong ◽  
Hailian Tang ◽  
Guangqiang Yin

Wearing masks is an effective and simple method to prevent the spread of the COVID-19 pandemic in public places, such as train stations, classrooms, and streets. It is of positive significance to urge people to wear masks with computer vision technology. However, existing detection methods mainly target simple scenes, and missed face detections are prone to occur in dense crowds with varying scales and occlusions. Moreover, the data obtained by surveillance cameras in public places are difficult to collect for centralized training, due to the privacy of individuals. To solve these problems, a cascaded network is proposed: the first level is the Dilation RetinaNet Face Location (DRFL) network, which contains an Enhanced Receptive Field Context (ERFC) module with dilated convolution, aiming to reduce network parameters and locate faces at different scales. To adapt to embedded camera devices, the second level is the SRNet20 network, which is created by Neural Architecture Search (NAS). Because of privacy protection, it is difficult to share surveillance video in practice, so our SRNet20 network is trained with federated learning. Meanwhile, we have built a masked face dataset containing about 20,000 images. Finally, the experiments highlight that the detection mAP of the face location is 90.6% on the Wider Face dataset, and the classification mAP of the masked face classification is 98.5% on the dataset we made, which means our cascaded network can detect masked faces in dense crowd scenes well.
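The abstract only states that SRNet20 is trained with federated learning; below is a minimal sketch of federated averaging (FedAvg), the general scheme that phrase implies, using a toy linear model rather than the authors' classifier. The client/model structure and all constants are assumptions for illustration.

```python
# Minimal FedAvg sketch: clients train locally on private data, the server
# averages model weights; raw data never leaves a client.
import numpy as np

def local_update(weights, client_data, lr=0.1, epochs=1):
    """Toy local step: one linear model trained with gradient descent."""
    w = weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg(global_w, clients, rounds=5):
    """One round = local training on each client + size-weighted averaging."""
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        locals_ = [local_update(global_w, c) for c in clients]
        global_w = sum(s * w for s, w in zip(sizes, locals_)) / sizes.sum()
    return global_w

# Example: three "cameras" holding private data from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
print(fedavg(np.zeros(2), clients, rounds=50))   # approaches [2, -1]
```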


Author(s):  
Greg Olmschenk ◽  
Xuan Wang ◽  
Hao Tang ◽  
Zhigang Zhu

Gatherings of thousands to millions of people frequently occur for an enormous variety of educational, social, sporting, and political events, and automated counting of these high-density crowds is useful for safety, management, and measuring significance of an event. In this work, we show that the regularly accepted labeling scheme of crowd density maps for training deep neural networks may not be the most effective one. We propose an alternative inverse k-nearest neighbor (ikNN) map mechanism that, even when used directly in existing state-of-the-art network structures, shows superior performance. We also provide new network architecture mechanisms that we demonstrate in our own MUD-ikNN network architecture, which uses multi-scale drop-in replacement upsampling via transposed convolutions to take full advantage of the provided ikNN labeling. This upsampling combined with the ikNN maps further improves crowd counting accuracy. We further analyze several variations of the ikNN labeling mechanism, which apply transformations on the kNN measure before generating the map, in order to consider the impact of camera perspective views, image resolutions, and the changing rates of the mapping functions. To alleviate the effects of crowd density changes in each image, we also introduce an attenuation mechanism in the ikNN mapping. Experimentally, we show that the inverse square root kNN map variation (iRkNN) provides the best performance. Discussions are provided on computational complexity, label resolutions, the gains in mapping and upsampling, and details of critical cases such as various crowd counts, uneven crowd densities, and crowd occlusions.
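To make the labeling idea concrete, here is a minimal sketch of an inverse k-nearest-neighbor map: each pixel stores an inverse function of its distance to the k-th nearest annotated head position. The exact transform 1/(d_k + 1) and the constants are assumptions for illustration, not necessarily the paper's precise definition.

```python
# Minimal ikNN label-map sketch for crowd counting.
import numpy as np
from scipy.spatial import cKDTree

def iknn_map(head_points, height, width, k=3):
    """head_points: (N, 2) array of (row, col) head annotations."""
    tree = cKDTree(head_points)
    rows, cols = np.mgrid[0:height, 0:width]
    pixels = np.stack([rows.ravel(), cols.ravel()], axis=1)
    # Distance from every pixel to its k-th nearest head annotation.
    d_k, _ = tree.query(pixels, k=k)
    d_k = d_k[:, -1] if k > 1 else d_k
    # Inverse mapping: values near 1 close to heads, decaying with distance.
    return (1.0 / (d_k + 1.0)).reshape(height, width)

# Example: three annotated heads in a 64x64 image.
heads = np.array([[10, 12], [30, 40], [50, 20]], dtype=float)
label = iknn_map(heads, 64, 64, k=3)
print(label.shape, label.max(), label.min())
```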


2021 ◽  
Author(s):  
Ramana Sundararaman ◽  
Cedric De Almeida Braga ◽  
Eric Marchand ◽  
Julien Pettre

SIMULATION ◽  
2021 ◽  
pp. 003754972110031
Author(s):  
Omar Hesham ◽  
Gabriel Wainer

Computer simulation of dense crowds is finding increased use in event planning, congestion prediction, and threat assessment. State-of-the-art particle-based crowd methods assume and aim for collision-free trajectories. That is an idealistic but not entirely realistic expectation, as near-collisions increase in dense and rushed settings compared with typically sparse pedestrian scenarios. Centroidal particle dynamics (CPD) is a method we defined that explicitly models the compressible personal space area surrounding each entity to inform its local pathing and collision-avoidance decisions. We illustrate how our proposed agent-based method for local dynamics can reproduce several key emergent dense crowd phenomena at the microscopic level with higher congruence to real trajectory data and with more visually convincing collision-avoidance paths than the existing state of the art. We present advanced models that consider distraction of pedestrians in the crowd, flocking behavior, interaction with vehicles (ambulances, police), and other extensions, and show that the emergent behavior of the simulated crowds is similar to the behavior observed in reality. We discuss how to increase confidence in CPD, potentially making it also suitable for use in safety-critical applications, including urban design, evacuation analysis, and crowd-safety planning.
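For intuition, the following is a minimal sketch of a particle-based crowd update in which each agent carries a compressible personal-space disc that produces a repulsive correction when overlapped. This is not the actual CPD formulation; the force model and all constants are illustrative assumptions.

```python
# Toy particle-based crowd step with personal-space repulsion.
import numpy as np

def crowd_step(pos, vel, goals, radius=0.5, repulsion=2.0, dt=0.1, max_speed=1.4):
    """pos, vel, goals: (N, 2) arrays; returns updated positions and velocities."""
    # Desired velocity: unit vector toward each agent's goal at preferred speed.
    to_goal = goals - pos
    desired = max_speed * to_goal / (np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9)
    force = desired - vel                        # relaxation toward desired velocity
    # Pairwise repulsion whenever personal-space discs overlap.
    diff = pos[:, None, :] - pos[None, :, :]     # (N, N, 2)
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    overlap = np.maximum(2 * radius - dist, 0.0)
    np.fill_diagonal(overlap, 0.0)
    force += repulsion * (overlap[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
    vel = vel + dt * force
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel * max_speed / speed, vel)
    return pos + dt * vel, vel

# Example: 50 agents heading toward the same exit.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 5, size=(50, 2))
vel = np.zeros((50, 2))
goals = np.tile([20.0, 2.5], (50, 1))
for _ in range(100):
    pos, vel = crowd_step(pos, vel, goals)
print(pos.mean(axis=0))   # the crowd drifts toward the exit without stacking up
```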


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 136032-136044
Author(s):  
Liangjun Huang ◽  
Luning Zhu ◽  
Shihui Shen ◽  
Qing Zhang ◽  
Jianwei Zhang
