Pedestrian detection system based on stereo vision for mobile robot

Author(s): Bodam Nam ◽ Sung-il Kang ◽ Hyunki Hong

2020 ◽ Vol 32 (3) ◽ pp. 494-502
Author(s): Tokihiko Akita ◽ Yuji Yamauchi ◽ Hironobu Fujiyoshi

The frequency of pedestrian traffic accidents continues to increase in Japan; driver assistance systems are therefore expected to reduce the number of accidents. However, current environmental recognition sensors struggle to detect crossing pedestrians when the vehicle turns at intersections, owing to limitations in field of view and cost. We propose a pedestrian detection system that utilizes surround-view fisheye cameras with a wide field of view. The system can be realized at low cost if the vehicle is already equipped with fisheye cameras. Pedestrian positions must be detected accurately because precise prediction of future collision points is required at intersections, and stereo vision is suitable for this purpose. However, realizing stereo vision with fisheye cameras raises several concerns: lens distortion, asynchronous image capture, and fluctuating camera postures. As a countermeasure, we propose a novel method that combines several machine-learning techniques. For stereo matching, D-Brief descriptors, histograms of oriented gradients, and normalized cross-correlation are combined by a support-vector machine. A random forest is adopted to discriminate pedestrians from noise in the reconstructed 3D point cloud. We evaluated the system on images of crossing pedestrians at actual intersections, achieving a tracking rate of 96.0% and verifying that the algorithm can detect a pedestrian with an average position error of 0.17 m.
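The abstract does not give implementation details, but the general pattern it describes (a learned verifier for stereo matches plus a classifier over point-cloud clusters) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, the two-element match feature (HOG distance plus NCC score), the cluster descriptors, and the placeholder training data are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): an SVM verifies candidate
# stereo patch matches from similarity features, and a random forest separates
# pedestrian clusters from noise in the reconstructed point cloud.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized image patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_features(desc_a, desc_b, patch_a, patch_b):
    """Stack a descriptor distance (e.g., HOG) with the NCC score."""
    hog_dist = np.linalg.norm(desc_a - desc_b)   # descriptor dissimilarity
    ncc_score = ncc(patch_a, patch_b)            # photometric similarity
    return np.array([hog_dist, ncc_score])

# SVM decides whether a left/right candidate pair is a correct match.
# Placeholder training data; real labels would come from annotated pairs.
X_match = np.random.rand(200, 2)
y_match = np.random.randint(0, 2, 200)
match_svm = SVC(kernel="rbf").fit(X_match, y_match)

# Random forest labels 3D clusters (e.g., height, width, depth extent,
# point count -- an assumed 4-D feature) as pedestrian vs. noise.
X_clusters = np.random.rand(200, 4)
y_clusters = np.random.randint(0, 2, 200)
cluster_rf = RandomForestClassifier(n_estimators=100).fit(X_clusters, y_clusters)

# Example query: verify one candidate pair, then classify one cluster.
pair_feat = match_features(np.random.rand(36), np.random.rand(36),
                           np.random.rand(16, 16), np.random.rand(16, 16))
is_match = match_svm.predict(pair_feat.reshape(1, -1))[0]
is_pedestrian = cluster_rf.predict(np.random.rand(1, 4))[0]
```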


2020 ◽ Vol 14 (10) ◽ pp. 1319-1327
Author(s): Pedro Augusto Pinho Ferraz ◽ Bernardo Augusto Godinho de Oliveira ◽ Flávia Magalhães Freitas Ferreira ◽ Carlos Augusto Paiva da Silva Martins

Sensors ◽ 2021 ◽ Vol 21 (5) ◽ pp. 1820
Author(s): Xiaotao Shao ◽ Qing Wang ◽ Wei Yang ◽ Yun Chen ◽ Yi Xie ◽ ...

Existing pedestrian detection algorithms cannot effectively extract features of heavily occluded targets, which results in lower detection accuracy. To handle heavy occlusion in crowds, we propose a multi-scale feature pyramid network based on ResNet (MFPN) that enhances the features of occluded targets and improves detection accuracy. MFPN includes two modules: a double feature pyramid network (FPN) integrated with ResNet (DFR) and a repulsion loss of minimum (RLM). The proposed double FPN improves the architecture to further enhance the semantic information and contours of occluded pedestrians, providing a new way to extract features of occluded targets. The features extracted by our network are better separated and clearer, especially for heavily occluded pedestrians. Repulsion loss is introduced to improve the loss function by keeping predicted boxes away from the ground truths of unrelated targets. In experiments on the public CrowdHuman dataset, we obtain 90.96% AP, the best performance, a gain of 5.16% AP over the FPN-ResNet50 baseline. Compared with state-of-the-art works, our method boosts the performance of the pedestrian detection system.
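The repulsion idea mentioned above (penalizing predicted boxes that overlap the ground truths of other, unrelated targets) can be illustrated with a small sketch. This is a simplified sketch of the general repulsion-loss idea, not the paper's exact RLM formulation: the IoG measure, the smooth-ln penalty shape, and the sigma value are illustrative assumptions.

```python
# Minimal sketch (assumptions): a repulsion term that grows when a predicted
# box overlaps a ground-truth box it is NOT assigned to. Boxes are (x1, y1, x2, y2).
import numpy as np

def iog(pred, gt):
    """Intersection over the ground-truth area (IoG)."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / gt_area if gt_area > 0 else 0.0

def smooth_ln(x, sigma=0.5):
    """Smoothed penalty: logarithmic below sigma, linear above (illustrative)."""
    if x <= sigma:
        return -np.log(1.0 - x)
    return (x - sigma) / (1.0 - sigma) - np.log(1.0 - sigma)

def repulsion_term(pred_boxes, gt_boxes, assigned_idx):
    """Average penalty between each prediction and its most-overlapping
    non-assigned ground truth (the 'unrelated' target)."""
    penalties = []
    for pred, own in zip(pred_boxes, assigned_idx):
        overlaps = [iog(pred, gt) for j, gt in enumerate(gt_boxes) if j != own]
        if overlaps:
            penalties.append(smooth_ln(max(overlaps)))
    return float(np.mean(penalties)) if penalties else 0.0

# Example: two predictions, two ground truths; prediction i is assigned to GT i.
preds = np.array([[10, 10, 60, 120], [55, 12, 110, 118]], dtype=float)
gts   = np.array([[12, 10, 58, 121], [60, 11, 112, 119]], dtype=float)
print(repulsion_term(preds, gts, assigned_idx=[0, 1]))
```

In a full detector this term would be added, with a weighting factor, to the usual classification and box-regression losses so that training discourages predictions from drifting toward neighboring pedestrians in a crowd.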


2018 ◽ Vol 78 (12) ◽ pp. 15861-15885
Author(s): Redouan Lahmyed ◽ Mohamed El Ansari ◽ Ayoub Ellahyani
