At the Traffic Intersection, Stopping, or Walking? Pedestrian Path Prediction Based on KPOF-GPDM for Driving Assistance

2021, Vol. 2021, pp. 1-18
Author(s):  
Xudong Long ◽  
Weiwei Zhang ◽  
Bo Zhao ◽  
Shaoxing Mo

Pedestrian detection has long been a research hotspot in Advanced Driver Assistance Systems (ADAS), with great progress in recent years. However, an ADAS must not only detect the behavior of pedestrians in front of the vehicle but also predict their future actions and motion trajectories. In this paper, we therefore propose a human key point combined optical flow network (KPOF-Net) for vehicle ADAS that handles occlusion in real scenes. When the vehicle encounters an occluded pedestrian at a traffic intersection, we use self-flow to estimate the global optical flow in the image sequence, then propose a White Edge Cutting (WEC) algorithm to remove obstructions and a lightly modified generative adversarial network to initialize the pedestrians behind them. Next, we extract pedestrian optical flow information and human joint point information in parallel, training four human key point models suited to traffic intersections. Finally, we propose KPOF-GPDM fusion, which combines optical flow information with human key point information to predict the future states and walking trajectories of pedestrians. In the experiments, we not only compare our method with four other representative approaches on the same scene sequences but also verify the accuracy of the system's pedestrian motion-state and trajectory prediction after fusing human joint points and optical flow information. Taking the real-time performance of the system into account, in a low-speed, barrier-free environment the comparative analysis covers only three prediction models: optical flow information alone, human joint point information alone, and KPOF-Net.
The results show that (1) in the same traffic environment, the proposed KPOF-Net predicts changes in pedestrian motion state about 5 frames (about 0.26 s) earlier than other state-of-the-art systems; (2) our system also predicts pedestrian trajectories more accurately than the other four systems, achieving a more stable minimum error of ±0.04 m; (3) in a low-speed, barrier-free experimental environment, the proposed trajectory prediction model that fuses human joint points and optical flow information yields higher prediction accuracy and smaller fluctuations than either single-information prediction model, and it can be well applied to automotive ADAS.
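The fusion step described above can be sketched at a high level. The following is a minimal illustrative sketch, not the authors' implementation: it assumes per-frame optical-flow features and joint-point features are already extracted, fuses them by simple concatenation (the actual KPOF fusion is not specified here), and uses a plain RBF-kernel Gaussian-process regressor as a stand-in for the GPDM to predict the pedestrian's next position.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def fuse_features(flow_feats, joint_feats):
    # Hypothetical fusion: concatenate flow and joint-point features.
    return np.concatenate([flow_feats, joint_feats], axis=-1)

def gp_predict_next(X_train, Y_train, x_query, noise=1e-2):
    # GP regression: next pedestrian position from fused features.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = rbf_kernel(x_query[None, :], X_train)
    return k_star @ np.linalg.solve(K, Y_train)

# Toy sequence: a pedestrian walking at roughly constant velocity.
T = 20
positions = np.stack([np.linspace(0, 2, T), np.zeros(T)], axis=1)
flow = np.tile([0.1, 0.0], (T, 1))       # stand-in flow features
joints = positions + 0.01                # stand-in joint-centroid features
feats = fuse_features(flow, joints)
X, Y = feats[:-1], positions[1:]         # features -> next-frame position
pred = gp_predict_next(X, Y, feats[-1])  # predicted next position, shape (1, 2)
```

The sketch only captures the data flow (fused features in, next position out); the real GPDM additionally learns a low-dimensional latent dynamical model of the motion.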

Electronics, 2021, Vol. 10 (3), pp. 222
Author(s):  
Baigan Zhao ◽  
Yingping Huang ◽  
Hongjian Wei ◽  
Xing Hu

Visual odometry (VO) refers to the incremental estimation of the motion state of an agent (e.g., a vehicle or robot) from image information, and it is a key component of modern localization and navigation systems. Addressing the monocular VO problem, this paper presents a novel end-to-end network for estimating camera ego-motion. The network learns the latent subspace of optical flow (OF) and models sequential dynamics so that motion estimation is constrained by the relations between sequential images. We compute the OF field of consecutive images and extract the latent OF representation in a self-encoding manner. A recurrent neural network then examines the OF changes, i.e., conducts sequential learning. The extracted sequential OF subspace is used to regress the 6-dimensional pose vector. We derive three models with different network structures and training schemes: LS-CNN-VO, LS-AE-VO, and LS-RCNN-VO. In particular, we train the encoder separately in an unsupervised manner; this avoids non-convergence when training the whole network and allows a more general and effective feature representation. Extensive experiments on the KITTI and Malaga datasets demonstrate that our LS-RCNN-VO outperforms existing learning-based VO approaches.
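The three-stage structure the abstract describes, latent OF encoding, recurrent sequential learning, and 6-DoF pose regression, can be sketched as a single forward pass. This is an illustrative NumPy sketch with made-up layer sizes and randomly initialized weights, not the trained LS-RCNN-VO network; the real encoder is a convolutional autoencoder trained separately in an unsupervised manner.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_of(of_field, W_enc):
    # Self-encoding step: project a flattened OF field into a
    # low-dimensional latent OF subspace.
    return np.tanh(of_field.reshape(-1) @ W_enc)

def rnn_step(h, z, W_h, W_z):
    # Vanilla recurrent update over successive latent OF codes,
    # modelling the sequential dynamics between frames.
    return np.tanh(h @ W_h + z @ W_z)

def regress_pose(h, W_out):
    # Regress the 6-dimensional pose vector (3 translation, 3 rotation).
    return h @ W_out

H, W, L, D = 8, 8, 16, 32  # flow size, latent dim, hidden dim (toy values)
W_enc = rng.normal(0, 0.1, (H * W * 2, L))
W_h = rng.normal(0, 0.1, (D, D))
W_z = rng.normal(0, 0.1, (L, D))
W_out = rng.normal(0, 0.1, (D, 6))

h = np.zeros(D)
poses = []
for _ in range(5):  # 5 consecutive OF fields (2 channels: u, v)
    of_field = rng.normal(0, 1, (H, W, 2))
    z = encode_of(of_field, W_enc)  # latent OF representation
    h = rnn_step(h, z, W_h, W_z)    # sequential learning
    poses.append(regress_pose(h, W_out))
```

Because the pose at each step depends on the hidden state `h`, each estimate is constrained by the preceding frames, which is the sequential-dynamics property the paper exploits.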

