Motion Estimation Using Region-Level Segmentation and Extended Kalman Filter for Autonomous Driving

2021 ◽  
Vol 13 (9) ◽  
pp. 1828
Author(s):  
Hongjian Wei ◽  
Yingping Huang ◽  
Fuzhi Hu ◽  
Baigan Zhao ◽  
Zhiyang Guo ◽  
...  

Motion estimation is crucial for predicting where other traffic participants will be over a given period of time, and accordingly for planning the route of the ego-vehicle. This paper presents a novel approach to estimating the motion state by using region-level instance segmentation and an extended Kalman filter (EKF). Motion estimation involves three stages: object detection, tracking, and parameter estimation. We first use region-level segmentation to accurately locate the object region for the latter two stages. The region-level segmentation combines color, temporal (optical flow), and spatial (depth) information as the basis for segmentation by using super-pixels and a Conditional Random Field. Optical flow is then employed to track the feature points within the object area. In the parameter estimation stage, we develop a relative motion model of the ego-vehicle and the object, and accordingly establish an EKF model for point tracking and parameter estimation. The EKF model integrates the ego-motion, optical flow, and disparity to generate optimized motion parameters. During tracking and parameter estimation, we apply an edge point constraint and a consistency constraint to eliminate outliers among the tracking points, so that the feature points used for tracking are guaranteed to lie within the object body and the parameter estimates are refined by inlier points. Experiments have been conducted on the KITTI dataset, and the results demonstrate that our method performs excellently and outperforms other state-of-the-art methods in both object segmentation and parameter estimation.
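The abstract's relative-motion EKF integrates ego-motion, optical flow, and disparity; the paper's full model is not reproduced here, but the generic EKF predict/update cycle it builds on can be sketched with numpy alone. The toy below tracks a constant-velocity object from noisy position measurements; `ekf_step` and the constant-velocity setup are illustrative assumptions, not the paper's formulation (with a linear model the EKF reduces to the ordinary Kalman filter).

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle.
    x, P : prior state and covariance
    z    : measurement
    f, h : process and measurement functions
    F, H : their Jacobians evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate state and covariance through the process model
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct the prediction with the measurement
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy example: constant-velocity object, noisy position observations.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])      # state = [position, velocity]
f = lambda x: A @ x
F = lambda x: A
h = lambda x: x[:1]                        # observe position only
H = lambda x: np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

x, P = np.zeros(2), np.eye(2)
true = np.array([0.0, 1.0])                # starts at 0, moves at 1 m/s
rng = np.random.default_rng(0)
for _ in range(100):
    true = A @ true
    z = true[:1] + rng.normal(0.0, 0.1, 1)
    x, P = ekf_step(x, P, z, f, F, h, H, Q, R)
```

After a hundred steps the velocity estimate settles near the true 1 m/s even though only positions are measured, which is the property the paper exploits to recover motion parameters from tracked image points.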

2017 ◽  
Vol 10 (13) ◽  
pp. 254
Author(s):  
Ankush Rai ◽  
Jagadeesh Kanna R

This study presents an autonomous driving system based on the principle of trace vectors derived from a hyperproperty of a modified optical flow algorithm. This technique keeps track of past motion vectors by tracking the constraint sets, in order to overcome the non-linear attributes of the deformable feature points and motion vectors. The results presented in this work exhibit stable tracking and multi-step prediction in a limited number of steps with fewer training vectors.


2020 ◽  
Vol 9 (4) ◽  
pp. 202
Author(s):  
Junhao Cheng ◽  
Zhi Wang ◽  
Hongyan Zhou ◽  
Li Li ◽  
Jian Yao

Most Simultaneous Localization and Mapping (SLAM) methods assume that environments are static. Such a strong assumption limits the application of most visual SLAM systems, as dynamic objects cause many wrong data associations during the SLAM process. To address this problem, a novel visual SLAM method that follows the pipeline of feature-based methods, called DM-SLAM, is proposed in this paper. DM-SLAM combines an instance segmentation network with optical flow information to improve localization accuracy in dynamic environments, and supports monocular, stereo, and RGB-D sensors. It consists of four modules: semantic segmentation, ego-motion estimation, dynamic point detection, and a feature-based SLAM framework. The semantic segmentation module obtains pixel-wise segmentation results for potentially dynamic objects, and the ego-motion estimation module calculates the initial pose. In the third module, two different strategies are presented to detect dynamic feature points, one for the RGB-D/stereo case and one for the monocular case. In the first case, the feature points with depth information are reprojected into the current frame, and the reprojection offset vectors are used to distinguish the dynamic points. In the other case, we utilize the epipolar constraint to accomplish this task. The remaining static feature points are then fed into the fourth module. The experimental results on the public TUM and KITTI datasets demonstrate that DM-SLAM outperforms standard visual SLAM baselines in terms of accuracy in highly dynamic environments.
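The monocular strategy above flags a tracked point as dynamic when it violates the epipolar constraint, i.e. when the point in the current frame lies far from the epipolar line induced by its match in the previous frame. A minimal numpy sketch of that geometric test follows; the fundamental matrix, threshold, and point coordinates are illustrative assumptions, not DM-SLAM's actual values.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (pixels) from p2 to the epipolar line l = F @ p1.
    p1, p2 are homogeneous pixel coordinates [u, v, 1]."""
    l = F @ p1
    return abs(l @ p2) / np.hypot(l[0], l[1])

def is_dynamic(F, p1, p2, thresh=1.0):
    """Flag a tracked point as dynamic if it strays from its epipolar line."""
    return epipolar_distance(F, p1, p2) > thresh

# Toy fundamental matrix for a camera translating along x with identity
# intrinsics and no rotation: F = [t]_x with t = (1, 0, 0), so epipolar
# lines are horizontal image rows.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

# A static point slides along its epipolar line (same row, small noise);
# an independently moving point crosses rows and breaks the constraint.
static_p1, static_p2 = np.array([100.0, 50.0, 1.0]), np.array([110.0, 50.2, 1.0])
moving_p1, moving_p2 = np.array([200.0, 80.0, 1.0]), np.array([210.0, 95.0, 1.0])
```

In a real pipeline the fundamental matrix would be estimated from the static correspondences themselves (e.g. with RANSAC), and points failing this test would be excluded from pose optimization.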


Author(s):  
Vladimir Joukov ◽  
Vincent Bonnet ◽  
Michelle Karg ◽  
Gentiane Venture ◽  
Dana Kulic

Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5520
Author(s):  
Ángel Llamazares ◽  
Eduardo Molinos ◽  
Manuel Ocaña ◽  
Vladimir Ivan

The goal of this paper is to improve our previous Dynamic Obstacle Mapping (DOMap) system by improving the perception stage. The new system must handle both robots and people as dynamic obstacles, using a Light Detection and Ranging (LIDAR) sensor to collect information about the surroundings. Although robot movement can be easily tracked by an Extended Kalman Filter (EKF), people's movement is more unpredictable and may not be correctly linearized by an EKF; a better estimator for both types of dynamic objects in the local map is therefore needed. DOMap has been extended in three key respects: first, LIDAR reflectivity (remission) is used to make the optical-flow matching in the detection stage more robust; second, static and dynamic occlusion detectors are proposed; and finally, a tracking stage based on a Particle Filter (PF) handles both robots and people as dynamic obstacles. Our improved DOMap (iDOMap) thus provides maps with information about the occupancy and velocities of surrounding dynamic obstacles (robots, people, etc.) in a more robust way, which is then available to improve the subsequent planning stage.
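The particle filter is chosen here precisely because, unlike the EKF, it represents the belief with a cloud of samples and so needs no linearization of a person's erratic motion. A bootstrap-filter sketch in plain numpy, tracking a 1D drifting obstacle, shows the predict/reweight/resample cycle; the noise levels, particle count, and multinomial resampling scheme are illustrative assumptions rather than iDOMap's configuration.

```python
import numpy as np

def particle_filter_step(particles, weights, z, motion_std, meas_std, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.
    particles : (N,) array of position hypotheses
    z         : noisy position measurement
    """
    # Predict: diffuse particles with the motion model noise
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the Gaussian measurement likelihood
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
N = 500
particles = rng.uniform(-10.0, 10.0, N)          # uninformed prior
weights = np.full(N, 1.0 / N)
true_pos = 2.0
for _ in range(30):
    true_pos += 0.1                              # obstacle drifts slowly
    z = true_pos + rng.normal(0.0, 0.3)
    particles, weights = particle_filter_step(
        particles, weights, z, motion_std=0.2, meas_std=0.3, rng=rng)
estimate = particles.mean()
```

Because the posterior is carried by the samples themselves, the same loop tracks multimodal or sharply turning trajectories that would break an EKF's single-Gaussian assumption.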

