Classification and tracking of dynamic objects with multiple sensors for autonomous driving in urban environments

Author(s):  
Michael Darms ◽  
Paul Rybski ◽  
Chris Urmson
Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3714 ◽  
Author(s):  
Guihua Liu ◽  
Weilin Zeng ◽  
Bo Feng ◽  
Feng Xu

Presently, although many impressive SLAM systems have achieved exceptional accuracy in real environments, most of them have been verified only in static environments. However, for mobile robots and autonomous driving, dynamic objects in the scene can cause tracking failure or large deviations during pose estimation. In this paper, a general visual SLAM system for dynamic scenes with multiple sensors, called DMS-SLAM, is proposed. First, a combination of GMS feature matching and a sliding window is used to initialize the system, which eliminates the influence of dynamic objects and constructs a static initial 3D map. Then, the 3D points of the local map corresponding to the current frame are obtained by reprojection. These points are combined with a constant-velocity model or a reference-frame model to estimate the pose of the current frame and to update the 3D map points in the local map. Finally, the keyframes selected by the tracking module are combined with the GMS feature matching algorithm to add static 3D map points to the local map. DMS-SLAM performs pose tracking, loop-closure detection and relocalization based on the static 3D map points of the local map, and supports monocular, stereo and RGB-D visual sensors in dynamic scenes. Exhaustive evaluation on the public TUM and KITTI datasets demonstrates that DMS-SLAM outperforms state-of-the-art visual SLAM systems in both accuracy and speed in dynamic scenes.
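The grid-based motion statistics (GMS) idea behind the static initialization can be sketched as follows. This is a minimal illustration, not the DMS-SLAM implementation: the cell size and support threshold are invented here, and real GMS also checks neighboring cells. The intuition is that correct matches on static structure are supported by many nearby matches with the same cell-to-cell motion, while matches on dynamic objects (or mismatches) are not.

```python
from collections import defaultdict

def gms_filter(matches, cell=64, min_support=3):
    """Simplified grid-based motion-statistics filter (illustrative sketch).

    matches: list of ((x1, y1), (x2, y2)) point correspondences between
    two frames. A match is kept when enough other matches share the same
    source-cell -> target-cell pair, i.e. the local motion is consistent.
    """
    support = defaultdict(int)
    keys = []
    for (x1, y1), (x2, y2) in matches:
        key = (int(x1 // cell), int(y1 // cell),
               int(x2 // cell), int(y2 // cell))
        keys.append(key)
        support[key] += 1
    # Keep only matches whose cell-pair motion has enough supporters.
    return [m for m, k in zip(matches, keys) if support[k] >= min_support]
```

Applied to the matches of an initialization window, the surviving correspondences would then seed the static 3D map.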


2021 ◽  
Vol 13 (22) ◽  
pp. 4525
Author(s):  
Junjie Zhang ◽  
Kourosh Khoshelham ◽  
Amir Khodabandeh

Accurate and seamless vehicle positioning is fundamental for autonomous driving tasks in urban environments and requires high-end measuring devices. Light Detection and Ranging (lidar) sensors, together with Global Navigation Satellite System (GNSS) receivers, are therefore commonly found onboard modern vehicles. In this paper, we propose an integration of lidar and GNSS code measurements at the observation level via a mixed measurement model. An Extended Kalman Filter (EKF) is implemented to capture the dynamics of the vehicle's motion and thus to incorporate the vehicle velocity parameters into the measurement model. The lidar positioning component is realized using point cloud registration through a deep neural network, aided by a high-definition (HD) map comprising accurately georeferenced scans of the road environment. Experiments conducted in a densely built-up environment show that, by exploiting the abundant measurements of GNSS and the high accuracy of lidar, the proposed vehicle positioning approach can maintain centimeter- to meter-level accuracy for the entire driving duration in urban canyons.
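The core of such a filter can be sketched with a one-dimensional constant-velocity Kalman filter that fuses noisy position fixes (e.g. GNSS code positions or lidar map-registration positions) while carrying a velocity state, loosely mirroring the EKF described above. This is a didactic sketch, not the paper's mixed measurement model; the noise values are placeholders, and the state is kept at 2x2 so plain Python lists suffice.

```python
class KalmanCV1D:
    """1-D constant-velocity Kalman filter (illustrative sketch).

    State: [position, velocity]. predict() propagates with
    F = [[1, dt], [0, 1]]; update() fuses a position-only measurement
    with H = [1, 0].
    """
    def __init__(self, q=0.1, r=1.0):
        self.x = [0.0, 0.0]                     # position, velocity
        self.P = [[100.0, 0.0], [0.0, 100.0]]   # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def predict(self, dt):
        p, v = self.x
        self.x = [p + v * dt, v]
        P = self.P
        # P <- F P F^T + Q
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]

    def update(self, z):
        y = z - self.x[0]                       # innovation
        s = self.P[0][0] + self.r               # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        P = self.P
        # P <- (I - K H) P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
```

Feeding the filter position fixes from a vehicle moving at constant speed, the velocity state converges even though only positions are measured, which is what allows velocity to enter the measurement model.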


Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 43 ◽  
Author(s):  
Rendong Wang ◽  
Youchun Xu ◽  
Miguel Angel Sotelo ◽  
Yulin Ma ◽  
Thompson Sarkodie-Gyan ◽  
...  

The registration of point clouds in urban environments faces problems such as dynamic vehicles and pedestrians, changing road environments, and GPS inaccuracies. State-of-the-art methodologies usually incorporate dynamic-object tracking and/or static-feature extraction into the point cloud pipeline to address these problems; however, they still suffer from minor initial position errors. In this paper, the authors propose a fast and robust registration method that requires no detection of dynamic or static objects and can adapt to larger initial errors. The method first optimizes object segmentation under a series of constraints. Based on this, a novel multi-layer nested RANSAC algorithmic framework is proposed to iteratively update the registration results. The robustness and efficiency of the algorithm are demonstrated on several highly dynamic scenes with both short and long time intervals and varying initial offsets. A LiDAR odometry experiment was performed on the KITTI dataset and on the authors' extracted urban dataset with a highly dynamic urban road; the average horizontal position error relative to the distance traveled was 0.45% and 0.55%, respectively.
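A single layer of the RANSAC idea can be sketched for a toy case: estimating a pure 2-D translation between corresponding point sets that contain outliers (e.g. points on moving vehicles). The paper's multi-layer nested framework iterates and refines this scheme; only one layer with a translation-only model is shown here, and the iteration count and tolerance are illustrative.

```python
import random

def ransac_translation(src, dst, iters=100, tol=0.5, seed=0):
    """Single-layer RANSAC sketch for 2-D registration (translation only).

    src, dst: parallel lists of (x, y) correspondences, possibly
    contaminated by outliers. Returns the refined translation and the
    indices of the inlier correspondences.
    """
    rng = random.Random(seed)
    best_inliers, best_t = [], (0.0, 0.0)
    for _ in range(iters):
        # Minimal sample: one correspondence fixes a translation.
        i = rng.randrange(len(src))
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = [j for j, ((sx, sy), (dx, dy)) in enumerate(zip(src, dst))
                   if abs(sx + tx - dx) < tol and abs(sy + ty - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_t = inliers, (tx, ty)
    # Refine: average the translation over the consensus set.
    n = len(best_inliers)
    tx = sum(dst[j][0] - src[j][0] for j in best_inliers) / n
    ty = sum(dst[j][1] - src[j][1] for j in best_inliers) / n
    return (tx, ty), best_inliers
```

A nested variant would re-run the same loop on the consensus set with a richer motion model (e.g. rotation plus translation) and a tighter tolerance.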


2019 ◽  
Vol 4 (2) ◽  
pp. 2235-2241 ◽  
Author(s):  
Ming-Yuan Yu ◽  
Ram Vasudevan ◽  
Matthew Johnson-Roberson

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261053
Author(s):  
Gang Wang ◽  
Saihang Gao ◽  
Han Ding ◽  
Hao Zhang ◽  
Hongmin Cai

Accurate and reliable state estimation and mapping are the foundation of most autonomous driving systems. In recent years, researchers have focused on pose estimation through geometric feature matching. However, most works in the literature assume a static scenario, and registration based on geometric features is vulnerable to interference from dynamic objects, resulting in a decline in accuracy. With the development of deep semantic segmentation networks, we can conveniently obtain semantic information from the point cloud in addition to geometric information. Semantic features can serve as an accessory to geometric features, improving the performance of odometry and loop closure detection. In a more realistic environment, semantic information can filter out dynamic objects in the data, such as pedestrians and vehicles, which otherwise cause information redundancy in the generated map and map-based localization failure. In this paper, we propose LiDAR inertial odometry with loop closure combined with semantic information (LIO-CSI), which integrates semantic information to facilitate both the front-end process and loop closure detection. First, we perform a local optimization on the semantic labels provided by the Sparse Point-Voxel Neural Architecture Search (SPVNAS) network. The optimized semantic information is incorporated into the front-end of tightly-coupled light detection and ranging (LiDAR) inertial odometry via smoothing and mapping (LIO-SAM), which allows us to filter out dynamic objects and improve the accuracy of point cloud registration. Then, we propose a semantic-assisted scan-context method to improve the accuracy and robustness of loop closure detection. Experiments were conducted on the widely used KITTI dataset and a self-collected dataset from the Jilin University (JLU) campus. The experimental results demonstrate that our method outperforms the purely geometric method, especially in dynamic scenarios, and that it generalizes well.
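The semantic filtering step can be sketched as dropping every point whose label names a movable class before registration. The class names below are placeholders loosely inspired by SemanticKITTI-style labels, not the exact set used by LIO-CSI.

```python
# Placeholder set of movable classes; the actual system's label
# taxonomy (from SPVNAS) may differ.
DYNAMIC_CLASSES = {"car", "truck", "person", "bicyclist"}

def filter_dynamic(points, labels, dynamic=DYNAMIC_CLASSES):
    """Drop points whose semantic label names a movable object.

    points: list of (x, y, z) coordinates; labels: parallel list of
    class names. Returns the static subset that would feed the
    registration and mapping back-end.
    """
    return [p for p, label in zip(points, labels) if label not in dynamic]
```

Running registration only on the returned static subset is what removes the vehicle and pedestrian "ghosts" from the generated map.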


Author(s):  
Andreas Hartmannsgruber ◽  
Julien Seitz ◽  
Matthias Schreier ◽  
Matthias Strauss ◽  
Norbert Balbierer ◽  
...  

2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make control decisions in order to arrive at the destination without driver intervention. In such an environment, if sensor data forgery occurs, it could lead to a critical accident that threatens the life of the driver. In this paper, a sensor fusion algorithm that obtains accurate driving information while remaining resilient against data forgery and modulation is discussed.
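One generic building block for forgery resilience is median fusion over redundant sensors: with 2f+1 sensors measuring the same quantity, up to f arbitrarily forged readings cannot push the fused value outside the range spanned by the honest sensors. This illustrates the general idea only, not the paper's specific algorithm.

```python
def resilient_fuse(readings):
    """Median-based fusion sketch over redundant sensor readings.

    With an odd number of sensors 2f+1, the median tolerates up to f
    forged values: the result always lies between honest readings.
    """
    if not readings:
        raise ValueError("at least one reading required")
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]
```

For example, a single forged speed reading of 999.0 among five sensors near 10.0 leaves the fused value with the honest majority.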

