DM-SLAM: A Feature-Based SLAM System for Rigid Dynamic Scenes

2020 ◽  
Vol 9 (4) ◽  
pp. 202
Author(s):  
Junhao Cheng ◽  
Zhi Wang ◽  
Hongyan Zhou ◽  
Li Li ◽  
Jian Yao

Most Simultaneous Localization and Mapping (SLAM) methods assume that environments are static. Such a strong assumption limits the application of most visual SLAM systems, since dynamic objects cause many wrong data associations during the SLAM process. To address this problem, this paper proposes DM-SLAM, a novel visual SLAM method that follows the pipeline of feature-based methods. DM-SLAM combines an instance segmentation network with optical flow information to improve localization accuracy in dynamic environments, and supports monocular, stereo, and RGB-D sensors. It consists of four modules: semantic segmentation, ego-motion estimation, dynamic point detection, and a feature-based SLAM framework. The semantic segmentation module obtains pixel-wise segmentation results for potentially dynamic objects, and the ego-motion estimation module calculates the initial pose. In the third module, two different strategies are presented to detect dynamic feature points, one for the RGB-D/stereo case and one for the monocular case. In the first case, feature points with depth information are reprojected into the current frame, and the reprojection offset vectors are used to distinguish the dynamic points. In the other case, the epipolar constraint is used to accomplish this task. The remaining static feature points are then fed into the fourth module. Experimental results on the public TUM and KITTI datasets demonstrate that DM-SLAM outperforms standard visual SLAM baselines in terms of accuracy in highly dynamic environments.
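The RGB-D/stereo dynamic-point test described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a pinhole camera model, known ego-motion (R, t), and pre-matched feature points; the function name and the pixel threshold are hypothetical.

```python
import numpy as np

def dynamic_point_mask(pts_prev, depths_prev, K, R, t, pts_curr, thresh=3.0):
    """Flag feature points whose reprojection offset exceeds a pixel threshold.

    pts_prev:    (N, 2) pixel coordinates in the previous frame
    depths_prev: (N,) depths of those points (RGB-D/stereo case)
    K: (3, 3) camera intrinsics; R, t: ego-motion from previous to current frame
    pts_curr: (N, 2) matched pixel coordinates in the current frame
    Returns a boolean mask, True where a point is judged dynamic.
    """
    # Back-project previous-frame pixels to 3D using their depths.
    ones = np.ones((pts_prev.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_prev, ones]).T).T
    X_prev = rays * depths_prev[:, None]
    # Transform into the current frame with the estimated ego-motion.
    X_curr = (R @ X_prev.T).T + t
    # Project into the current image plane.
    proj = (K @ X_curr.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    # A large reprojection offset means the point moved on its own.
    offsets = np.linalg.norm(proj - pts_curr, axis=1)
    return offsets > thresh
```

A point observed where the ego-motion alone predicts it is kept as static; one whose observation drifts away from the prediction is masked out before pose optimization.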

Author(s):  
Jiahui Huang ◽  
Sheng Yang ◽  
Zishuo Zhao ◽  
Yu-Kun Lai ◽  
Shi-Min Hu

We present a practical backend for stereo visual SLAM which can simultaneously discover individual rigid bodies and compute their motions in dynamic environments. While recent factor-graph-based state optimization algorithms have shown their ability to robustly solve SLAM problems by treating dynamic objects as outliers, the dynamic motions themselves are rarely considered. In this paper, we exploit the consensus of 3D motions among landmarks extracted from the same rigid body for clustering, and identify static and dynamic objects in a unified manner. Specifically, our algorithm builds a noise-aware motion affinity matrix from landmarks and uses agglomerative clustering to distinguish rigid bodies. Using decoupled factor graph optimization to revise their shapes and trajectories, we obtain an iterative scheme that updates cluster assignments and motion estimates reciprocally. Evaluations on both synthetic scenes and KITTI demonstrate the capability of our approach, and further experiments on online efficiency also show the effectiveness of our method for simultaneously tracking ego-motion and multiple objects.
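The clustering step can be illustrated with a small sketch: single-linkage agglomerative clustering on a matrix of pairwise motion distances, which at a fixed cut threshold is equivalent to finding connected components of the affinity graph. This is a much-simplified stand-in for the paper's noise-aware affinity matrix; the noise scale `sigma` and threshold `cut` are assumed values.

```python
import numpy as np

def cluster_rigid_bodies(motions, sigma=0.5, cut=1.0):
    """Group landmarks into rigid bodies by the consensus of their 3D motions.

    motions: (N, 3) per-landmark 3D displacement vectors between two frames.
    Landmarks whose motions agree (distance below `cut`, in units of the
    assumed noise scale `sigma`) end up in the same cluster.
    """
    n = len(motions)
    dist = np.linalg.norm(motions[:, None, :] - motions[None, :, :], axis=2) / sigma
    adj = dist < cut
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        # Flood-fill one connected component of the affinity graph.
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            for k in np.nonzero(adj[j] & (labels < 0))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels
```

The static background emerges as the (typically largest) cluster whose common motion matches the camera ego-motion; each remaining cluster is a candidate moving rigid body.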


Author(s):  
Tomoki HAKOTANI ◽  
Ryota SAKUMA ◽  
Masanao KOEDA ◽  
Akihiro HAMADA ◽  
Atsuro SAWADA ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5889
Author(s):  
Yu Zhang ◽  
Xiping Xu ◽  
Ning Zhang ◽  
Yaowen Lv

When a traditional visual SLAM system works in a dynamic environment, it is disturbed by dynamic objects and performs poorly. To overcome this interference, we propose a semantic SLAM system for catadioptric panoramic cameras in dynamic environments. A real-time instance segmentation network detects potential moving targets in the panoramic image. To find the truly dynamic targets, potential moving targets are verified against the sphere's epipolar constraints. When extracting feature points, the dynamic objects in the panoramic image are then masked out, and only static feature points are used to estimate the pose of the panoramic camera, improving the accuracy of pose estimation. To verify the performance of our system, experiments were conducted on public datasets. The experiments showed that in a highly dynamic environment, the accuracy of our system is significantly better than that of traditional algorithms. Measured by the RMSE of the absolute trajectory error, our system performed up to 96.3% better than traditional SLAM. Our catadioptric panoramic camera semantic SLAM system thus has higher accuracy and robustness in complex dynamic environments.
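For a panoramic camera, features are naturally represented as unit bearing vectors on the sphere, and the epipolar constraint b2 . ([t]x R b1) = 0 holds for static points. Below is a minimal sketch of such a check, assuming known relative motion (R, t) with the convention X2 = R X1 + t; the function name and any threshold applied to the residual are hypothetical.

```python
import numpy as np

def spherical_epipolar_residual(b1, b2, R, t):
    """Epipolar residual for bearing vectors on the unit sphere.

    b1, b2: (N, 3) unit bearing vectors in frames 1 and 2 (panoramic model).
    R, t: relative rotation and translation between the two frames.
    For a static point the residual |b2 . ([t]x R b1)| is near zero;
    a large residual marks the point as dynamic.
    """
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])  # skew-symmetric matrix of t
    E = tx @ R                          # essential matrix
    return np.abs(np.einsum("ni,ij,nj->n", b2, E, b1))
```

Thresholding this residual plays the same role for spherical images that the planar epipolar-line distance plays for pinhole cameras.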


2014 ◽  
Vol 989-994 ◽  
pp. 2651-2654
Author(s):  
Yan Song ◽  
Bo He

In this paper, a novel feature-based real-time visual Simultaneous Localization and Mapping (SLAM) system is proposed. The system generates colored 3D reconstruction models and a 3D estimated trajectory using a Kinect-style camera. The Microsoft Kinect, a low-priced 3D camera, is the only sensor used in our experiments. Kinect-style sensors provide RGB-D (red-green-blue depth) data, which contain a 2D image and per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is the algorithm used to extract image features and speed up the whole system. Our system can be used to generate detailed 3D reconstruction models, and an estimated 3D trajectory of the sensor is also given. The results of the experiments demonstrate that our system performs robustly and effectively in both building detailed 3D models and recovering the camera trajectory.
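ORB combines an oriented FAST corner detector with rotated BRIEF descriptors. The sketch below illustrates only the idea that makes it fast, not ORB itself: a BRIEF-style binary descriptor built from intensity comparisons at sampled point pairs, matched by Hamming distance. The sampling pattern here is random (ORB uses an orientation-steered, learned pattern), and keypoints are assumed to lie away from the image border.

```python
import numpy as np

rng = np.random.default_rng(0)
# BRIEF-style sampling pattern: 256 random point-pair offsets in a 31x31 patch
# (illustrative only; ORB uses a learned, rotation-aware pattern).
PAIRS = rng.integers(-15, 16, size=(256, 2, 2))

def brief_descriptor(img, kp):
    """256-bit binary descriptor: intensity comparisons at point pairs around kp."""
    y, x = kp
    a = img[y + PAIRS[:, 0, 0], x + PAIRS[:, 0, 1]]
    b = img[y + PAIRS[:, 1, 0], x + PAIRS[:, 1, 1]]
    return (a < b).astype(np.uint8)

def hamming_match(d1, d2):
    """Index of the descriptor in d2 closest to each row of d1 (Hamming distance)."""
    dist = (d1[:, None, :] != d2[None, :, :]).sum(axis=2)
    return dist.argmin(axis=1)
```

Binary descriptors and Hamming matching need only XOR/popcount-style operations, which is why an ORB front end keeps a full RGB-D pipeline real-time.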


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3714
Author(s):  
Guihua Liu ◽  
Weilin Zeng ◽  
Bo Feng ◽  
Feng Xu

Presently, although many impressive SLAM systems have achieved exceptional accuracy in real environments, most of them are only verified in static environments. For mobile robots and autonomous driving, however, dynamic objects in the scene can cause tracking failure or large deviations in pose estimation. In this paper, a general visual SLAM system for dynamic scenes with multiple sensors, called DMS-SLAM, is proposed. First, the combination of GMS and a sliding window is used to initialize the system, which eliminates the influence of dynamic objects and constructs a static initial 3D map. Then, the corresponding 3D points of the current frame in the local map are obtained by reprojection. These points are combined with the constant-speed model or the reference-frame model to estimate the pose of the current frame and update the 3D map points in the local map. Finally, the keyframes selected by the tracking module are combined with the GMS feature matching algorithm to add static 3D map points to the local map. DMS-SLAM implements pose tracking, closed-loop detection and relocalization based on the static 3D map points of the local map, and supports monocular, stereo and RGB-D visual sensors in dynamic scenes. Exhaustive evaluation on the public TUM and KITTI datasets demonstrates that DMS-SLAM outperforms state-of-the-art visual SLAM systems in both accuracy and speed in dynamic scenes.
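The constant-speed model mentioned above can be sketched in a few lines: the tracker assumes the inter-frame motion stays constant and applies the last relative transform once more to predict the current pose. This is a generic sketch with 4x4 homogeneous pose matrices, not the DMS-SLAM code.

```python
import numpy as np

def constant_velocity_predict(T_prev2, T_prev1):
    """Predict the current camera pose from the last two poses.

    T_prev2, T_prev1: 4x4 homogeneous world poses at frames k-2 and k-1.
    Assumes the inter-frame motion stays constant: the last relative
    transform is applied once more to give the prior pose for frame k.
    """
    rel = T_prev1 @ np.linalg.inv(T_prev2)   # motion from frame k-2 to k-1
    return rel @ T_prev1                      # apply it again for frame k
```

The predicted pose seeds the reprojection of local-map points into the current frame; the reference-frame model serves as a fallback when this prediction fails.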


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3699
Author(s):  
Masoud S. Bahraini ◽  
Ahmad B. Rad ◽  
Mohammad Bozorg

The important problem of Simultaneous Localization and Mapping (SLAM) in dynamic environments is less studied than its counterpart in static settings. In this paper, we present a solution to the feature-based SLAM problem in dynamic environments. We propose an algorithm that integrates SLAM with multi-target tracking (SLAMMTT) using a robust feature-tracking algorithm for dynamic environments. A novel implementation of the RANdom SAmple Consensus (RANSAC) method, referred to as multilevel-RANSAC (ML-RANSAC), is applied for multi-target tracking (MTT) within the Extended Kalman Filter (EKF) framework. We also apply machine learning to detect features from the input data and to distinguish moving from stationary objects. The data streams from LIDAR and vision sensors are fused in real time to detect objects and depth information. A practical experiment is designed to verify the performance of the algorithm in a dynamic environment. The unique feature of this algorithm is its ability to maintain tracking of features even when observations are intermittent, a situation in which many reported algorithms fail. Experimental validation indicates that the algorithm produces consistent estimates in a fast and robust manner, suggesting its feasibility for real-time applications.
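As a rough illustration of how RANSAC can separate stationary from moving features (a much-simplified stand-in for the multilevel ML-RANSAC described above), one can fit a single dominant background motion and label the features consistent with it as static. All names, the one-point model, and the thresholds are hypothetical.

```python
import numpy as np

def ransac_static_mask(flow, iters=100, thresh=0.5, seed=0):
    """Separate static from moving features with plain RANSAC.

    flow: (N, 2) displacement of each tracked feature between frames.
    Repeatedly hypothesizes one feature's displacement as the dominant
    (background) motion and keeps the hypothesis with the most inliers;
    inliers are labelled static, the rest moving.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(flow), dtype=bool)
    for _ in range(iters):
        candidate = flow[rng.integers(len(flow))]            # 1-point model
        mask = np.linalg.norm(flow - candidate, axis=1) < thresh
        if mask.sum() > best_mask.sum():                     # keep best consensus
            best_mask = mask
    return best_mask
```

A multilevel variant would repeat this on the outliers to peel off each moving target's motion in turn.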


2021 ◽  
Vol 13 (9) ◽  
pp. 1828
Author(s):  
Hongjian Wei ◽  
Yingping Huang ◽  
Fuzhi Hu ◽  
Baigan Zhao ◽  
Zhiyang Guo ◽  
...  

Motion estimation is crucial for predicting where other traffic participants will be at a given time, and accordingly for planning the route of the ego-vehicle. This paper presents a novel approach to estimating the motion state by using region-level instance segmentation and an extended Kalman filter (EKF). Motion estimation involves three stages: object detection, tracking, and parameter estimation. We first use region-level segmentation to accurately locate the object region for the latter two stages. The region-level segmentation combines color, temporal (optical flow), and spatial (depth) information as the basis for segmentation, using super-pixels and a Conditional Random Field. Optical flow is then employed to track the feature points within the object area. In the parameter estimation stage, we develop a relative motion model of the ego-vehicle and the object, and accordingly establish an EKF model for point tracking and parameter estimation. The EKF model integrates ego-motion, optical flow, and disparity to generate optimized motion parameters. During tracking and parameter estimation, we apply an edge-point constraint and a consistency constraint to eliminate outliers among the tracking points, so that the feature points used for tracking are guaranteed to lie within the object body and the parameter estimates are refined using inner points. Experiments have been conducted on the KITTI dataset, and the results demonstrate that our method presents excellent performance and outperforms other state-of-the-art methods in both object segmentation and parameter estimation.
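The EKF point-tracking idea can be sketched with a constant-velocity state model. With a linear model like this the EKF reduces to a plain Kalman filter, whereas the paper's filter additionally folds in ego-motion, optical flow, and disparity; the state layout and noise parameters below are assumed.

```python
import numpy as np

def ekf_step(x, P, z, dt=0.1, q=1e-3, r=0.05):
    """One predict/update cycle of a Kalman filter for a tracked point.

    State x = [px, py, vx, vy]; measurement z = observed [px, py].
    q and r are assumed process/measurement noise variances.
    Returns the updated state estimate and covariance.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                 # constant-velocity transition
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                # we observe position only
    Q = q * np.eye(4)
    R = r * np.eye(2)
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the measurement via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Even though only positions are measured, the filter recovers the velocity through the cross-covariance built up by the motion model, which is what makes the motion parameters observable.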


2021 ◽  
Vol 205 ◽  
pp. 106085
Author(s):  
Monire Sheikh Hosseini ◽  
Mahammad Hassan Moradi ◽  
Mahdi Tabassian ◽  
Jan D'hooge

2021 ◽  
Vol 11 (4) ◽  
pp. 1373
Author(s):  
Jingyu Zhang ◽  
Zhen Liu ◽  
Guangjun Zhang

Pose measurement is a necessary technology for UAV navigation, and accurate pose measurement is the most important guarantee of a UAV's stable flight. UAV pose measurement methods mostly rely on image matching with aircraft models or on 2D-3D point correspondences. These methods can introduce pose measurement errors due to inaccurate extraction of contours and key feature points. To solve these problems, a pose measurement method based on the structural characteristics of the aircraft's rigid skeleton is proposed in this paper. Depth information is introduced to guide and label the 2D feature points, eliminating feature mismatches, and to segment the region. The 3D points obtained from the labeled feature points are fitted to the spatial line equations of the rigid skeleton, and the UAV attitude is calculated by combining these lines with the geometric model. This method does not need cooperative identification of the aircraft model, and can stably measure the position and attitude of a short-range UAV in various environments. The effectiveness and reliability of the proposed method are verified by experiments on a visual simulation platform. The proposed method can help prevent aircraft collisions and ensure the safety of UAV navigation in autonomous refueling or formation flight.
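The skeleton-line fitting step can be sketched as a least-squares 3D line fit via SVD, with attitude angles read off the fitted direction. This is a generic illustration under an assumed angle convention, not the authors' exact formulation; the function names are hypothetical.

```python
import numpy as np

def fit_skeleton_line(points):
    """Least-squares 3D line fit through labeled skeleton points.

    points: (N, 3) 3D points belonging to one rigid-skeleton segment
    (e.g. the fuselage or a wing). Returns (centroid, unit direction).
    """
    c = points.mean(axis=0)
    # The first right-singular vector of the centred points is the
    # direction minimizing the sum of squared perpendicular distances.
    _, _, vt = np.linalg.svd(points - c)
    d = vt[0]
    return c, d / np.linalg.norm(d)

def yaw_pitch_from_direction(d):
    """Yaw and pitch (radians) of a unit direction, under an assumed convention."""
    yaw = np.arctan2(d[1], d[0])
    pitch = np.arcsin(np.clip(d[2], -1.0, 1.0))
    return yaw, pitch
```

Fitting lines rather than matching contours is what makes the estimate robust to imprecise individual feature points: errors average out along the segment.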

