Improved Point-Line Feature Based Visual SLAM Method for Indoor Scenes

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3559 ◽  
Author(s):  
Runzhi Wang ◽  
Kaichang Di ◽  
Wenhui Wan ◽  
Yongkang Wang

In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two primary feature types, points and line segments, have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, we propose an improved indoor visual SLAM method that better exploits the advantages of point and line segment features and achieves robust results in difficult environments. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization of line segment features, we minimize an angle observation in addition to the traditional re-projection error of the endpoints. Finally, our motion estimation model, which adapts to the motion state of the camera, is applied to build a new combined Hessian matrix and gradient vector for iterated pose estimation, as sketched below. Furthermore, our proposal has been tested on the EuRoC MAV datasets and on sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line feature based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.
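
As a minimal illustration (our own sketch, not the authors' implementation), the combined point/line cost can be written as stacked residuals: standard re-projection errors for points, and endpoint re-projection errors plus an angle term for line segments. Stacking these residuals into a Gauss-Newton solver then yields the combined Hessian J^T J and gradient J^T r used in the iterated pose estimation.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3-D points X (N, 3) under pose (R, t)."""
    Xc = (R @ X.T + t.reshape(3, 1)).T
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

def point_residuals(K, R, t, X, uv_obs):
    """Classic point re-projection error against observed pixels uv_obs."""
    return (project(K, R, t, X) - uv_obs).ravel()

def line_residuals(K, R, t, P, Q, p_obs, q_obs, w_angle=1.0):
    """Endpoint re-projection error plus the angle between the projected
    segment direction and the observed one (the extra angle observation)."""
    p_proj, q_proj = project(K, R, t, P), project(K, R, t, Q)
    endpoint = np.concatenate([(p_proj - p_obs).ravel(),
                               (q_proj - q_obs).ravel()])
    d_proj, d_obs = q_proj - p_proj, q_obs - p_obs
    cos = np.sum(d_proj * d_obs, axis=1) / (
        np.linalg.norm(d_proj, axis=1) * np.linalg.norm(d_obs, axis=1) + 1e-12)
    angle = w_angle * np.arccos(np.clip(cos, -1.0, 1.0))
    return np.concatenate([endpoint, angle])
```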

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4604
Author(s):  
Fei Zhou ◽  
Limin Zhang ◽  
Chaolong Deng ◽  
Xinyue Fan

Traditional visual simultaneous localization and mapping (SLAM) systems rely on point features to estimate camera trajectories. However, feature-based systems are usually not robust in complex environments with weak textures or obvious brightness changes. To solve this problem, we exploited more of the environment's structural information by introducing line segment features and designed a monocular visual SLAM system that combines points and line segments to effectively make up for the shortcomings of positioning based only on point features. First, an ORB algorithm based on a locally adaptive threshold is proposed (a sketch of this step follows). Subsequently, we not only optimized the extracted line features but also added a screening step before the traditional descriptor matching, combining the point feature matching results with the line feature matching. Finally, a weighting scheme was introduced: when constructing the optimized cost function, weights are allocated according to the richness and dispersion of the features. Our evaluation on publicly available datasets demonstrates that the improved point-line feature method is competitive with state-of-the-art methods. In addition, the estimated trajectories show significantly reduced drift and tracking loss, which indicates that our system increases the robustness of SLAM.
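
To illustrate the locally adaptive threshold idea, here is a rough sketch using OpenCV's ORB; the grid layout and the contrast-based threshold rule are our illustrative assumptions, not the paper's exact choices.

```python
import cv2
import numpy as np

def adaptive_orb(gray, rows=4, cols=4, nfeatures_per_cell=100):
    """Extract ORB features per grid cell with a locally derived FAST threshold."""
    h, w = gray.shape
    keypoints, descs = [], []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            cell = gray[y0:y1, x0:x1]
            # Derive the FAST threshold from local contrast (illustrative rule).
            thr = int(np.clip(0.5 * cell.std(), 5, 40))
            orb = cv2.ORB_create(nfeatures=nfeatures_per_cell, fastThreshold=thr)
            kps, ds = orb.detectAndCompute(cell, None)
            for kp in kps:
                kp.pt = (kp.pt[0] + x0, kp.pt[1] + y0)  # back to image coords
            keypoints.extend(kps)
            if ds is not None:
                descs.append(ds)
    return keypoints, np.vstack(descs) if descs else None
```

Low-contrast cells get a lower FAST threshold, so weakly textured regions still contribute features instead of leaving the distribution lopsided.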


Author(s):  
S. Cheng ◽  
J. Yang ◽  
Z. Kang ◽  
P. H. Akwensi

Since Global Navigation Satellite Systems may be unavailable in complex dynamic environments, visual SLAM systems have gained importance in robotics and its applications in recent years. SLAM systems based on point feature tracking show strong robustness in many scenarios. Nevertheless, point features may be limited in quantity or poorly distributed in low-textured scenes, which degrades the behaviour of these approaches. Compared with point features, line features, as higher-dimensional features, can provide more environmental information in complex scenes. In fact, line segments are usually abundant in any human-made environment, which suggests that scene characteristics remarkably affect the performance of point-line feature based visual SLAM systems. Therefore, this paper develops a scene-assisted point-line feature based visual SLAM method for autonomous flight in unknown indoor environments. First, ORB point features and Line Segment Detector (LSD)-based line features are extracted and matched respectively to build two types of projection models. Second, in order to effectively combine point and line features, a Convolutional Neural Network (CNN)-based model is pre-trained on the scene characteristics to weight the associated projection errors. Finally, camera motion is estimated through non-linear minimization of the weighted projection errors between the observed features and those projected from previous frames; a sketch of this weighted optimization follows. To evaluate the performance of the proposed method, experiments were conducted on the public EuRoC dataset. Experimental results indicate that the proposed method outperforms the conventional point-line feature based visual SLAM method in localization accuracy, especially in low-textured scenes.
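
The sketch below illustrates the weighted pose refinement step under stated assumptions: the weights w_pt and w_ln are stand-ins for the output of the pre-trained CNN (hypothetical here), lines are represented by their 3-D endpoints, and the pose is refined by non-linear least squares over the stacked, weighted residuals.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def _project(K, R, t, X):
    """Pinhole projection of 3-D points X (N, 3)."""
    Xc = (R @ X.T + t[:, None]).T
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

def weighted_residuals(pose6, K, pts3d, pts2d, ep3d, ep2d, w_pt, w_ln):
    """pose6 = rotation vector (3) + translation (3); ep3d/ep2d are
    line-segment endpoints treated as extra correspondences."""
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    t = pose6[3:]
    r_pt = (_project(K, R, t, pts3d) - pts2d).ravel()
    r_ln = (_project(K, R, t, ep3d) - ep2d).ravel()
    # sqrt of the scene-dependent weights, so the squared cost is weighted.
    return np.concatenate([np.sqrt(w_pt) * r_pt, np.sqrt(w_ln) * r_ln])

# pose = least_squares(weighted_residuals, pose0,
#                      args=(K, pts3d, pts2d, ep3d, ep2d, w_pt, w_ln)).x
```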


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142090419 ◽  
Author(s):  
Baofu Fang ◽  
Zhiqiang Zhan

Visual simultaneous localization and mapping (SLAM) is a well-known research area in robotics. Traditional point feature-based approaches face many challenges, such as insufficient point features, motion jitter, and low localization accuracy in low-texture scenes, which reduce the performance of the algorithms. In this article, we propose an RGB-D SLAM system, named Point-Line Fusion (PLF)-SLAM, to handle these situations. We utilize both points and line segments throughout our pipeline. Specifically, we present a new line segment extraction method that solves the overlap and branching problems of line segments (a merging sketch follows), and a more rigorous screening mechanism is proposed for the line matching stage. Instead of minimizing only the reprojection error of points, we introduce a reprojection error based on both points and lines to obtain a more accurate tracking pose. In addition, we propose a solution for handling jittered frames, which greatly improves the tracking success rate and the availability of the system. We thoroughly evaluate our system on the Technische Universität München (TUM) RGB-D benchmark and compare it with ORB-SLAM2, presumably the current state-of-the-art solution. The experiments show that our system has better accuracy and robustness than ORB-SLAM2.
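
A simplified sketch of the kind of post-processing applied to raw line segments: merge pairs that are nearly collinear and close together, so one physical edge is not reported as several fragments. The angle and distance thresholds are illustrative, not the paper's values.

```python
import numpy as np

def try_merge(s1, s2, max_angle_deg=3.0, max_perp=2.0):
    """s1, s2: segments as (x1, y1, x2, y2); returns a merged segment or None."""
    p = np.array([s1[:2], s1[2:], s2[:2], s2[2:]], dtype=float)
    d1, d2 = p[1] - p[0], p[3] - p[2]
    cos = abs(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)
    if np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))) > max_angle_deg:
        return None                      # directions differ too much
    axis = d1 / np.linalg.norm(d1)
    # Perpendicular distance of s2's endpoints from the line through s1.
    perp = np.abs(axis[0] * (p[2:, 1] - p[0, 1]) - axis[1] * (p[2:, 0] - p[0, 0]))
    if perp.max() > max_perp:
        return None                      # not collinear enough to be one edge
    proj = (p - p[0]) @ axis             # 1-D coordinates along the shared line
    lo, hi = proj.argmin(), proj.argmax()
    return (*p[lo], *p[hi])              # endpoints spanning both fragments
```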


2014 ◽  
Vol 989-994 ◽  
pp. 2651-2654
Author(s):  
Yan Song ◽  
Bo He

In this paper, a novel feature-based real-time visual simultaneous localization and mapping (SLAM) system is proposed. The system generates colored 3-D reconstruction models and a 3-D estimated trajectory using a Kinect-style camera. The Microsoft Kinect, a low-priced 3-D camera, is the only sensor used in our experiments. Kinect-style sensors provide RGB-D (red-green-blue depth) data, which contain a 2D image together with per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is the algorithm used to extract image features, chosen to speed up the whole system (a front-end sketch follows). Our system can be used to generate detailed 3-D reconstruction models, and an estimated 3-D trajectory of the sensor is also given. The experimental results demonstrate that our system performs robustly and effectively, both in producing detailed 3-D models and in recovering the camera trajectory.
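
A minimal sketch of such an RGB-D front end (our assumptions: OpenCV, Kinect depth in millimetres, and placeholder intrinsics fx, fy, cx, cy): extract ORB features, match against the previous frame, and lift matches to 3-D with the per-pixel depth.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_3d_matches(rgb_prev, rgb_cur, depth_cur, fx, fy, cx, cy):
    """Return (3-D points in the current frame, matched 2-D pixels in the previous)."""
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(rgb_prev, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(rgb_cur, cv2.COLOR_BGR2GRAY), None)
    pts3d, pts2d = [], []
    for m in matcher.match(des1, des2):
        u, v = map(int, kp2[m.trainIdx].pt)
        z = depth_cur[v, u] / 1000.0      # assumed: depth stored in millimetres
        if z <= 0:
            continue                      # no valid depth at this pixel
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp1[m.queryIdx].pt)
    return np.array(pts3d), np.array(pts2d)

# The (3-D, 2-D) correspondences can then be fed to cv2.solvePnPRansac
# to estimate the relative camera motion between the two frames.
```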


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4922
Author(s):  
Like Cao ◽  
Jie Ling ◽  
Xiaohui Xiao

Noise appears in images captured by real cameras. This paper studies the influence of noise on monocular feature-based visual Simultaneous Localization and Mapping (SLAM). First, an open-source synthetic dataset with different noise levels is introduced. Then the images in the dataset are denoised using the Fast and Flexible Denoising convolutional neural Network (FFDNet), and the matching performances of Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB), which are commonly used in feature-based SLAM, are compared. The results show that ORB has a higher correct matching rate than SIFT and SURF, and that denoised images have a higher correct matching rate than noisy images. Next, the Absolute Trajectory Error (ATE) of the noisy and denoised sequences is evaluated on ORB-SLAM2 (a sketch of the ATE computation follows), and the results show that the denoised sequences perform better than the noisy sequences at every noise level. Finally, the completely clean sequence in the dataset and the sequences in the KITTI dataset are denoised and compared with the original sequences through comprehensive experiments. For the clean sequence, the Root-Mean-Square Error (RMSE) of the ATE after denoising decreased by 16.75%; for the KITTI sequences, 7 out of 10 have a lower RMSE than the original sequences. These results show that denoised images can achieve higher accuracy in monocular feature-based visual SLAM under certain conditions.
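
For reference, a short sketch of the standard ATE computation assumed here: rigidly align the estimated trajectory to ground truth (Umeyama/Kabsch fit, no scale), then report the root-mean-square of the remaining position errors.

```python
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 3) arrays of time-associated camera positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Kabsch: best rotation mapping centered estimates onto ground truth.
    U, _, Vt = np.linalg.svd((gt - mu_g).T @ (est - mu_e))
    S = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # avoid a reflection
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    err = gt - (est @ R.T + t)           # residuals after alignment
    return np.sqrt((err ** 2).sum(axis=1).mean())
```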


2020 ◽  
Author(s):  
Hudson Bruno ◽  
Esther Colombini

The Simultaneous Localization and Mapping (SLAM) problem addresses the possibility of a robot localizing itself in an unknown environment while simultaneously building a consistent map of that environment. Recently, cameras have been successfully used to capture the environment's features and perform SLAM, which is referred to as visual SLAM (VSLAM). However, classical VSLAM algorithms can easily fail when the robot motion or the environment is too challenging. Although new approaches based on Deep Neural Networks (DNNs) have achieved promising results in VSLAM, they are still unable to outperform traditional methods. To leverage the robustness of deep learning to enhance traditional VSLAM systems, we propose to combine the potential of deep learning-based feature descriptors with traditional geometry-based VSLAM, building a new VSLAM system called LIFT-SLAM (a front-end sketch follows). Experiments conducted on the KITTI and EuRoC datasets show that deep learning can improve the performance of traditional VSLAM systems, as the proposed approach achieves results comparable to the state of the art while being robust to sensor noise. We further enhance the proposed VSLAM pipeline by avoiding parameter tuning for specific datasets with an adaptive approach, and we evaluate how transfer learning affects the quality of the extracted features.
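
A schematic of how learned descriptors can drop into a geometric VSLAM front end, in the spirit of LIFT-SLAM but with hypothetical names: a network (not shown) yields keypoints and float descriptors, which are matched with a ratio test and passed to a standard robust PnP solver, leaving the geometric back end unchanged.

```python
import cv2
import numpy as np

def track_with_learned_features(des_map, pts3d_map, kps_cur, des_cur, K):
    """des_map/pts3d_map: descriptors and 3-D positions of map points;
    kps_cur/des_cur: keypoints and descriptors from the learned detector."""
    # Float descriptors are matched with L2 distance plus Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_map, des_cur, k=2)
    good = [m for m, n in pairs if m.distance < 0.8 * n.distance]
    obj = np.float32([pts3d_map[m.queryIdx] for m in good])
    img = np.float32([kps_cur[m.trainIdx].pt for m in good])
    # Robust PnP recovers the camera pose from the 3-D/2-D correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None
```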


Drones ◽  
2022 ◽  
Vol 6 (1) ◽  
pp. 23
Author(s):  
Tong Zhang ◽  
Chunjiang Liu ◽  
Jiaqi Li ◽  
Minghui Pang ◽  
Mingang Wang

Traditional point-line feature visual-inertial simultaneous localization and mapping (SLAM) systems suffer from weak accuracy and cannot run in real time under weak indoor texture and changing illumination. This paper therefore proposes a point-line visual-inertial SLAM method for indoor scenes with weak texture and variable illumination. First, building on bilateral filtering, we apply Speeded Up Robust Features (SURF) point feature extraction and Fast Library for Approximate Nearest Neighbors (FLANN) matching to improve the robustness of the point feature extraction results (a sketch follows). Second, we establish a selection strategy for line features based on a minimum density threshold and a length suppression parameter, and use geometric constraints in line feature matching to improve the efficiency of line feature processing. The visual-inertial parameters and biases are initialized with a maximum a posteriori estimation method. Finally, simulation experiments are compared against the traditional tightly-coupled monocular visual-inertial odometry using point and line features (PL-VIO) algorithm. The results demonstrate that the proposed method operates effectively in real time, and that its positioning accuracy is 22% higher on average, and 40% higher in scenarios with illumination changes and blurred images.
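
A sketch of this point-feature pipeline under stated assumptions: bilateral filtering before SURF extraction, then FLANN matching with a ratio test. SURF lives in the non-free opencv-contrib module, and all parameter values here are illustrative rather than the paper's.

```python
import cv2

def surf_flann_matches(gray1, gray2):
    """Match SURF features between two grayscale frames after edge-preserving smoothing."""
    # Bilateral filtering suppresses noise while keeping edges for extraction.
    f1 = cv2.bilateralFilter(gray1, 9, 75, 75)
    f2 = cv2.bilateralFilter(gray2, 9, 75, 75)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(f1, None)
    kp2, des2 = surf.detectAndCompute(f2, None)
    # FLANN with a KD-tree index (algorithm=1) for float descriptors.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    pairs = flann.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < 0.7 * n.distance]
    return kp1, kp2, good
```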


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2468
Author(s):  
Ri Lin ◽  
Feng Zhang ◽  
Dejun Li ◽  
Mingwei Lin ◽  
Gengli Zhou ◽  
...  

Docking technology for autonomous underwater vehicles (AUVs) involves energy supply, data exchange and navigation, and plays an important role in extending the endurance of AUVs. The navigation method used in the transition between AUV homing and docking influences subsequent tasks, so improving the accuracy of navigation in this stage is important. However, when using an ultra-short baseline (USBL), outliers and slow localization update rates can cause localization errors. Optical navigation methods using underwater lights and cameras are easily affected by ambient light. All of these factors may reduce the rate of successful docking. In this paper, an improved localization method based on multi-sensor information fusion is investigated. To improve the localization performance of AUVs under motion mutation and light variation, an improved underwater simultaneous localization and mapping algorithm based on ORB features (IU-ORBSLAM) is proposed. A nonlinear optimization method is proposed to jointly optimize the scale of the monocular visual odometry in IU-ORBSLAM and the AUV pose (a sketch of the scale-fitting idea follows). Localization tests and five docking missions were executed in a swimming pool. The localization results indicate that both the localization accuracy and the update rate are improved, and the 100% successful docking rate verifies the feasibility of the proposed localization method.
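
An illustrative sketch of the underlying scale problem (hypothetical names, not the paper's formulation, and assuming the two frames are already rotationally aligned): monocular visual odometry positions are only defined up to scale, so a scale factor and offset can be fitted against sparse USBL fixes by least squares.

```python
import numpy as np

def fit_monocular_scale(vo_pos, usbl_pos):
    """vo_pos, usbl_pos: (N, 3) positions sampled at common timestamps."""
    vo_c = vo_pos - vo_pos.mean(axis=0)
    us_c = usbl_pos - usbl_pos.mean(axis=0)
    # Closed-form least-squares scale: argmin_s ||s * vo_c - us_c||^2.
    s = (vo_c * us_c).sum() / (vo_c ** 2).sum()
    offset = usbl_pos.mean(axis=0) - s * vo_pos.mean(axis=0)
    return s, offset  # metrically scaled VO track: s * vo_pos + offset
```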

