An Improved Monocular PL-SLAM Method with Point-Line Feature Fusion under Low-Texture Environment

2021 ◽  
Author(s):  
Gaochao Yang ◽  
Qing Wang ◽  
Pengfei Liu ◽  
Huan Zhang

2018 ◽
Vol 55 (2) ◽  
pp. 021501 ◽  
Author(s):  
Yuan Meng ◽ 
Li Aihua ◽ 
Zheng Yong ◽ 
Cui Zhigao ◽ 
Bao Zhengqiang

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1196
Author(s):  
Gang Li ◽  
Yawen Zeng ◽  
Huilan Huang ◽  
Shaojian Song ◽  
Bin Liu ◽  
...  

Traditional simultaneous localization and mapping (SLAM) systems use static points in the environment as features for real-time localization and mapping. When few point features are available, such systems become unreliable. A feasible remedy is to introduce line features. However, in complex scenes rich in line segments, line-segment descriptors are weakly discriminative, which can cause incorrect data association of line segments, introducing errors into the system and aggravating its cumulative error. To address this problem, this paper proposes a point-line stereo visual SLAM system that incorporates semantic invariants. The system improves the accuracy of line-feature matching by fusing line features with semantically invariant image information. When the error function is defined, the semantic invariant is fused with the reprojection error, and this semantic constraint reduces the cumulative pose error during long-term tracking. Experiments on the Office sequence of the TartanAir dataset and on the KITTI dataset show that the system improves line-feature matching accuracy and suppresses the cumulative error of the SLAM system to some extent, with mean relative pose errors (RPE) of 1.38 and 0.0593 m, respectively.
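The abstract above fuses a semantic consistency term with the standard reprojection error. The following Python sketch shows one plausible form of such a combined residual; it is a minimal illustration under assumptions, not the authors' implementation, and the weight lambda_sem and the binary consistency penalty are invented for exposition.

```python
import numpy as np

def project(K, T, X):
    """Project world point X into the image via pose T (3x4) and intrinsics K."""
    x_cam = T[:, :3] @ X + T[:, 3]
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

def semantic_reprojection_error(K, T, X, u_obs, sem_map, sem_label, lambda_sem=1.0):
    """Reprojection residual augmented with a semantic consistency penalty.

    sem_map:   per-pixel semantic labels of the current frame (H x W array)
    sem_label: the class assigned to the landmark when it was first observed
    """
    u_proj = project(K, T, X)
    e_reproj = u_obs - u_proj  # standard 2-vector reprojection residual
    r, c = int(round(u_proj[1])), int(round(u_proj[0]))
    inside = 0 <= r < sem_map.shape[0] and 0 <= c < sem_map.shape[1]
    # Penalize projections landing on a pixel whose class disagrees with the
    # landmark's time-invariant semantic label.
    e_sem = 0.0 if (inside and sem_map[r, c] == sem_label) else 1.0
    return np.concatenate([e_reproj, [lambda_sem * e_sem]])
```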


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3559 ◽  
Author(s):  
Runzhi Wang ◽  
Kaichang Di ◽  
Wenhui Wan ◽  
Yongkang Wang

In the study of indoor simultaneous localization and mapping (SLAM) with a stereo camera, two primary feature types, points and line segments, have been widely used to compute the camera pose. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. This paper proposes an improved indoor visual SLAM method that better exploits the complementary advantages of point and line-segment features and achieves robust results in difficult environments. First, point and line-segment features are automatically extracted and matched to build two kinds of projection models. Second, for the optimization of line-segment features, an angle-observation term is minimized in addition to the traditional reprojection error of the endpoints. Finally, a motion-estimation model that adapts to the motion state of the camera is applied to build a new combined Hessian matrix and gradient vector for iterative pose estimation. The proposed method was tested on the EuRoC MAV datasets and on sequences captured with our stereo camera. The experimental results demonstrate that our improved point-line feature based visual SLAM method increases localization accuracy when the camera undergoes rapid rotation or violent shaking.
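The second step above augments the endpoint reprojection error of a line segment with an angle observation comparing the observed and projected segment directions. A minimal Python sketch of such a combined residual follows; the weight w_angle and the exact parameterization are assumptions, not the paper's code.

```python
import numpy as np

def project(K, T, X):
    """Project world point X into the image via pose T (3x4) and intrinsics K."""
    x = K @ (T[:, :3] @ X + T[:, 3])
    return x[:2] / x[2]

def line_residual(K, T, P_start, P_end, obs_start, obs_end, w_angle=1.0):
    """Endpoint reprojection error plus an angle-observation term."""
    p0, p1 = project(K, T, P_start), project(K, T, P_end)
    e_endpoints = np.concatenate([obs_start - p0, obs_end - p1])
    # Angle between the observed and the projected segment directions.
    d_obs, d_proj = obs_end - obs_start, p1 - p0
    cos_a = np.dot(d_obs, d_proj) / (np.linalg.norm(d_obs) * np.linalg.norm(d_proj) + 1e-12)
    e_angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return np.concatenate([e_endpoints, [w_angle * e_angle]])
```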


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4604
Author(s):  
Fei Zhou ◽  
Limin Zhang ◽  
Chaolong Deng ◽  
Xinyue Fan

Traditional visual simultaneous localization and mapping (SLAM) systems rely on point features to estimate camera trajectories, but feature-based systems are usually not robust in complex environments with weak textures or pronounced brightness changes. To solve this problem, we exploited additional structural information of the environment by introducing line-segment features and designed a monocular visual SLAM system that combines points and line segments, effectively compensating for the shortcomings of positioning based on point features alone. First, an ORB algorithm with a locally adaptive threshold is proposed. Second, we not only optimize the extracted line features but also add a screening step before traditional descriptor matching, so that point-feature matching results guide line-feature matching. Finally, a weighting scheme is introduced: when the optimization cost function is constructed, weights are allocated according to the richness and dispersion of the features. Our evaluation on publicly available datasets demonstrates that the improved point-line feature method is competitive with state-of-the-art methods. In addition, the trajectory plots show significantly reduced drift and tracking loss, confirming that our system increases the robustness of SLAM.
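The first step proposes an ORB detector with a locally adaptive threshold. The sketch below shows one plausible reading of that idea with OpenCV: the image is tiled, and each tile's FAST threshold is scaled by its local contrast so that weakly textured regions still yield features. The grid size, base threshold, and contrast proxy are assumptions, not the authors' exact algorithm.

```python
import cv2
import numpy as np

def adaptive_orb(gray, grid=(4, 4), base_thresh=20, feats_per_cell=50):
    """Detect ORB keypoints with a per-tile FAST threshold scaled by local contrast."""
    H, W = gray.shape
    ch, cw = H // grid[0], W // grid[1]
    keypoints = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = gray[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            # Scale the FAST threshold by the tile's standard deviation,
            # a crude proxy for local contrast.
            t = max(5, int(base_thresh * cell.std() / (gray.std() + 1e-6)))
            detector = cv2.ORB_create(nfeatures=feats_per_cell, fastThreshold=t)
            for kp in detector.detect(cell, None):
                kp.pt = (kp.pt[0] + j * cw, kp.pt[1] + i * ch)  # back to image coords
                keypoints.append(kp)
    # Compute descriptors once on the full image for all surviving keypoints.
    return cv2.ORB_create().compute(gray, keypoints)
```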


Drones ◽  
2022 ◽  
Vol 6 (1) ◽  
pp. 23
Author(s):  
Tong Zhang ◽  
Chunjiang Liu ◽  
Jiaqi Li ◽  
Minghui Pang ◽  
Mingang Wang

Traditional point-line visual-inertial simultaneous localization and mapping (SLAM) systems suffer from weak accuracy and cannot run in real time under weak indoor texture and changing illumination. This paper therefore proposes a point-line visual-inertial SLAM method for indoor scenes with weak texture and varying illumination. First, building on bilateral filtering, we apply Speeded-Up Robust Features (SURF) point extraction and the Fast Library for Approximate Nearest Neighbors (FLANN) matching algorithm to improve the robustness of point-feature extraction. Second, we establish a minimum-density threshold and a length-suppression parameter-selection strategy for line features, and take geometric constraints into account in line-feature matching to improve the efficiency of line-feature processing. The parameters and biases of the visual-inertial system are initialized by maximum a posteriori estimation. Finally, simulation experiments compare the proposed method with the traditional tightly coupled monocular visual-inertial odometry using point and line features (PL-VIO). The results demonstrate that the proposed method runs in real time, with positioning accuracy 22% higher on average and 40% higher in scenarios with illumination changes and blurred images.
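The first step combines bilateral filtering, SURF extraction, and FLANN matching. A minimal OpenCV sketch of such a front end follows; note that SURF lives in the opencv-contrib nonfree modules, and every parameter value here is an assumption rather than the paper's setting.

```python
import cv2

def match_surf_points(img1, img2):
    """Bilateral filtering + SURF detection + FLANN matching with a ratio test."""
    # Bilateral filtering suppresses noise while preserving edges.
    f1 = cv2.bilateralFilter(img1, 9, 75, 75)
    f2 = cv2.bilateralFilter(img2, 9, 75, 75)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(f1, None)
    kp2, des2 = surf.detectAndCompute(f2, None)

    # A KD-tree FLANN index suits SURF's floating-point descriptors.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    knn = flann.knnMatch(des1, des2, k=2)
    # Lowe's ratio test rejects ambiguous correspondences.
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    return kp1, kp2, good
```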


Author(s):  
S. Cheng ◽  
J. Yang ◽  
Z. Kang ◽  
P. H. Akwensi

Since Global Navigation Satellite System (GNSS) signals may be unavailable in complex dynamic environments, visual SLAM systems have gained importance in robotics and its applications in recent years. SLAM systems based on point-feature tracking show strong robustness in many scenarios. Nevertheless, point features may be limited in quantity or poorly distributed in low-textured scenes, which degrades the behaviour of these approaches. Compared with point features, line features, as higher-dimensional features, can provide more environmental information in complex scenes. In fact, line segments are usually plentiful in any human-made environment, which suggests that scene characteristics remarkably affect the performance of point-line feature based visual SLAM systems. Therefore, this paper develops a scene-assisted point-line feature based visual SLAM method for autonomous flight in unknown indoor environments. First, ORB point features and Line Segment Detector (LSD)-based line features are extracted and matched respectively to build two types of projection models. Second, in order to combine point and line features effectively, a Convolutional Neural Network (CNN)-based model is pre-trained on scene characteristics to weight their associated projection errors. Finally, camera motion is estimated through nonlinear minimization of the weighted projection errors between the observed features and those projected from previous frames. To evaluate the performance of the proposed method, experiments were conducted on the public EuRoC dataset. Experimental results indicate that the proposed method outperforms the conventional point-line feature based visual SLAM method in localization accuracy, especially in low-textured scenes.
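The final step estimates camera motion by nonlinear minimization of weighted point and line projection errors. The SciPy sketch below shows the general shape of that weighted least-squares step; the scalar weights stand in for the pre-trained CNN's output, and the residual interface of the feature objects is hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def weighted_residuals(pose6, point_feats, line_feats, w_point, w_line):
    """pose6: axis-angle rotation (3) + translation (3).
    Each feature object exposes residual(pose6) -> per-feature residual vector
    (a hypothetical interface used only for this sketch)."""
    res = [w_point * f.residual(pose6) for f in point_feats]
    res += [w_line * f.residual(pose6) for f in line_feats]
    return np.concatenate(res)

def estimate_pose(pose_init, point_feats, line_feats, w_point, w_line):
    """Minimize the jointly weighted projection errors for the current frame."""
    sol = least_squares(weighted_residuals, pose_init,
                        args=(point_feats, line_feats, w_point, w_line),
                        loss="huber")  # robust kernel dampens outlier matches
    return sol.x
```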


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Dan Li ◽  
Shuang Liu ◽  
Weilai Xiang ◽  
Qiwei Tan ◽  
Kaicheng Yuan ◽  
...  
