Line Feature Based Extrinsic Calibration of LiDAR and Camera

Author(s): Jingjing Jiang, Peixin Xue, Shitao Chen, Ziyi Liu, Xuetao Zhang, et al.
2021, pp. 3939-3949
Author(s): Lihao Tao, Ling Pei, Tao Li, Danping Zou, Gabriele Ermacora, et al.

Author(s): J. M. Li, C. R. Li, M. Zhou, J. Hu, C. M. Yang

Since linear array sensors such as multispectral and hyperspectral sensors have great potential in disaster monitoring and geological survey, the quality of image geometric rectification must be guaranteed. Unlike the geometric rectification of airborne planar array images or multi-linear array images, exterior orientation elements need to be determined for each scan line of single linear array images. Internal distortion persists after applying GPS/IMU data directly to geometric rectification: straight lines may appear curved and jagged. A straight-line-feature-based geometric rectification algorithm was applied to solve this problem, whereby the exterior orientation elements were fitted by piecewise polynomials and evaluated with straight line features as constraints. However, because atmospheric turbulence during the flight is unstable, equal-length segments can hardly provide a good fit, resulting in limited precision improvement of the geometric rectification or, in the worst case, an iteration that fails to converge. To solve this problem, drawing on dynamic programming, a line-feature-based geometric rectification method with unequal segmentation is developed. The fitting error of the angle elements is minimized to determine the optimum segment boundaries; the exterior orientation elements of each segment are then fitted and evaluated with straight line features as constraints. The results indicate that the algorithm is effective in improving the precision of geometric rectification.
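The unequal segmentation described above can be sketched as a small dynamic program: each candidate segment is scored by the residual of a polynomial fit to the angle-element samples, and the partition minimizing the total fitting error is recovered by backtracking. The following is a minimal sketch, not the paper's implementation; the polynomial degree, minimum segment length, and function names are illustrative assumptions.

```python
import numpy as np

def segment_cost(t, y, deg=2):
    """Residual of fitting one polynomial to angle samples y(t)."""
    coeffs = np.polyfit(t, y, deg)
    return float(np.sum((np.polyval(coeffs, t) - y) ** 2))

def unequal_segmentation(t, y, n_segments, deg=2, min_len=5):
    """DP over boundary positions: minimise the total polynomial fit error.

    Returns (list of half-open (start, end) index pairs, total cost).
    """
    n = len(t)
    INF = float("inf")
    # best[k][j]: minimal cost of covering samples 0..j-1 with k segments
    best = [[INF] * (n + 1) for _ in range(n_segments + 1)]
    back = [[0] * (n + 1) for _ in range(n_segments + 1)]
    best[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k * min_len, n + 1):
            for i in range((k - 1) * min_len, j - min_len + 1):
                if best[k - 1][i] == INF:
                    continue
                c = best[k - 1][i] + segment_cost(t[i:j], y[i:j], deg)
                if c < best[k][j]:
                    best[k][j] = c
                    back[k][j] = i
    # backtrack the optimal boundaries
    bounds, j = [], n
    for k in range(n_segments, 0, -1):
        i = back[k][j]
        bounds.append((i, j))
        j = i
    return bounds[::-1], best[n_segments][n]
```

Because segment boundaries are chosen wherever the fit error demands them, a flight leg disturbed by turbulence naturally receives shorter segments than a stable one.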



Sensors, 2018, Vol 18 (10), pp. 3559
Author(s): Runzhi Wang, Kaichang Di, Wenhui Wan, Yongkang Wang

In the study of indoor simultaneous localization and mapping (SLAM) with a stereo camera, two primary feature types, points and line segments, have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, an improved indoor visual SLAM method is proposed that better exploits the advantages of point and line segment features and achieves robust results in difficult environments. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization of line segment features, we minimize an angle observation in addition to the traditional re-projection error of the endpoints. Finally, our motion estimation model, which adapts to the motion state of the camera, is applied to build a new combined Hessian matrix and gradient vector for iterated pose estimation. The proposal has been tested on the EuRoC MAV datasets and on sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line-feature-based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or strong shaking.
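The combined line-segment cost described above (endpoint re-projection error plus an angle observation) can be illustrated with a toy residual. This is a hedged sketch under assumed conventions, not the paper's implementation: the pinhole model, equal weighting of the two endpoints, and the `w_angle` weight are assumptions.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3-D points X (N,3) to pixel coordinates (N,2)."""
    Xc = X @ R.T + t
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def line_residual(K, R, t, P1, P2, q1, q2, w_angle=1.0):
    """Endpoint re-projection error plus an angle term between the
    observed 2-D segment (q1, q2) and the projected 3-D segment (P1, P2)."""
    p1 = project(K, R, t, P1[None])[0]
    p2 = project(K, R, t, P2[None])[0]
    e_pt = np.linalg.norm(p1 - q1) + np.linalg.norm(p2 - q2)
    d_proj = (p2 - p1) / np.linalg.norm(p2 - p1)
    d_obs = (q2 - q1) / np.linalg.norm(q2 - q1)
    # direction is sign-ambiguous, so compare via |cos|
    ang = np.arccos(np.clip(abs(d_proj @ d_obs), -1.0, 1.0))
    return e_pt + w_angle * ang
```

The angle term penalizes orientation mismatch even when the endpoints are poorly localized along the line, which is exactly the failure mode of endpoint-only costs under fast rotation.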


Sensors, 2021, Vol 21 (13), pp. 4604
Author(s): Fei Zhou, Limin Zhang, Chaolong Deng, Xinyue Fan

Traditional visual simultaneous localization and mapping (SLAM) systems rely on point features to estimate camera trajectories. However, feature-based systems are usually not robust in complex environments with weak textures or obvious brightness changes. To solve this problem, we exploit more of the environment's structural information by introducing line segment features, and we design a monocular visual SLAM system that combines points and line segments to effectively make up for the shortcomings of positioning based only on point features. First, an ORB algorithm based on a locally adaptive threshold is proposed. Subsequently, we not only optimize the extracted line features but also add a screening step before the traditional descriptor matching, so as to combine the point feature matching results with the line feature matching. Finally, a weighting scheme is introduced: when constructing the optimization cost function, weights are allocated according to the richness and dispersion of the features. Our evaluation on publicly available datasets demonstrates that the improved point-line feature method is competitive with state-of-the-art methods. In addition, the estimated trajectories show significantly reduced drift and loss, which proves that our system increases the robustness of SLAM.
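The weighting idea, allocating weights according to feature richness and dispersion, might look like the following sketch. The scoring rule (feature count times the normalized spread of feature positions) and all names are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def feature_weights(pts, line_mids, img_size, eps=1e-6):
    """Allocate point/line weights from richness (count) and dispersion
    (spread of positions over the image). Returns (w_point, w_line)."""
    def score(xy):
        if len(xy) == 0:
            return 0.0
        # per-axis standard deviation, normalized by the image size
        disp = np.std(xy, axis=0).sum() / sum(img_size)
        return len(xy) * (disp + eps)

    s_pt, s_ln = score(np.asarray(pts)), np.asarray(line_mids) is not None and score(np.asarray(line_mids))
    total = s_pt + s_ln + eps
    return s_pt / total, s_ln / total
```

With this rule, a frame with many well-spread points but only a few clustered line segments leans on the point term, and vice versa in a low-texture corridor dominated by lines.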


Author(s): S. Cheng, J. Yang, Z. Kang, P. H. Akwensi

Abstract. Since Global Navigation Satellite Systems may be unavailable in complex dynamic environments, visual SLAM systems have gained importance in robotics and its applications in recent years. SLAM systems based on point feature tracking show strong robustness in many scenarios. Nevertheless, point features over images might be limited in quantity or not well distributed in low-textured scenes, which makes the behaviour of these approaches deteriorate. Compared with point features, line features, as higher-dimensional features, can provide more environmental information in complex scenes. In fact, line segments are usually plentiful in any human-made environment, which suggests that scene characteristics remarkably affect the performance of point-line-feature-based visual SLAM systems. Therefore, this paper develops a scene-assisted point-line-feature-based visual SLAM method for autonomous flight in unknown indoor environments. First, ORB point features and Line Segment Detector (LSD)-based line features are extracted and matched respectively to build two types of projection models. Second, in order to combine point and line features effectively, a Convolutional Neural Network (CNN)-based model is pre-trained on the scene characteristics to weight the associated projection errors. Finally, camera motion is estimated through non-linear minimization of the weighted projection errors between the corresponding observed features and those projected from previous frames. To evaluate the performance of the proposed method, experiments were conducted on the public EuRoC dataset. Experimental results indicate that the proposed method outperforms the conventional point-line-feature-based visual SLAM method in localization accuracy, especially in low-textured scenes.
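The final step, non-linear minimization of scene-weighted point and line projection errors, can be sketched with a toy Gauss-Newton solver. For brevity this assumes a translation-only pose (R = I) and takes the scene-derived weights as given scalars `w_pt` and `w_ln`; the CNN itself is omitted, and all names and the numerical-Jacobian scheme are illustrative assumptions.

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(t, X):
    """Project 3-D points X (N,3) under a translation-only pose (R = I)."""
    Xc = X + t
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(t, X_pt, obs_pt, X_ln, obs_ln, w_pt, w_ln):
    """Stack of point and line-endpoint projection errors, scene-weighted."""
    r_pt = (project(t, X_pt) - obs_pt).ravel()
    r_ln = (project(t, X_ln) - obs_ln).ravel()
    return np.concatenate([w_pt * r_pt, w_ln * r_ln])

def estimate_translation(X_pt, obs_pt, X_ln, obs_ln, w_pt, w_ln, iters=10):
    """Gauss-Newton on the weighted residuals with a numerical Jacobian."""
    t = np.zeros(3)
    for _ in range(iters):
        r = residuals(t, X_pt, obs_pt, X_ln, obs_ln, w_pt, w_ln)
        J = np.zeros((r.size, 3))
        eps = 1e-6
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = eps
            J[:, k] = (residuals(t + dt, X_pt, obs_pt, X_ln, obs_ln,
                                 w_pt, w_ln) - r) / eps
        t = t - np.linalg.solve(J.T @ J, J.T @ r)
    return t
```

A full system would parameterize rotation as well and use an analytic Jacobian, but the weighted normal equations have the same shape.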

