A fusion algorithm of visual odometry based on feature-based method and direct method

Author(s):  
Jinglun Feng ◽  
Chengjin Zhang ◽  
Bo Sun ◽  
Yong Song
Author(s):  
Jianke Zhu

Visual odometry is an important research problem in computer vision and robotics. In general, feature-based visual odometry methods rely heavily on accurate correspondences between local salient points, while direct approaches can make full use of the whole image and perform dense 3D reconstruction simultaneously. However, direct visual odometry usually suffers from the drawback of getting stuck in local optima, especially under large displacement, which may lead to inferior results. To tackle this critical problem, we propose a novel scheme for stereo odometry in this paper, which is able to improve convergence and yield more accurate poses. The key to our approach is a dual Jacobian optimization that is fused into a multi-scale pyramid scheme. Moreover, we introduce a gradient-based feature representation, which enjoys the merit of being robust to illumination changes. Furthermore, a joint direct odometry approach is proposed to incorporate information from the last frame and previous keyframes. We have conducted an experimental evaluation on the challenging KITTI odometry benchmark, whose promising results show that the proposed algorithm is very effective for stereo visual odometry.
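The coarse-to-fine direct alignment underlying such multi-scale pyramid schemes can be sketched in miniature. The toy below aligns two 1-D signals by Gauss-Newton on the photometric residual, refining the estimate from the coarsest pyramid level down; it is a minimal translation-only illustration in plain NumPy (all function names our own), not the paper's dual-Jacobian stereo formulation:

```python
import numpy as np

def downsample(sig):
    # one pyramid level: average adjacent sample pairs
    n = len(sig) // 2 * 2
    return sig[:n].reshape(-1, 2).mean(axis=1)

def sample(sig, x):
    # linear interpolation with boundary clamping
    x = np.clip(x, 0.0, len(sig) - 1.0)
    i = np.minimum(np.floor(x).astype(int), len(sig) - 2)
    f = x - i
    return (1.0 - f) * sig[i] + f * sig[i + 1]

def align_level(ref, cur, t, iters=25):
    # Gauss-Newton on the photometric residual r(t) = cur(x + t) - ref(x)
    xs = np.arange(len(ref), dtype=float)
    for _ in range(iters):
        r = sample(cur, xs + t) - ref
        g = sample(cur, xs + t + 0.5) - sample(cur, xs + t - 0.5)  # d cur / dt
        t -= (g @ r) / (g @ g + 1e-12)
    return t

def coarse_to_fine(ref, cur, levels=3):
    pyr = [(ref, cur)]
    for _ in range(levels - 1):
        r, c = pyr[-1]
        pyr.append((downsample(r), downsample(c)))
    t = 0.0
    for k, (r, c) in enumerate(reversed(pyr)):
        if k > 0:
            t *= 2.0  # a shift estimated at a coarse level doubles at the next finer level
        t = align_level(r, c, t)
    return t

x = np.arange(64, dtype=float)
ref = np.exp(-((x - 32.0) / 4.0) ** 2)
cur = np.exp(-((x - 35.5) / 4.0) ** 2)  # ref shifted right by 3.5
t_est = coarse_to_fine(ref, cur)
```

Starting at the coarsest level shrinks the effective displacement, which is exactly why pyramid schemes mitigate the large-displacement local-optimum problem the abstract describes.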


2021 ◽  
Author(s):  
William Gates ◽  
Grafika Jati ◽  
Riskyana Dewi Intan P ◽  
Mahardhika Pratama ◽  
Wisnu Jatmiko

2020 ◽  
Vol 10 (4) ◽  
pp. 1467
Author(s):  
Chao Sheng ◽  
Shuguo Pan ◽  
Wang Gao ◽  
Yong Tan ◽  
Tao Zhao

Traditional Simultaneous Localization and Mapping (SLAM) (with loop closure detection) and Visual Odometry (VO) (without loop closure detection) are based on the static-environment assumption. When working in dynamic environments, they perform poorly whether using direct methods or indirect (feature-point) methods. In this paper, Dynamic-DSO, a semantic monocular direct visual odometry system based on DSO (Direct Sparse Odometry), is proposed. The proposed system is implemented entirely with the direct method, which differs from most current dynamic systems, which combine the indirect method with deep learning. Firstly, convolutional neural networks (CNNs) are applied to the original RGB image to generate pixel-wise semantic information for dynamic objects. Then, based on this semantic information, dynamic candidate points are filtered out during keyframe candidate-point extraction; only static candidate points are retained in the tracking and optimization module, to achieve accurate camera pose estimation in dynamic environments. The photometric errors computed at projection points falling in the dynamic regions of subsequent frames are removed from the overall photometric error in the pyramid motion-tracking model. Finally, a sliding-window optimization that neglects the photometric error computed in the dynamic region of each keyframe is applied to obtain the precise camera pose. Experiments on the public TUM dynamic dataset and the modified EuRoC dataset show that the positioning accuracy and robustness of the proposed Dynamic-DSO are significantly higher than those of the state-of-the-art direct method in dynamic environments, and the semi-dense point-cloud map constructed by Dynamic-DSO is clearer and more detailed.
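The semantic filtering step amounts to a mask lookup: candidate points whose pixel falls inside a CNN-predicted dynamic region are dropped before tracking. The schematic NumPy illustration below (the hand-built mask stands in for CNN output; it is not the Dynamic-DSO implementation) shows the idea:

```python
import numpy as np

# Toy 8x8 dynamic-object mask: True where a CNN (assumed) labels the pixel dynamic
dyn_mask = np.zeros((8, 8), dtype=bool)
dyn_mask[2:5, 3:7] = True            # e.g. a detected moving car

# Candidate points from keyframe extraction, as (row, col) pixel coordinates
cands = np.array([[1, 1], [3, 4], [6, 2], [4, 6], [7, 7]])

# Retain only static candidates for the tracking and optimization module
static = cands[~dyn_mask[cands[:, 0], cands[:, 1]]]
print(static.tolist())               # → [[1, 1], [6, 2], [7, 7]]
```

The same mask can zero out photometric-error terms whose projection points land in a dynamic region, which is how the abstract's sliding-window optimization ignores moving objects.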


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Ting Lei ◽  
Xiao-Feng Liu ◽  
Guo-Ping Cai ◽  
Yun-Meng Liu ◽  
Pan Liu

This paper estimates the pose of a noncooperative space target using a direct method of monocular visual simultaneous localization and mapping (SLAM). A Large-Scale Direct SLAM (LSD-SLAM) algorithm for pose estimation based on the photometric residual of pixel intensities is presented to overcome the limitations of existing feature-based on-orbit pose estimation methods. Firstly, new sequential images of the on-orbit target are continuously input, and the pose of each current frame is calculated by minimizing the photometric residual of pixel intensities. Secondly, frames are classified as keyframes or ordinary frames according to their pose relationship, and these frames are used to optimize the local map points. After that, the optimized local map points are added to the back-end map. Finally, the poses of the keyframes are further optimized in the back-end thread based on the map points and the photometric residual between keyframes. Numerical simulations and experiments are carried out to prove the validity of the proposed algorithm, and the results demonstrate its effectiveness in estimating the pose of the noncooperative target.


Author(s):  
Erliang Yao ◽  
Hexin Zhang ◽  
Haitao Song ◽  
Guoliang Zhang

Purpose
To realize stable and precise localization in dynamic environments, the authors propose a fast and robust visual odometry (VO) approach with a low-cost Inertial Measurement Unit (IMU) in this study.

Design/methodology/approach
The proposed VO incorporates the direct method with the indirect method to track features and optimize the camera pose. It initializes the positions of tracked pixels with IMU information. The tracked pixels are then refined by minimizing the photometric errors. Owing to the small convergence radius of the indirect method, dynamic pixels are rejected. Subsequently, the camera pose is optimized by minimizing the reprojection errors. Frames with little dynamic information are selected to create keyframes. Finally, local bundle adjustment is performed to refine the poses of the keyframes and the positions of the 3-D points.

Findings
The proposed VO approach is evaluated experimentally in dynamic environments with various motion types; the results suggest that it achieves more accurate and stable localization than the conventional approach. Moreover, the proposed VO approach works well in environments with motion blur.

Originality/value
The proposed approach fuses the indirect method and the direct method with IMU information, which significantly improves localization in dynamic environments.
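The reprojection error minimized in the pose-optimization step is the pixel distance between a projected 3-D point and its observed feature location. A minimal sketch of that residual, assuming a standard pinhole model with illustrative intrinsics (not the authors' code or calibration):

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])      # illustrative pinhole intrinsics

def project(K, T, p_world):
    # Transform a 3-D point into the camera frame and apply the pinhole model
    p_cam = T[:3, :3] @ np.asarray(p_world) + T[:3, 3]
    uv = K @ p_cam
    return uv[:2] / uv[2]

def reprojection_error(K, T, p_world, obs_uv):
    # The per-feature residual norm minimized over the camera pose T
    return np.linalg.norm(project(K, T, p_world) - np.asarray(obs_uv))

uv = project(K, np.eye(4), [0.0, 0.0, 5.0])          # point on the optical axis
err = reprojection_error(K, np.eye(4), [0.0, 0.0, 5.0], [323.0, 244.0])
```

In the fused scheme the abstract describes, the IMU supplies the initial pose at which these residuals are first evaluated, shrinking the gap the optimizer must close.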


Author(s):  
Marek Kraft ◽  
Michał Nowicki ◽  
Rudi Penne ◽  
Adam Schmidt ◽  
Piotr Skrzypczyński

The problem of position and orientation estimation for an active vision sensor that moves with respect to the full six degrees of freedom is considered. The proposed approach is based on point features extracted from RGB-D data. This work focuses on efficient point feature extraction algorithms and on methods for the management of a set of features in a single RGB-D data frame. While the fast, RGB-D-based visual odometry system described in this paper builds upon our previous results as to the general architecture, the important novel elements introduced here are aimed at improving the precision and robustness of the motion estimate computed from the matching point features of two RGB-D frames. Moreover, we demonstrate that the visual odometry system can serve as the front-end for a pose-based simultaneous localization and mapping solution. The proposed solutions are tested on publicly available data sets to ensure that the results are scientifically verifiable. The experimental results demonstrate gains due to the improved feature extraction and management mechanisms, whereas the performance of the whole navigation system compares favorably to results known from the literature.
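One concrete route from matched RGB-D point features to a motion estimate is to back-project the matched pixels using their depths and solve the resulting 3-D point registration in closed form. The sketch below (plain NumPy, function names and intrinsics our own assumptions, not the authors' system) uses the standard Kabsch/SVD solution:

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    # RGB-D pixel (u, v) with depth z -> 3-D point in the camera frame
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def rigid_transform(P, Q):
    # Least-squares R, t with Q ≈ R @ P + t (Kabsch/SVD); P, Q are 3xN
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T              # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T                     # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known 30-degree rotation and translation
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
P = np.array([[0.0, 1.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 2.0, 0.0]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform(P, Q)
```

In practice such a closed-form estimate is wrapped in RANSAC to reject feature mismatches before any refinement.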

