Real-Time Photometric Calibrated Monocular Direct Visual SLAM

Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3604 ◽  
Author(s):  
Peixin Liu ◽  
Xianfeng Yuan ◽  
Chengjin Zhang ◽  
Yong Song ◽  
Chuanzheng Liu ◽  
...  

To solve the illumination sensitivity problems of mobile ground equipment, an enhanced visual SLAM algorithm based on the sparse direct method was proposed in this paper. Firstly, the vignette and response functions of the input sequences were optimized based on the photometric formation model of the camera. Secondly, the Shi–Tomasi corners of the input sequence were tracked, and optimization equations were established using the pixel tracking of sparse direct visual odometry (VO). Thirdly, the Levenberg–Marquardt (L–M) method was applied to solve the joint optimization equation, and the photometric calibration parameters in the VO were updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduced the effect of light variations on the accuracy and robustness of SLAM (simultaneous localization and mapping). Finally, a Shi–Tomasi corner filtering strategy was designed to reduce the computational complexity of the proposed algorithm, and loop closure detection was realized based on oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on the TUM, KITTI, and EuRoC datasets as well as in an actual environment, and the experimental results show that the positioning and mapping performance of the proposed algorithm is promising.
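The joint photometric idea above can be illustrated in a much-simplified form. Assuming a toy model in which the response and vignette functions are identity and each tracked corner's observed intensity is simply exposure × radiance, the per-frame exposures can be recovered by linear least squares in the log domain (the paper solves the richer nonlinear model with Levenberg–Marquardt); all names below are illustrative:

```python
import numpy as np

def estimate_exposures(obs):
    """Estimate per-frame exposures e_t and point radiances L_i from an
    observation matrix obs[t, i] = e_t * L_i (vignette and response
    functions omitted for brevity).  Working in the log domain turns the
    product into a linear least-squares problem; the first exposure is
    fixed to 1 to remove the gauge (scale) freedom."""
    T, N = obs.shape
    b = np.log(obs).ravel()
    # Unknowns: log e_1..e_{T-1} (e_0 fixed to 0) and log L_0..L_{N-1}.
    A = np.zeros((T * N, (T - 1) + N))
    for t in range(T):
        for i in range(N):
            r = t * N + i
            if t > 0:
                A[r, t - 1] = 1.0          # coefficient of log e_t
            A[r, (T - 1) + i] = 1.0        # coefficient of log L_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    exposures = np.concatenate(([1.0], np.exp(x[:T - 1])))
    radiances = np.exp(x[T - 1:])
    return exposures, radiances
```

Fixing one exposure is essential: multiplying all exposures and dividing all radiances by the same constant leaves the observations unchanged, so the problem is only determined up to that gauge.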

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2106
Author(s):  
Ahmed Afifi ◽  
Chisato Takada ◽  
Yuichiro Yoshimura ◽  
Toshiya Nakaguchi

Minimally invasive surgery is widely used because of its tremendous benefits to the patient. However, there are some challenges that surgeons face in this type of surgery, the most important of which is the narrow field of view. Therefore, we propose an approach to expand the field of view for minimally invasive surgery to enhance surgeons’ experience. It combines multiple views in real-time to produce a dynamic expanded view. The proposed approach extends the monocular Oriented features from an accelerated segment test and Rotated Binary robust independent elementary features—Simultaneous Localization And Mapping (ORB-SLAM) to work with a multi-camera setup. The ORB-SLAM’s three parallel threads, namely tracking, mapping and loop closing, are performed for each camera and new threads are added to calculate the relative cameras’ pose and to construct the expanded view. A new algorithm for estimating the optimal inter-camera correspondence matrix from a set of corresponding 3D map points is presented. This optimal transformation is then used to produce the final view. The proposed approach was evaluated using both human models and in vivo data. The evaluation results of the proposed correspondence matrix estimation algorithm prove its ability to reduce the error and to produce an accurate transformation. The results also show that when other approaches fail, the proposed approach can produce an expanded view. In this work, a real-time dynamic field-of-view expansion approach that can work in all situations regardless of images’ overlap is proposed. It outperforms the previous approaches and can also work at 21 fps.
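The inter-camera correspondence step described above — estimating a rigid transform from corresponding 3D map points — is commonly solved with the SVD-based Kabsch/Umeyama method. The sketch below shows that standard technique; it is not necessarily the exact estimator used in the paper:

```python
import numpy as np

def rigid_transform_from_points(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    both of shape (N, 3), via the SVD-based Kabsch/Umeyama method:
    centre both sets, take the SVD of the cross-covariance, and correct
    a possible reflection via the sign of the determinant."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```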


Author(s):  
N. Botteghi ◽  
B. Sirmacek ◽  
R. Schulte ◽  
M. Poel ◽  
C. Brune

Abstract. In this research, we investigate the use of Reinforcement Learning (RL) for an effective and robust solution for exploring unknown and indoor environments and reconstructing their maps. We benefit from a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in different environments with growing complexity. The performances of the three different RL-based path planners are assessed not only on the training environments, but also on an a priori unseen environment to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
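A coverage-maximizing reward of the kind the abstract compares can be sketched very simply on an occupancy-grid abstraction. The exact reward terms and the step-cost value below are illustrative assumptions, not the paper's definitions:

```python
def coverage_reward(prev_known, new_known, step_cost=0.01):
    """Reward shaped to maximize map coverage: the agent earns one unit
    per grid cell newly added to the known map this step, minus a small
    motion penalty that discourages wandering without exploring.
    prev_known / new_known are sets of observed grid cells."""
    newly_seen = len(new_known - prev_known)
    return newly_seen - step_cost
```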


2014 ◽  
Vol 989-994 ◽  
pp. 2651-2654
Author(s):  
Yan Song ◽  
Bo He

In this paper, a novel feature-based real-time visual Simultaneous Localization and Mapping (SLAM) system is proposed. This system generates colored 3-D reconstruction models and a 3-D estimated trajectory using a Kinect-style camera. Microsoft Kinect, a low-priced 3-D camera, is the only sensor used in our experiments. Kinect-style sensors provide RGB-D (red-green-blue depth) data, which contains a 2D image and per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is the algorithm used to extract image features to speed up the whole system. Our system can be used to generate detailed 3-D reconstruction models. Furthermore, an estimated 3D trajectory of the sensor is given in this paper. The results of the experiments demonstrate that our system performs robustly and effectively in both obtaining detailed 3D models and mapping the camera trajectory.
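Part of ORB's speed advantage comes from its binary descriptors, which are compared with the Hamming distance (a cheap XOR-and-popcount). A minimal brute-force matcher over such descriptors, with descriptors represented as plain integers for illustration, looks like this:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching of binary (ORB-style)
    descriptors.  Returns (query_index, train_index) pairs whose best
    Hamming distance is within max_dist."""
    matches = []
    for qi, qd in enumerate(query):
        ti, dist = min(((i, hamming(qd, td)) for i, td in enumerate(train)),
                       key=lambda p: p[1])
        if dist <= max_dist:
            matches.append((qi, ti))
    return matches
```

Real ORB descriptors are 256-bit strings; production matchers also apply a ratio test or cross-check, which are omitted here for brevity.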


Author(s):  
Tianmiao Wang ◽  
Chaolei Wang ◽  
Jianhong Liang ◽  
Yicheng Zhang

Purpose – The purpose of this paper is to present a Rao–Blackwellized particle filter (RBPF) approach for the visual simultaneous localization and mapping (SLAM) of small unmanned aerial vehicles (UAVs). Design/methodology/approach – Measurements from an inertial measurement unit, a barometric altimeter and a monocular camera are fused to estimate the state of the vehicle while building a feature map. In this SLAM framework, an extra factorization method is proposed to partition the vehicle model into subspaces, the internal and external states. The internal state is estimated by an extended Kalman filter (EKF). A particle filter is employed for the external state estimation, and parallel EKFs are used for map management. Findings – Simulation results indicate that the proposed approach is more stable and accurate than other existing marginalized particle filter-based SLAM algorithms. Experiments are also carried out to verify the effectiveness of this SLAM method by comparing it with a referential global positioning system/inertial navigation system. Originality/value – The main contribution of this paper is the theoretical derivation and experimental application of the Rao–Blackwellized visual SLAM algorithm with vehicle model partition for small UAVs.
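The Rao–Blackwellized split — sample the pose, solve the map analytically — can be shown on a deliberately tiny 1-D problem: each particle carries a sampled robot pose and a Kalman estimate of a single landmark observed as z = m − x + noise. This toy is purely illustrative and far simpler than the paper's UAV model:

```python
import math

def rbpf_update(particles, z, meas_var=0.1):
    """One Rao–Blackwellized update for a 1-D toy SLAM problem.
    Each particle is {"x": pose, "mu": landmark mean, "var": landmark
    variance}.  The pose (external state) is handled by the particle
    set; the landmark (internal state) is updated analytically with a
    per-particle Kalman filter.  Returns normalized particle weights."""
    weights = []
    for p in particles:
        pred = p["mu"] - p["x"]            # predicted measurement
        S = p["var"] + meas_var            # innovation covariance
        w = math.exp(-0.5 * (z - pred) ** 2 / S) / math.sqrt(2 * math.pi * S)
        K = p["var"] / S                   # Kalman gain
        p["mu"] += K * (z - pred)          # analytic landmark update
        p["var"] *= (1.0 - K)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]
```

Resampling by these weights would follow in a full filter; it is omitted here.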


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Panlong Gu ◽  
Fengyu Zhou ◽  
Dianguo Yu ◽  
Fang Wan ◽  
Wei Wang ◽  
...  

The RGBD camera-based VSLAM (Visual Simultaneous Localization and Mapping) algorithm is usually applied to assist robots with real-time mapping. However, due to the limited measuring principle, accuracy, and range of the equipped camera, this algorithm has typical disadvantages in large and dynamic scenes with complex lighting, such as poor mapping accuracy, easy loss of the robot's position, and high computational cost. Regarding these issues, this paper proposes a new method of 3D interior construction, which combines laser radar and an RGBD camera and is developed on top of the Cartographer laser SLAM algorithm. The proposed method takes two main steps. The first step is 3D reconstruction using the Cartographer algorithm and the RGBD camera: Cartographer is first applied to calculate the pose of the RGBD camera and to generate a submap; then, a real-time 3D point cloud generated by the RGBD camera is inserted into the submap, completing the real-time interior construction. The second step is to improve Cartographer's loop-closure quality with a visual loop closure in order to correct the generated map. Compared with traditional methods in large-scale indoor scenes, the proposed algorithm shows higher precision, faster speed, and stronger robustness, especially in contexts with complex lighting and dynamic objects.
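The core of the first step — placing a camera-frame point cloud into the submap using the pose from the laser SLAM back end — is an SE(3) transform followed by deduplicated insertion. A minimal sketch, with the voxel-set representation and voxel size as assumptions of this illustration:

```python
import numpy as np

def insert_cloud(submap, points_cam, R, t, voxel=0.05):
    """Transform an (N, 3) camera-frame point cloud into the submap
    frame using the pose (R, t) estimated by the SLAM back end, then
    insert the points into a voxel set so that repeated observations of
    the same surface do not duplicate geometry."""
    points_map = points_cam @ R.T + t          # p_map = R @ p_cam + t
    for p in points_map:
        submap.add(tuple(np.floor(p / voxel).astype(int)))
    return submap
```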


2018 ◽  
Vol 2 (3) ◽  
pp. 151 ◽  
Author(s):  
Fethi Denim ◽  
Abdelkrim Nemra ◽  
Kahina Louadj ◽  
Abdelghani Boucheloukh ◽  
Mustapha Hamerlain ◽  
...  

Simultaneous localization and mapping (SLAM) is an essential capability for Unmanned Ground Vehicles (UGVs) travelling in unknown environments where globally accurate position data such as GPS is not available. It is an important topic in autonomous mobile robot research. This paper presents an Adaptive Decentralized Cooperative Vision-based SLAM solution for multiple UGVs, using the Adaptive Covariance Intersection (ACI) supported by a stereo vision sensor. In recent years, the SLAM problem has received specific consideration; the most commonly used approaches are the EKF-SLAM algorithm and the FastSLAM algorithm. The former, which requires an accurate process and observation model, suffers from the linearization problem; the latter is not suitable for real-time implementation. In our work, a solution to the Visual SLAM (VSLAM) problem based on the Smooth Variable Structure Filter (SVSF) is proposed. This new filter is robust and stable under modelling uncertainties, making it suitable for the UGV localization and mapping problem. The new strategy retains the near-optimal performance of the SVSF when applied to an uncertain system and has the added benefit of considerably improving the robustness of the estimation process. Each UGV contributes feature data, fused by the ACI, to estimate positions on the global map. This solution yields a large, reliable map constructed by the group of UGVs, with their trajectories plotted on it. This paper presents a Cooperative SVSF-VSLAM algorithm that contributes to solving the Adaptive Cooperative Vision SLAM problem for multiple UGVs. The algorithm was implemented on three Pioneer 3-AT mobile robots using stereo vision sensors. Simulation results show its efficiency and give an advantage to our proposed algorithm over the Cooperative EKF-VSLAM algorithm, mainly concerning noise quality.
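Covariance Intersection, on which the ACI fusion above builds, has a compact standard form: the fused inverse covariance is a convex combination of the inputs' inverse covariances, which stays consistent even when the cross-correlation between the two estimates is unknown. A sketch of the basic (non-adaptive) rule, with the omega scan as an illustrative choice:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=None):
    """Fuse two estimates with unknown cross-correlation using
    Covariance Intersection:
        P^-1 = w * P1^-1 + (1 - w) * P2^-1
        x    = P (w * P1^-1 x1 + (1 - w) * P2^-1 x2)
    If omega is not given, it is chosen by a coarse scan minimizing
    trace(P), a common (illustrative) criterion."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    if omega is None:
        candidates = np.linspace(0.01, 0.99, 99)
        omega = min(candidates,
                    key=lambda w: np.trace(np.linalg.inv(w * I1 + (1 - w) * I2)))
    P = np.linalg.inv(omega * I1 + (1 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1 - omega) * I2 @ x2)
    return x, P
```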
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Tianji Ma ◽  
Nanyang Bai ◽  
Wentao Shi ◽  
Xi Wu ◽  
Lutao Wang ◽  
...  

In the automatic navigation robot field, robotic autonomous positioning is one of the most difficult challenges. Simultaneous localization and mapping (SLAM) technology can incrementally construct a map of the robot's moving path in an unknown environment while estimating the position of the robot in the map, providing an effective solution for fully autonomous robot navigation. A camera can obtain corresponding two-dimensional digital images of the real three-dimensional world. These images contain very rich colour and texture information and highly recognizable features, which provide indispensable information for robots to understand and recognize the environment as they autonomously explore it. Therefore, more and more researchers use cameras to solve SLAM problems, an approach known as visual SLAM. Visual SLAM needs to process a large amount of image data collected by the camera, which imposes high performance requirements on computing hardware, and thus its application on embedded mobile platforms is greatly limited. This paper presents a parallelization method for embedded hardware equipped with an embedded GPU. CUDA, a parallel computing platform, is used to accelerate the visual front-end processing of the visual SLAM algorithm. Extensive experiments were done to verify the effectiveness of the method. The results show that the presented method effectively improves the operating efficiency of the visual SLAM algorithm and preserves the original accuracy of the algorithm.
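The kind of front-end work that maps well onto a CUDA kernel is per-pixel and independent across pixels. As an illustration without a GPU, the sketch below expresses a simplified FAST-style corner pre-test (a two-neighbour brightness check, not the full 16-pixel circle test) as a whole-image numpy operation, mirroring the one-thread-per-pixel mapping a kernel would use:

```python
import numpy as np

def fast_candidate_mask(img, thresh=20):
    """Data-parallel sketch of a FAST-style corner pre-test: a pixel is
    a candidate if it is much brighter or much darker than both of its
    horizontal neighbours at radius 3.  Vectorizing over the whole image
    mirrors the one-thread-per-pixel structure of a CUDA kernel."""
    img = img.astype(np.int32)
    c = img[:, 3:-3]                       # centre pixels
    left, right = img[:, :-6], img[:, 6:]  # neighbours at radius 3
    brighter = (c - left > thresh) & (c - right > thresh)
    darker = (left - c > thresh) & (right - c > thresh)
    mask = np.zeros(img.shape, dtype=bool)
    mask[:, 3:-3] = brighter | darker
    return mask
```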


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 705
Author(s):  
Yi Zhang ◽  
Fei Huang

Simultaneous Localization and Mapping (SLAM) technology is one of the best methods for fast 3D reconstruction and mapping. However, the accuracy of SLAM is not always high enough, which is currently the subject of much research interest. Panoramic vision can provide us with a wide range of angles of view, many feature points, and rich information. The panoramic multi-view cross-imaging feature can be used to realize instantaneous omnidirectional spatial information acquisition and improve the positioning accuracy of SLAM. In this study, we investigated panoramic visual SLAM positioning technology, including three core research points: (1) the spherical imaging model; (2) spherical image feature extraction and matching methods, including the Spherical Oriented FAST and Rotated BRIEF (SPHORB) and ternary scale-invariant feature transform (SIFT) algorithms; and (3) the panoramic visual SLAM algorithm. The experimental results show that the method of panoramic visual SLAM can improve the robustness and accuracy of a SLAM system.
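One common form of the spherical imaging model mentioned above maps an equirectangular panorama pixel to a unit ray on the sphere: the column gives the longitude and the row gives the latitude. The sketch below shows this standard mapping, not necessarily the exact model of the paper:

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit bearing
    vector on the sphere: u in [0, width) -> longitude in [-pi, pi),
    v in [0, height) -> latitude in [pi/2, -pi/2]."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)
```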


2017 ◽  
Vol 14 (02) ◽  
pp. 1750007 ◽  
Author(s):  
Chen-Chien Hsu ◽  
Wei-Yen Wang ◽  
Tung-Yuan Lin ◽  
Yin-Tien Wang ◽  
Teng-Wei Huang

FastSLAM, such as FastSLAM 1.0 and FastSLAM 2.0, is a popular algorithm for solving the simultaneous localization and mapping (SLAM) problem for mobile robots. In real environments, however, the execution of FastSLAM would be too slow to achieve real-time operation with satisfactory accuracy because of excessive comparisons of the measurement with all the existing landmarks in the particles, particularly when the number of landmarks increases drastically. In this paper, an enhanced SLAM (ESLAM) is proposed, which uses not only odometer information but also sensor measurements to estimate the robot's pose in the prediction step. The landmark with the maximum likelihood is then used to update the robot's pose before updating the landmarks' locations. Compared to existing FastSLAM algorithms, the proposed ESLAM algorithm has better performance in terms of computational efficiency as well as localization and mapping accuracy, as demonstrated in the illustrated examples.
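The maximum-likelihood landmark selection at the heart of the update step can be sketched in one dimension: score each landmark by the Gaussian likelihood of the measurement under its predicted observation and keep the best. A toy illustration, not the paper's full formulation:

```python
import math

def most_likely_landmark(z, landmarks, meas_var=0.25):
    """Return (index, likelihood) of the landmark whose predicted
    measurement best explains z.  landmarks is a list of
    (predicted_measurement, variance) pairs; the likelihood is the
    Gaussian density of the innovation z - predicted."""
    best_i, best_l = -1, -1.0
    for i, (pred, var) in enumerate(landmarks):
        S = var + meas_var                 # innovation covariance
        l = math.exp(-0.5 * (z - pred) ** 2 / S) / math.sqrt(2 * math.pi * S)
        if l > best_l:
            best_i, best_l = i, l
    return best_i, best_l
```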


10.5772/7237 ◽  
2009 ◽  
Vol 6 (3) ◽  
pp. 21 ◽  
Author(s):  
Fernando Auat Cheein ◽  
Gustavo Scaglia ◽  
Fernando di Sciasio ◽  
Ricardo Carelli

This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real- and constant-time implementations of SLAM. The selection procedure consists in choosing the features to which the covariance of the SLAM system state is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM along with an entropy-based feature selection approach is also performed.
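One way to realise "choose the features the state covariance is most sensitive to" is to rank candidates by how much an EKF update on each would shrink the covariance, i.e. the trace of the correction term P Hᵀ S⁻¹ H P. The sketch below uses that criterion as an illustrative assumption; the paper's exact sensitivity measure may differ:

```python
import numpy as np

def select_features(P, jacobians, R, k):
    """Rank candidate features by the covariance reduction an EKF update
    on each would yield, trace(P H^T S^-1 H P) with S = H P H^T + R,
    and return the indices of the k most informative features."""
    gains = []
    for H in jacobians:
        S = H @ P @ H.T + R                       # innovation covariance
        gains.append(np.trace(P @ H.T @ np.linalg.inv(S) @ H @ P))
    order = np.argsort(gains)[::-1]               # descending gain
    return [int(i) for i in order[:k]]
```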

