Efficient Stereo Visual Simultaneous Localization and Mapping for an Autonomous Unmanned Forklift in an Unstructured Warehouse

2020
Vol 10 (2)
pp. 698
Author(s):
Feiren Wang
Enli Lü
Yu Wang
Guangjun Qiu
Huazhong Lu

The autonomous navigation of unmanned vehicles in GPS-denied environments is an incredibly challenging task. Because cameras are inexpensive, capture rich information, and sense the environment passively, vision-based simultaneous localization and mapping (VSLAM) has great potential to solve this problem. In this paper, we propose a novel VSLAM framework based on a stereo camera. The proposed approach combines the direct and indirect methods for the real-time localization of an autonomous forklift in an unstructured warehouse. Our hybrid method uses photometric errors to perform image alignment for data association and pose estimation, then extracts features from keyframes and matches them to obtain the updated pose. By combining the efficiency of the direct method with the high accuracy of the indirect method, the approach achieves higher speed with accuracy comparable to a state-of-the-art method. Furthermore, a two-step dynamic-threshold feature extraction method significantly reduces the operating time. In addition, a motion model of the forklift is proposed to provide a more reasonable initial pose for direct image alignment based on photometric errors. The proposed algorithm is experimentally tested on a dataset constructed from a large-scale warehouse with dynamic lighting and long corridors, and the results show that it still performs successfully with high accuracy. Additionally, our method operates in real time using limited computing resources.
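To make the direct component concrete, the sketch below illustrates photometric-error image alignment in a deliberately simplified form: a 2D image translation is estimated by Gauss-Newton minimization of intensity residuals at sampled pixels. The full framework optimizes the 6-DoF stereo camera pose and seeds the alignment with the forklift motion model; the translation-only parameterization and the function names (bilinear, align_translation) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate img at real-valued coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] + wx * (1 - wy) * img[y0, x0 + 1] +
            (1 - wx) * wy * img[y0 + 1, x0] + wx * wy * img[y0 + 1, x0 + 1])

def align_translation(ref, cur, pts, iters=30):
    """Estimate a 2D translation t minimising the photometric error
    sum_i (cur(p_i + t) - ref(p_i))^2 over the sampled pixel coordinates pts."""
    gy, gx = np.gradient(cur.astype(float))      # image gradients of the current frame
    t = np.zeros(2)                              # initial guess (could come from a motion model)
    for _ in range(iters):
        H, b = np.zeros((2, 2)), np.zeros(2)
        for (x, y) in pts:
            xs, ys = x + t[0], y + t[1]
            if not (1 <= xs < cur.shape[1] - 2 and 1 <= ys < cur.shape[0] - 2):
                continue                         # skip points warped outside the image
            r = bilinear(cur, xs, ys) - ref[int(y), int(x)]          # photometric residual
            J = np.array([bilinear(gx, xs, ys), bilinear(gy, xs, ys)])
            H += np.outer(J, J)
            b += J * r
        t -= np.linalg.solve(H + 1e-6 * np.eye(2), b)                # Gauss-Newton update
    return t
```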

Sensors
2020
Vol 20 (5)
pp. 1511
Author(s):
Quanpan Liu
Zhengjie Wang
Huan Wang

In practical applications, achieving a good balance between high accuracy and computational efficiency is the main challenge for simultaneous localization and mapping (SLAM). To address this challenge, we propose SD-VIS, a novel fast and accurate semi-direct visual-inertial SLAM framework that estimates camera motion and the structure of the surrounding sparse scene. In the initialization procedure, we align the pre-integrated IMU measurements with the visual images and recover the metric scale, initial velocity, gravity vector, and gyroscope bias using multiple-view geometry (MVG) theory based on the feature-based method. At the front end, keyframes are tracked by the feature-based method and used for back-end optimization and loop-closure detection, while non-keyframes are used for fast tracking by the direct method. This strategy gives the system not only the real-time performance of the direct method but also the high accuracy and loop-closure detection capability of the feature-based method. At the back end, we propose a sliding-window-based tightly coupled optimization framework, which obtains more accurate state estimates by minimizing the visual and IMU measurement errors. To limit the computational complexity, we adopt a marginalization strategy to fix the number of keyframes in the sliding window. Experimental evaluation on the EuRoC dataset demonstrates the feasibility and superior real-time performance of SD-VIS. Compared with state-of-the-art SLAM systems, SD-VIS achieves a better balance between accuracy and speed.
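The marginalization step that bounds the sliding window can be illustrated with a Schur complement on the linearized system: eliminating the oldest keyframe's states folds their contribution into a dense prior on the remaining states. This is a minimal, generic sketch; the index handling (keep, drop), the damping term, and the example state sizes are assumptions, not the actual SD-VIS back-end.

```python
import numpy as np

def marginalize(H, b, keep, drop):
    """Eliminate the 'drop' state indices from the linear system H * dx = b
    via the Schur complement, keeping their effect as a prior on 'keep'."""
    Hkk = H[np.ix_(keep, keep)]
    Hkd = H[np.ix_(keep, drop)]
    Hdd = H[np.ix_(drop, drop)]
    Hdd_inv = np.linalg.inv(Hdd + 1e-9 * np.eye(len(drop)))   # small damping for numerical safety
    H_prior = Hkk - Hkd @ Hdd_inv @ Hkd.T
    b_prior = b[keep] - Hkd @ Hdd_inv @ b[drop]
    return H_prior, b_prior

# Example: drop the oldest keyframe (first 6 pose states) from a 60-state window.
H = 4.0 * np.eye(60)
b = np.zeros(60)
keep, drop = list(range(6, 60)), list(range(0, 6))
H_prior, b_prior = marginalize(H, b, keep, drop)
```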


Author(s):  
Noura Ayadi
Nabil Derbel
Nicolas Morette
Cyril Novales
Gérard Poisson

In recent years, autonomous navigation for mobile robots has been a highly active research field. Within this context, we are interested in applying the simultaneous localization and mapping (SLAM) approach to a wheeled mobile robot. The extended Kalman filter (EKF) is used to implement the SLAM algorithm. In this work, we detail all steps of the approach. The performance of the developed algorithm was first assessed through simulation on a small-scale map. We then present several experiments carried out on a real robot to exercise the programmed SLAM unit and to generate the navigation map. Building on the experimental results, the SLAM method is then simulated on a large-scale map. The results are used to evaluate and compare the algorithm's consistency and robustness in both cases.
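For readers unfamiliar with EKF-SLAM, the sketch below shows the two core steps in textbook form: a prediction with a unicycle motion model and an update with a range-bearing observation of a known landmark. The state layout (robot pose followed by 2D landmarks), the noise matrices R and Q, and the unicycle model are generic textbook choices, not necessarily those used by the authors.

```python
import numpy as np

def ekf_predict(mu, Sigma, u, R, dt=1.0):
    """Predict step with a unicycle motion model; u = (v, w)."""
    x, y, th = mu[0], mu[1], mu[2]
    v, w = u
    mu = mu.copy()
    mu[0] += v * dt * np.cos(th)
    mu[1] += v * dt * np.sin(th)
    mu[2] += w * dt
    G = np.eye(len(mu))                      # Jacobian of the motion model
    G[0, 2] = -v * dt * np.sin(th)
    G[1, 2] = v * dt * np.cos(th)
    Sigma = G @ Sigma @ G.T
    Sigma[:3, :3] += R                       # process noise on the robot pose only
    return mu, Sigma

def ekf_update(mu, Sigma, z, lm_idx, Q):
    """Update with a range-bearing measurement z = (r, phi) of landmark lm_idx."""
    j = 3 + 2 * lm_idx                       # landmark position in the state vector
    dx, dy = mu[j] - mu[0], mu[j + 1] - mu[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.zeros((2, len(mu)))
    H[:, :3] = [[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                [dy / q, -dx / q, -1.0]]
    H[:, j:j + 2] = [[dx / np.sqrt(q), dy / np.sqrt(q)],
                     [-dy / q, dx / q]]
    S = H @ Sigma @ H.T + Q
    K = Sigma @ H.T @ np.linalg.inv(S)       # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi    # wrap bearing to [-pi, pi)
    mu = mu + K @ innov
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma
```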


2018
Vol 8 (12)
pp. 2432
Author(s):
Jingchuan Wang
Ming Zhao
Weidong Chen

In large-scale and sparse scenes, such as farmland, orchards, mines, and substations, 3D simultaneous localization and mapping is challenging: it must maintain reliable data association despite scarce environmental information and reduce the computational complexity of global optimization over large-scale scenes. To solve these problems, a real-time incremental simultaneous localization and mapping algorithm called MIM_SLAM is proposed in this paper. The algorithm runs on mobile robots equipped with a 3D LiDAR sensor to build maps on non-flat roads. MIM_SLAM's main contributions are that multi-level ICP (Iterative Closest Point) matching solves the data association problem, a Fisher information matrix describes the uncertainty of each estimated pose, and the poses are refined by an incremental optimization method, which greatly reduces the computational cost. A highly consistent map is then established. The proposed algorithm has been evaluated in real indoor and outdoor scenes, in two substations, and on the KITTI benchmark dataset, all of which are sparse and large-scale. Results show that the proposed algorithm achieves high mapping accuracy and meets real-time requirements.
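The data-association core of such LiDAR pipelines can be sketched as point-to-point ICP run coarse-to-fine, where each level's pose estimate seeds the next. This is a generic illustration only: the stride-based subsampling, the brute-force nearest-neighbour search, and the omission of the Fisher-information weighting and incremental pose-graph optimization described above are simplifying assumptions.

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form R, t minimising sum ||R p_i + t - q_i||^2 (SVD / Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, R, t, iters=15):
    """Point-to-point ICP refinement of (R, t) using brute-force nearest neighbours."""
    for _ in range(iters):
        moved = src @ R.T + t
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = kabsch(src, dst[d2.argmin(1)])
    return R, t

def multilevel_icp(src, dst, levels=(8, 4, 1)):
    """Coarse-to-fine matching: run ICP on increasingly dense subsamples,
    feeding each level's estimate into the next as the initial guess."""
    R, t = np.eye(3), np.zeros(3)
    for step in levels:
        R, t = icp(src[::step], dst[::step], R, t)
    return R, t
```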


Sensors
2021
Vol 21 (6)
pp. 2106
Author(s):
Ahmed Afifi
Chisato Takada
Yuichiro Yoshimura
Toshiya Nakaguchi

Minimally invasive surgery is widely used because of its tremendous benefits to the patient. However, surgeons face several challenges in this type of surgery, the most important of which is the narrow field of view. Therefore, we propose an approach to expand the field of view in minimally invasive surgery and enhance the surgeon's experience. It combines multiple views in real time to produce a dynamically expanded view. The proposed approach extends monocular ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization And Mapping) to work with a multi-camera setup. ORB-SLAM's three parallel threads (tracking, mapping, and loop closing) run for each camera, and new threads are added to compute the relative camera poses and to construct the expanded view. A new algorithm for estimating the optimal inter-camera correspondence matrix from a set of corresponding 3D map points is presented. This optimal transformation is then used to produce the final view. The proposed approach was evaluated using both human models and in vivo data. The evaluation results for the proposed correspondence matrix estimation algorithm demonstrate its ability to reduce the error and produce an accurate transformation. The results also show that the proposed approach can produce an expanded view when other approaches fail. In this work, a real-time dynamic field-of-view expansion approach is proposed that works in all situations regardless of image overlap. It outperforms previous approaches and runs at 21 fps.
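Estimating a transformation from corresponding 3D map points can be sketched as a closed-form rigid fit (Kabsch/SVD) wrapped in a RANSAC loop to tolerate mismatched pairs. The function names, the three-point sampling, and the inlier threshold are illustrative assumptions and do not reproduce the paper's optimal correspondence-matrix estimation algorithm.

```python
import numpy as np

def fit_rigid(P, Q):
    """Closed-form R, t minimising sum ||R p_i + t - q_i||^2 (Kabsch/SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, thresh=0.01):
    """Robustly estimate the transform mapping camera-A map points P onto the
    corresponding camera-B map points Q, tolerating wrong correspondences."""
    best_R, best_t = np.eye(3), np.zeros(3)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = np.random.choice(len(P), 3, replace=False)     # minimal sample
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_R, best_t, best_inliers = R, t, inliers
    if best_inliers.sum() >= 3:                               # refine on all inliers
        best_R, best_t = fit_rigid(P[best_inliers], Q[best_inliers])
    return best_R, best_t, best_inliers
```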


Author(s):  
Abouzahir Mohamed
Elouardi Abdelhafid
Bouaziz Samir
Latif Rachid
Tajer Abdelouahed

Improved particle-filter-based simultaneous localization and mapping (SLAM) has been developed for many robotic applications. The main purpose of this article is to demonstrate that recent heterogeneous architectures can be used to implement FastSLAM2.0 and can greatly help in designing embedded systems for robot applications and autonomous navigation. The algorithm is studied, optimized, and evaluated on a real dataset using data from different sensors and a hardware-in-the-loop (HIL) method. The authors have implemented the algorithm on a system targeting embedded applications. Results demonstrate that the optimized FastSLAM2.0 algorithm provides consistent localization with respect to a reference. Such systems are suitable for real-time SLAM applications.
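The particle-filter backbone that FastSLAM2.0 builds on can be sketched as follows: propagate particles with a noisy motion model, weight them by the measurement likelihood, and resample with the low-variance (systematic) scheme. Full FastSLAM2.0 additionally keeps a per-particle EKF for every landmark and samples poses from a measurement-informed proposal; the range-only observation model and the noise parameters below are simplifying assumptions.

```python
import numpy as np

def predict(particles, u, noise, dt=1.0):
    """Propagate each particle (x, y, theta) with a noisy unicycle model; u = (v, w)."""
    v = u[0] + np.random.randn(len(particles)) * noise[0]
    w = u[1] + np.random.randn(len(particles)) * noise[1]
    particles[:, 0] += v * dt * np.cos(particles[:, 2])
    particles[:, 1] += v * dt * np.sin(particles[:, 2])
    particles[:, 2] += w * dt
    return particles

def weights(particles, z, landmark, sigma_r):
    """Importance weights from a range measurement z to a known landmark (x, y)."""
    r = np.hypot(landmark[0] - particles[:, 0], landmark[1] - particles[:, 1])
    w = np.exp(-0.5 * ((z - r) / sigma_r) ** 2) + 1e-12   # avoid all-zero weights
    return w / w.sum()

def low_variance_resample(particles, w):
    """Systematic resampling that keeps the particle set well conditioned."""
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx].copy()
```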


2018
Vol 30 (4)
pp. 591-597
Author(s):
Naoki Akai
Luis Yoichi Morales
Hiroshi Murase

This paper presents a teaching-playback navigation method that does not require a consistent map built using simultaneous localization and mapping (SLAM). Many open-source projects related to autonomous navigation, including SLAM, have recently been made available; however, autonomous mobile robot navigation in large-scale environments remains difficult because building a consistent map of such environments is hard. The navigation method presented in this paper represents the environment map with several partial maps. In other words, a complex mapping process is not necessary before autonomous navigation can begin. In addition, the trajectory that the robot travels in the mapping phase can be used directly as the target path. As a result, teaching-playback autonomous navigation can be achieved without any offline processes. We tested the navigation method using log data recorded in the Tsukuba Challenge environment, and the results demonstrate its performance. We provide source code for the navigation method, including the modules required for autonomous navigation (https://github.com/NaokiAkai/AutoNavi).
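Replaying a taught trajectory can be sketched as a simple look-ahead follower over the recorded waypoints: find the closest point on the taught path, pick a target one look-ahead distance farther along, and steer toward it with a pure-pursuit style command. The look-ahead distance, speed, and function names are illustrative assumptions; the released code additionally handles localization against the partial maps, which this sketch omits.

```python
import numpy as np

def nearest_waypoint(path, pose):
    """Index of the taught-trajectory point closest to the current pose (x, y, theta)."""
    d = np.hypot(path[:, 0] - pose[0], path[:, 1] - pose[1])
    return int(d.argmin())

def playback_command(path, pose, lookahead=1.0, v=0.5):
    """Pure-pursuit style command (v, w) steering the robot toward a point
    'lookahead' metres farther along the taught trajectory."""
    i = nearest_waypoint(path, pose)
    # walk forward along the path until the look-ahead distance is reached
    while i + 1 < len(path) and np.hypot(*(path[i] - pose[:2])) < lookahead:
        i += 1
    dx, dy = path[i] - pose[:2]
    alpha = np.arctan2(dy, dx) - pose[2]                  # heading error to the target point
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi
    w = 2.0 * v * np.sin(alpha) / lookahead               # pure-pursuit curvature command
    return v, w
```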

