B-splines for Purely Vision-based Localization and Mapping on Non-holonomic Ground Vehicles

Author(s):  
Kun Huang ◽  
Yifu Wang ◽  
Laurent Kneip

2019 ◽
Vol 9 (7) ◽  
pp. 1428 ◽  
Author(s):  
Ran Wang ◽  
Xin Wang ◽  
MingMing Zhu ◽  
YinFu Lin

Autonomous underwater vehicles (AUVs) are widely used, but guaranteeing their localization accuracy underwater remains a difficult challenge. In this paper, a novel method is proposed to improve the accuracy of vision-based localization systems in feature-poor underwater environments. The traditional stereo visual simultaneous localization and mapping (SLAM) algorithm, which relies on detecting and tracking point features, is used to estimate the camera pose and build a map of the environment. However, it is hard to find enough reliable point features underwater, which degrades the algorithm's performance. To resolve this problem, a stereo point-and-line SLAM (PL-SLAM) algorithm, which exploits point and line information simultaneously, was investigated in this study. Experiments with an AR (augmented reality) marker were carried out to validate the accuracy and effectiveness of the investigated algorithm.
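As a rough illustration of the point-plus-line front-end that PL-SLAM-style systems build on, the sketch below extracts both feature types from a stereo pair. It assumes OpenCV with the contrib modules (opencv-contrib-python); the image file names are placeholders, and only point matching is shown (PL-SLAM itself matches lines with LBD descriptors).

```python
# Minimal sketch of a point + line stereo front-end; not the authors' code.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Point features: ORB keypoints and binary descriptors in both views.
orb = cv2.ORB_create(nfeatures=1000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Line features: segments often survive in low-texture underwater scenes
# where corner-like points are scarce.
fld = cv2.ximgproc.createFastLineDetector()
lines_l = fld.detect(left)    # N x 1 x 4 array of (x1, y1, x2, y2), or None
lines_r = fld.detect(right)

# Match point descriptors across the stereo pair (line matching omitted).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_l, des_r)
print(len(kp_l), "points,",
      0 if lines_l is None else len(lines_l), "lines in the left view")
```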


2020 ◽  
Vol 69 (10) ◽  
pp. 10642-10655
Author(s):  
Dunjin Chen ◽  
Jian Weng ◽  
Feiran Huang ◽  
Jian Zhou ◽  
Yijun Mao ◽  
...  

2012 ◽  
Vol 22 ◽  
pp. 106-112
Author(s):  
Alfredo Toriz ◽  
Abraham Sánchez ◽  
Maria A. Osorio

This paper describes a simultaneous planning, localization, and mapping (SPLAM) methodology focused on the global localization problem, in which the robot explores the environment efficiently while also considering the requirements of the simultaneous localization and mapping algorithm. The method is based on the randomized incremental generation of a data structure called the Sensor-based Random Tree, which represents a roadmap of the explored area with an associated safe region. A continuous localization procedure based on B-spline features of the safe region is integrated into the scheme.
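To make the B-spline representation of a safe region concrete, here is a minimal sketch that fits a closed cubic B-spline to boundary samples and evaluates it densely. The boundary points are synthetic placeholders, not data from the paper; SciPy's splprep/splev are used for the fit.

```python
# Minimal sketch: represent a safe-region boundary as a periodic B-spline.
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical boundary samples (e.g., from a range scan around the robot).
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 5.0 * np.cos(theta) + 0.3 * np.random.randn(40)
y = 3.0 * np.sin(theta) + 0.3 * np.random.randn(40)

# Fit a periodic (closed) cubic B-spline; s controls smoothing.
tck, u = splprep([x, y], s=2.0, per=True)

# Evaluate the spline densely; localization can then match new scans
# against this compact, continuous boundary representation.
u_fine = np.linspace(0, 1, 200)
bx, by = splev(u_fine, tck)
print("spline evaluated at", len(bx), "boundary points")
```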


Author(s):  
Mahdi Haghshenas-Jaryani ◽  
Hakki Erhan Sevil ◽  
Liang Sun

This paper presents the concept of teaming up snake robots, as unmanned ground vehicles (UGVs), with unmanned aerial vehicles (UAVs) for autonomous navigation and obstacle avoidance. The snake robots navigate cluttered environments based on visual servoing from a co-robot UAV. It is assumed that the snake robots have no means to map the surrounding environment, detect obstacles, or self-localize; these tasks are allocated to the UAV, which uses visual sensors to track the UGVs. The obtained images are used for geo-localization and for mapping the environment. Computer vision methods are utilized to detect obstacles, find obstacle clusters, and then build the map via Probabilistic Threat Exposure Map (PTEM) construction. A path-planner module determines the heading direction and velocity of the snake robot, and a combined heading-velocity controller makes the snake robot follow the desired trajectories using the lateral undulatory gait. A series of simulations was carried out to analyze the snake robot's maneuverability and to provide a proof of concept by navigating the snake robot in an environment with two obstacles based on UAV visual servoing. The results show the feasibility of the concept and the effectiveness of the integrated system for navigation.
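One common way to build a threat-exposure map of this kind is to let each detected obstacle cluster contribute a Gaussian threat field and sum the fields over a grid. The sketch below follows that scheme; the cluster positions and covariances are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of a Probabilistic Threat Exposure Map (PTEM) as a sum of
# Gaussian threat fields, one per obstacle cluster; assumptions noted above.
import numpy as np

def ptem(grid_x, grid_y, clusters):
    """Sum the Gaussian threat contribution of each obstacle cluster."""
    exposure = np.zeros_like(grid_x)
    for mean, cov in clusters:
        inv = np.linalg.inv(cov)
        dx = grid_x - mean[0]
        dy = grid_y - mean[1]
        # Quadratic form (p - mu)^T cov^{-1} (p - mu), evaluated per cell.
        q = inv[0, 0] * dx**2 + 2 * inv[0, 1] * dx * dy + inv[1, 1] * dy**2
        exposure += np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return exposure

xs, ys = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
obstacles = [(np.array([3.0, 4.0]), np.diag([0.5, 0.5])),
             (np.array([7.0, 6.0]), np.diag([1.0, 0.4]))]
threat = ptem(xs, ys, obstacles)
# A planner can steer the snake robot along low-exposure cells of `threat`.
print("peak exposure:", threat.max())
```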


2015 ◽  
Vol 2015 (0) ◽  
pp. _2A2-M06_1-_2A2-M06_3
Author(s):  
Ankit Ravankar ◽  
Abhijeet Ravankar ◽  
Yukinori Kobayashi ◽  
Lv Jixin ◽  
Takanori Emaru

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5419 ◽  
Author(s):  
Xiao Liu ◽  
Lei Zhang ◽  
Shengran Qin ◽  
Daji Tian ◽  
Shihan Ouyang ◽  
...  

Reducing the cumulative error in simultaneous localization and mapping (SLAM) is a long-standing problem. In this paper, to improve the localization and mapping accuracy of ground vehicles, we propose a novel optimized lidar odometry and mapping method that uses ground plane constraints and SegMatch-based loop detection. We use only the lidar point cloud to estimate the pose between consecutive frames, without any other sensors such as the Global Positioning System (GPS) or an inertial measurement unit (IMU). First, the ground plane constraints are used to reduce matching errors. Then, building on the more accurate odometry obtained from lidar odometry and mapping (LOAM), SegMatch performs segment matching and loop detection to optimize the global pose. A neighborhood search is also used to accomplish loop detection when segment matching fails. Finally, the proposed method is evaluated and compared with existing 3D lidar SLAM methods. Experimental results show that the proposed method achieves low-drift localization and dense 3D point cloud map construction.
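The ground-plane constraint step can be approximated by segmenting the dominant plane in each scan with RANSAC. The sketch below does this with Open3D's segment_plane on a synthetic cloud; it is a stand-in for the paper's pipeline, not the authors' implementation.

```python
# Minimal sketch: extract the ground plane from a lidar scan with RANSAC.
import numpy as np
import open3d as o3d

# Synthetic scan: a nearly flat ground plus scattered non-ground points.
ground = np.random.rand(500, 3) * [20.0, 20.0, 0.02]
clutter = np.random.rand(200, 3) * [20.0, 20.0, 5.0]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack([ground, clutter]))

# RANSAC plane fit: ax + by + cz + d = 0.
plane, inliers = pcd.segment_plane(distance_threshold=0.05,
                                   ransac_n=3,
                                   num_iterations=1000)
a, b, c, d = plane
print(f"ground plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
      f"{len(inliers)} inliers")
# Constraining consecutive poses to share this plane suppresses the
# vertical drift that pure frame-to-frame matching accumulates.
```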


2019 ◽  
Vol 38 (14) ◽  
pp. 1549-1559 ◽  
Author(s):  
Maxime Ferrera ◽  
Vincent Creuze ◽  
Julien Moras ◽  
Pauline Trouvé-Peloux

We present a new dataset dedicated to the development of simultaneous localization and mapping methods for underwater vehicles navigating close to the seabed. The data sequences composing this dataset were recorded in three different environments: a harbor at a depth of a few meters, a first archeological site at a depth of 270 meters, and a second archeological site at a depth of 380 meters. The data acquisition was performed using remotely operated vehicles equipped with a monocular monochromatic camera, a low-cost inertial measurement unit, a pressure sensor, and a computing unit, all embedded in a single enclosure. The sensors' measurements are recorded synchronously on the computing unit, and 17 sequences have been created from the acquired data. These sequences are made available both as ROS bags and as raw data. For each sequence, a trajectory has also been computed offline using a structure-from-motion library to allow comparison with real-time localization methods. With the release of this dataset, we wish to provide data that are difficult to acquire and to encourage the development of vision-based localization methods dedicated to the underwater environment. The dataset can be downloaded from: http://www.lirmm.fr/aqualoc/
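Since the sequences ship as ROS bags, a typical first step is to enumerate and replay the recorded topics. The sketch below assumes a ROS 1 environment with the rosbag Python package; the bag file name and topic suffixes are placeholders, so check the actual names with `rosbag info` on a downloaded sequence.

```python
# Minimal sketch of reading one AQUALOC sequence from its ROS bag.
import rosbag

bag = rosbag.Bag("aqualoc_sequence_01.bag")          # placeholder file name
print(bag.get_type_and_topic_info().topics.keys())   # list available topics

# Iterate over camera, IMU, and pressure messages in time order.
for topic, msg, t in bag.read_messages():
    if topic.endswith("image_raw"):      # placeholder topic suffix
        pass  # feed the monochrome frame to the SLAM front-end
    elif topic.endswith("imu"):
        pass  # integrate angular velocity / acceleration
    elif topic.endswith("pressure"):
        pass  # convert pressure to depth for a vertical constraint
bag.close()
```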

