Simultaneous localization and mapping of autonomous underwater vehicle using looking forward sonar

2012 ◽  
Vol 17 (1) ◽  
pp. 91-97 ◽  
Author(s):  
Wen-jing Zeng ◽  
Lei Wan ◽  
Tie-dong Zhang ◽  
Shu-ling Huang

2013 ◽ 
Vol 427-429 ◽  
pp. 1670-1673 ◽  
Author(s):  
Hao Zhang ◽  
Bo He ◽  
Ning Luan

The sparse extended information filter-based simultaneous localization and mapping (SEIF-based SLAM) algorithm offers significant advantages in computation time and memory consumption. However, SEIF-SLAM is prone to overconfidence because of its sparsification strategy. In this paper we analyze the time consumption and information loss of the sparsification operation and derive the optimal time at which to sparsify. To verify the feasibility of this sparsification scheme, a sea trial with the autonomous underwater vehicle (AUV) C-Ranger was conducted in Tuandao Bay. The experimental results show that the improved algorithm is more effective and accurate compared with other methods.
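The trade-off the abstract describes, between the cost of sparsifying the information matrix and the information lost by pruning links, can be made concrete with a small sketch. This is an illustrative reconstruction, not the authors' algorithm: the link-strength threshold, the KL-divergence loss measure, and the `should_sparsify` budget are assumptions introduced here.

```python
# Illustrative sketch of a sparsification-timing rule for an information-filter
# SLAM back end. All thresholds and helper names are assumptions, not taken
# from the paper.
import numpy as np

def sparsify(info_matrix, threshold=0.05):
    """Zero out weak off-diagonal links so the information matrix stays sparse."""
    H = info_matrix.copy()
    scale = np.sqrt(np.outer(np.diag(H), np.diag(H)))   # normalize by diagonal strength
    weak = (np.abs(H) / scale) < threshold
    np.fill_diagonal(weak, False)                        # never prune the diagonal
    H[weak] = 0.0
    return H

def kl_information_loss(H, H_sparse):
    """KL divergence between the exact and sparsified Gaussians (same mean),
    used as the accuracy cost of sparsifying."""
    d = H.shape[0]
    _, logdet_H = np.linalg.slogdet(H)
    _, logdet_Hs = np.linalg.slogdet(H_sparse)
    return 0.5 * (np.trace(H_sparse @ np.linalg.inv(H)) - d + logdet_H - logdet_Hs)

def should_sparsify(H, active_links, max_links=50, max_loss=0.1):
    """Sparsify only when the active-link count exceeds a budget AND the
    projected information loss stays below a tolerance."""
    if active_links <= max_links:
        return False
    return kl_information_loss(H, sparsify(H)) < max_loss
```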


2009 ◽  
Vol 43 (2) ◽  
pp. 33-47 ◽  
Author(s):  
Hunter C. Brown ◽  
Ayoung Kim ◽  
Ryan M. Eustice

This article provides a general overview of the autonomous underwater vehicle (AUV) research thrusts being pursued within the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Founded in 2007, PeRL's research centers on improving AUV autonomy via algorithmic advancements in environmentally based perceptual feedback for real-time mapping, navigation, and control. Our three major research areas are (1) real-time visual simultaneous localization and mapping (SLAM), (2) cooperative multi-vehicle navigation, and (3) perception-driven control. Pursuant to these research objectives, PeRL has developed a new multi-AUV SLAM testbed based upon a modified Ocean-Server Iver2 AUV platform. PeRL upgraded the vehicles with additional navigation and perceptual sensors for underwater SLAM research. In this article, we detail our testbed development, provide an overview of our major research thrusts, and put into context how our modified AUV testbed enables experimental real-world validation of these algorithms.


2020 ◽  
Vol 8 (6) ◽  
pp. 437
Author(s):  
Francisco Bonin-Font ◽  
Antoni Burguera

State-of-the-art approaches to multi-robot localization and mapping still leave multiple issues to be improved, offering a wide range of opportunities for research and technology. This paper presents a new algorithm for visual multi-robot simultaneous localization and mapping, used to join, in a common reference frame, the trajectories of several robots participating simultaneously in a common mission. One of the main problems in centralized configurations, where a leader receives data from the rest of the robots, is the limited communication bandwidth, which delays data transmission, can quickly become overloaded, and restricts reactive actions. This paper presents a new approach to multi-robot visual graph Simultaneous Localization and Mapping (SLAM) that builds a joint topological map, which evolves in different directions according to the trajectories of the different robots. The main contributions of this strategy are: (a) reducing the visual data exchanged among the agents to small-dimensional hashes, thus diminishing the data delivery time; (b) running two different phases of SLAM, intra- and inter-session, each with its own loop-closing task, with a trajectory-joining action in between and high flexibility in their combination; (c) simplifying the complete SLAM process, in concept and implementation, and targeting it at correcting the trajectories of several robots, initially and continuously estimated by a visual odometer; and (d) executing the process online, in order to ensure that the mission is accomplished along the planned trajectories and at the planned points. Preliminary results included in this paper show a promising performance of the algorithm on visual datasets obtained at different points along the coast of the Balearic Islands, either by divers or by an Autonomous Underwater Vehicle (AUV) equipped with cameras.
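The bandwidth-saving idea in contribution (a), exchanging compact hashes of the visual data instead of full images, can be sketched as follows. This is an illustrative approximation rather than the paper's actual hash function: the difference-hash construction, the 16x16 bit grid, and the Hamming-distance matching threshold are assumptions made for the example.

```python
# Illustrative sketch: compress each camera frame into a short binary hash so
# robots exchange tens of bytes instead of full images, then match hashes by
# Hamming distance to propose inter-robot loop-closure candidates.
import numpy as np

def image_hash(gray, grid=16):
    """Difference-style hash: downsample the grayscale image to grid x (grid+1)
    and threshold horizontal gradients, yielding grid*grid bits per image."""
    h, w = gray.shape
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid + 1).astype(int)
    small = gray[np.ix_(ys, xs)].astype(np.float32)
    bits = (small[:, 1:] > small[:, :-1]).astype(np.uint8)
    return np.packbits(bits.ravel())          # grid*grid/8 bytes to transmit

def hamming(a, b):
    """Number of differing bits between two packed hashes."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def loop_closure_candidates(own_hashes, received_hashes, max_ratio=0.25):
    """Propose (i, j) pairs whose hashes differ in fewer than max_ratio of the bits."""
    n_bits = own_hashes[0].size * 8
    pairs = []
    for i, ha in enumerate(own_hashes):
        for j, hb in enumerate(received_hashes):
            if hamming(ha, hb) < max_ratio * n_bits:
                pairs.append((i, j))
    return pairs
```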


2022 ◽  
Vol 15 ◽  
Author(s):  
Chensheng Cheng ◽  
Can Wang ◽  
Dianyu Yang ◽  
Weidong Liu ◽  
Feihu Zhang

SLAM (Simultaneous Localization And Mapping) plays a vital role in the navigation tasks of an AUV (Autonomous Underwater Vehicle). However, due to the vast amount of imaging sonar data and the inherently high latency of some acoustic equipment, implementing real-time underwater SLAM on a small AUV is a considerable challenge. This paper presents a filter-based methodology for SLAM in underwater environments. First, a multi-beam forward-looking sonar (MFLS) is used to extract environmental features. The acquired sonar image is converted to a sparse point cloud through threshold segmentation and distance-constrained filtering, which mitigates the computational explosion caused by the large amount of raw data. Second, the DVL, IMU, and sonar data are fused, and a Rao-Blackwellized particle filter (RBPF)-based SLAM method is used to estimate the AUV pose and generate an occupancy grid map. To verify the proposed algorithm, an underwater vehicle equipped as an experimental platform conducted field tests in both an experimental pool and a natural lake. The experiments show that the proposed approach achieves better performance in both state estimation and suppression of divergence.
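The feature-extraction front end described above, thresholding the sonar image and applying a distance constraint to obtain a sparse point cloud, can be sketched roughly as follows. This is a minimal illustration under assumed parameters (intensity threshold, range limits, field of view), not the authors' implementation.

```python
# Illustrative sketch: convert a multi-beam forward-looking sonar frame into a
# sparse 2-D point cloud by intensity thresholding plus a distance constraint,
# so the downstream RBPF-SLAM only processes a few hundred points per frame.
import numpy as np

def sonar_to_point_cloud(polar_img, max_range=30.0, fov_deg=120.0,
                         intensity_thresh=0.6, min_range=1.0):
    """polar_img: (n_range_bins, n_beams) array of normalized echo intensities.
    Returns an (N, 2) array of (x, y) points in the sonar frame."""
    n_bins, n_beams = polar_img.shape
    ranges = np.linspace(0.0, max_range, n_bins)
    bearings = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_beams))

    # 1) Threshold segmentation: keep only strong echoes.
    bin_idx, beam_idx = np.nonzero(polar_img > intensity_thresh)

    # 2) Distance constraint: drop returns that are too close (self-echo) or too far.
    r = ranges[bin_idx]
    keep = (r > min_range) & (r < max_range)
    r, theta = r[keep], bearings[beam_idx[keep]]

    # 3) Polar -> Cartesian points for the occupancy-grid / RBPF front end.
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))
```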

