Bridge Inspection Using Unmanned Aerial Vehicle Based on HG-SLAM: Hierarchical Graph-Based SLAM

2020 ◽  
Vol 12 (18) ◽  
pp. 3022
Author(s):  
Sungwook Jung ◽  
Duckyu Choi ◽  
Seungwon Song ◽  
Hyun Myung

With the increasing demand for autonomous systems in the field of inspection, the use of unmanned aerial vehicles (UAVs) to replace human labor is becoming more frequent. However, the Global Positioning System (GPS) signal is usually denied in environments near or under bridges, which makes the manual operation of a UAV difficult and unreliable in these areas. This paper presents a novel hierarchical graph-based simultaneous localization and mapping (SLAM) method for fully autonomous bridge inspection using an aerial vehicle, as well as a technical method for controlling the UAV during the actual inspection. Because of the harsh environment and the corresponding limitations on GPS usage, a graph-based SLAM approach using a tilted 3D LiDAR (Light Detection and Ranging) sensor and a monocular camera is proposed to localize the UAV and map the target bridge. Each visual-inertial state estimate and the corresponding LiDAR sweep are combined into a single subnode. These subnodes make up a “supernode” consisting of state estimates and accumulated scan data, which provides robust and stable node generation in graph SLAM. The constraints are generated from the LiDAR data using normal distribution transform (NDT) and generalized iterative closest point (G-ICP) matching. The feasibility of the proposed method was verified on two different types of bridges: one on the ground and one offshore.
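A minimal sketch of the subnode/supernode grouping described above, assuming a simple SE(3) pose representation; the class names and the scan-matching stub are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of grouping visual-inertial subnodes into supernodes and
# adding relative-pose constraints between supernodes to a pose graph.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Subnode:
    pose: np.ndarray          # 4x4 visual-inertial state estimate (SE(3))
    scan: np.ndarray          # Nx3 LiDAR sweep in the body frame

@dataclass
class Supernode:
    subnodes: list = field(default_factory=list)

    def accumulated_scan(self) -> np.ndarray:
        """Merge subnode sweeps into one dense scan in the first subnode's frame."""
        base_inv = np.linalg.inv(self.subnodes[0].pose)
        clouds = []
        for sn in self.subnodes:
            rel = base_inv @ sn.pose                      # relative pose of this sweep
            pts = np.c_[sn.scan, np.ones(len(sn.scan))]   # homogeneous points
            clouds.append((rel @ pts.T).T[:, :3])
        return np.vstack(clouds)

def add_edge(graph, i, j, supernodes, match_scans):
    """Add a relative-pose constraint between supernodes i and j.
    `match_scans` stands in for NDT or G-ICP registration (placeholder)."""
    T_ij, information = match_scans(supernodes[i].accumulated_scan(),
                                    supernodes[j].accumulated_scan())
    graph.append((i, j, T_ij, information))
```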

10.29007/zw9k ◽  
2020 ◽  
Author(s):  
Kazuhide Nakata ◽  
Kazuki Umemoto ◽  
Kenji Kaneko ◽  
Ryusuke Fujisawa

This study addresses the development of a robot for the inspection of old bridges. The robot is suspended by wires, and its movement is realized by controlling the wire lengths. It carries a high-definition camera and aims to detect cracks on the concrete surface of the bridge using this camera. Inspection methods using unmanned aerial vehicles (UAVs) have been proposed; compared with a UAV-based method, the wire-suspended robot system has the advantages of insensitivity to wind and the ability to carry heavy equipment, which makes it possible to install a high-definition camera and a cleaning function to find cracks that are otherwise difficult to detect due to dirt.
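A minimal geometric sketch of how wire-length control realizes movement for a cable-suspended platform; the anchor positions and target pose below are illustrative assumptions, not taken from the paper.

```python
# Each wire length is simply the distance from its anchor point to the desired
# platform position; the winches track these length set-points to move the robot.
import numpy as np

anchors = np.array([     # assumed wire attachment points on the bridge girder (m)
    [0.0, 0.0, 10.0],
    [20.0, 0.0, 10.0],
    [0.0, 8.0, 10.0],
    [20.0, 8.0, 10.0],
])

def wire_lengths(target: np.ndarray) -> np.ndarray:
    """Return the wire length commanded to each winch for a target position."""
    return np.linalg.norm(anchors - target, axis=1)

print(wire_lengths(np.array([10.0, 4.0, 5.0])))  # example target under the deck
```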


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3928 ◽  
Author(s):  
Weisong Wen ◽  
Li-Ta Hsu ◽  
Guohao Zhang

Robust and lane-level positioning is essential for autonomous vehicles. As an irreplaceable sensor, light detection and ranging (LiDAR) can provide continuous and high-frequency pose estimation by means of mapping, provided that enough environmental features are available. However, the mapping error can accumulate over time, so LiDAR is usually integrated with other sensors. In diverse urban scenarios, the availability of environmental features relies heavily on the traffic (moving and static objects) and the degree of urbanization. Common LiDAR-based simultaneous localization and mapping (SLAM) demonstrations tend to be studied in light traffic and less urbanized areas. However, their performance can be severely challenged in deeply urbanized cities, such as Hong Kong, Tokyo, and New York, with dense traffic and tall buildings. This paper analyzes the performance of standalone NDT-based graph SLAM and its reliability estimation in diverse urban scenarios, in order to evaluate the relationship between the performance of LiDAR-based SLAM and the scenario conditions. The normal distribution transform (NDT) is employed to calculate the transformation between frames of point clouds, and LiDAR odometry is performed based on the resulting continuous transformations. State-of-the-art graph-based optimization is then used to integrate the LiDAR odometry measurements. 3D building models are generated, and a definition of the degree of urbanization based on the skyplot is proposed. Experiments are conducted in scenarios with different degrees of urbanization and traffic conditions. The results show that the performance of LiDAR-based SLAM using NDT is strongly related to the traffic conditions and the degree of urbanization: the best performance is achieved in the sparse area with normal traffic and the worst in the dense urban area, with 3D positioning error (the sum of horizontal and vertical error) gradients of 0.024 m/s and 0.189 m/s, respectively. The results can serve as a comprehensive benchmark for evaluating the performance of standalone NDT-based graph SLAM in diverse scenarios, which is significant for multi-sensor fusion in autonomous vehicles.
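An illustrative sketch of the pipeline described above: frame-to-frame NDT registration accumulated into LiDAR odometry poses that become pose-graph constraints. The `ndt_register` function is a placeholder for any NDT implementation (e.g., PCL's), not a real API call.

```python
import numpy as np

def lidar_odometry(scans, ndt_register):
    """Accumulate pairwise NDT transforms into poses and pose-graph edges."""
    poses = [np.eye(4)]            # world pose of the first frame
    edges = []                     # (i, i+1, relative transform) constraints
    for i in range(len(scans) - 1):
        T_rel = ndt_register(scans[i + 1], scans[i])   # align frame i+1 to frame i
        poses.append(poses[-1] @ T_rel)                # integrate odometry
        edges.append((i, i + 1, T_rel))
    return poses, edges

# A graph optimizer (g2o, GTSAM, Ceres, ...) would then refine `poses`
# subject to the relative-pose constraints in `edges`.
```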


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3245
Author(s):  
Tianyao Zhang ◽  
Xiaoguang Hu ◽  
Jin Xiao ◽  
Guofeng Zhang

What makes unmanned aerial vehicles (UAVs) intelligent is their capability of sensing and understanding new, unknown environments. Some studies utilize computer vision algorithms such as Visual Simultaneous Localization and Mapping (VSLAM) and Visual Odometry (VO) to sense the environment for pose estimation, obstacle avoidance, and visual servoing. However, understanding the new environment (i.e., making the UAV recognize generic objects) remains an essential scientific problem that lacks a solution. This paper therefore takes a step toward understanding the objects in an unknown environment, with the aim of equipping the UAV with a basic understanding capability for future high-level UAV flock applications. Specifically, first, the proposed understanding method combines machine learning and traditional algorithms to understand the unknown environment through RGB images; second, the You Only Look Once (YOLO) object detection system is integrated (based on TensorFlow) on a smartphone to perceive the position and category of 80 classes of objects in the images; third, the method makes the UAV more intelligent and liberates the operator from labor; fourth, the detection accuracy and latency under working conditions are quantitatively evaluated, and the properties of generality (usable on various platforms), transportability (easily deployed from one platform to another), and scalability (easily updated and maintained) for UAV flocks are qualitatively discussed. The experiments suggest that the method is accurate enough to recognize various objects at high computational speed and exhibits excellent generality, transportability, and scalability.
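A hedged sketch of on-device inference with a TensorFlow Lite object detector, the kind of deployment described above. The model file name and the output layout are assumptions; a YOLO model converted to TFLite may expose its outputs differently.

```python
import numpy as np
import tensorflow as tf

# Hypothetical converted model; the real asset name depends on the conversion step.
interpreter = tf.lite.Interpreter(model_path="yolo_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

def detect(rgb_image: np.ndarray):
    """Run one RGB frame through the detector and return its raw output tensors."""
    h, w = inp["shape"][1], inp["shape"][2]
    frame = tf.image.resize(rgb_image[None, ...], (h, w)).numpy().astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    return [interpreter.get_tensor(o["index"]) for o in outs]
```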


2019 ◽  
Vol 36 (7) ◽  
pp. 1212-1221
Author(s):  
Takahiro Ikeda ◽  
Satoshi Minamiyama ◽  
Shogo Yasui ◽  
Kenichi Ohara ◽  
Akihiko Ichikawa ◽  
...  

2019 ◽  
Vol 11 (23) ◽  
pp. 2827 ◽  
Author(s):  
Narcís Palomeras ◽  
Marc Carreras ◽  
Juan Andrade-Cetto

Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that addresses exploration by combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that makes use of the joint map and state entropy reduction. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle’s uncertainty bounded, thus creating more consistent maps.
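An illustrative sketch of the action-selection idea: choose the candidate view with the largest expected reduction of the joint map-and-state entropy, penalized by travel cost. The entropy and cost models below are placeholder assumptions, not the authors' formulation.

```python
def select_view(candidates, current_entropy, predict_entropy, path_cost, alpha=0.1):
    """Return the candidate view maximizing entropy reduction minus a travel penalty."""
    best, best_utility = None, float("-inf")
    for view in candidates:
        gain = current_entropy - predict_entropy(view)   # expected joint entropy drop
        utility = gain - alpha * path_cost(view)          # trade information vs. distance
        if utility > best_utility:
            best, best_utility = view, utility
    return best
```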


2017 ◽  
Vol 9 (4) ◽  
pp. 283-296 ◽  
Author(s):  
Sarquis Urzua ◽  
Rodrigo Munguía ◽  
Antoni Grau

Using a camera, a micro aerial vehicle (MAV) can perform vision-based navigation in periods or circumstances when GPS is unavailable or only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods represent an excellent alternative, since limitations on the design of the platform, its mobility, and its payload capacity impose considerable restrictions on the computational and sensing resources available to the MAV. However, the use of monocular vision introduces some technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder, and a barometer. The proposed method is based on the theoretical findings obtained from an observability analysis. Experimental results with real data confirm those theoretical findings and show that the proposed method is capable of providing good results with low-cost hardware.
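A minimal sketch of one common way to recover metric scale in monocular SLAM: compare altitude changes reported by the range finder or barometer with the corresponding scale-free altitude changes of the visual estimate. This is an illustrative assumption, not necessarily the estimator derived in the paper.

```python
import numpy as np

def estimate_scale(slam_altitudes, sensor_altitudes):
    """Least-squares scale factor mapping unscaled SLAM altitude deltas to metres."""
    d_slam = np.diff(np.asarray(slam_altitudes))
    d_sens = np.diff(np.asarray(sensor_altitudes))
    return float(np.dot(d_slam, d_sens) / np.dot(d_slam, d_slam))

# metric_position = estimate_scale(z_slam, z_ultrasonic) * slam_position
```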


2014 ◽  
Vol 67 (1) ◽  
Author(s):  
Norashikin M. Thamrin ◽  
Norhashim Mohd. Arshad ◽  
Ramli Adnan ◽  
Rosidah Sam ◽  
Noorfazdli Abd. Razak ◽  
...  

In the simultaneous localization and mapping (SLAM) technique, recognizing and marking landmarks in the environment is very important. Therefore, on a commercial farm, rows of trees, the borderlines of rows, individual trees, and other features are most often used by researchers to realize automation in this field. This paper focuses on detecting trees based on their diameter. A few techniques are available for determining the size of a tree trunk, including laser scanning and image-based measurements. However, those techniques require heavy computation and equipment, which are constraints for a lightweight unmanned aerial vehicle implementation. Therefore, in this paper, the detection of an object using single and multiple infrared sensors on a non-stationary automated vehicle platform is discussed. The experiments were executed on objects of different sizes in order to investigate the effectiveness of the proposed method. The work was initially tested on the ground in a laboratory environment using an omnidirectional vehicle and will later be adapted to a small-scale unmanned aerial vehicle for tree diameter estimation on an agricultural farm. In the current study, comparing the multiple-sensor configuration with the single-sensor orientation showed that the average pass rate in pole recognition was higher for the former, at 93.2 percent versus 74.2 percent.
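A hedged sketch of one simple way a moving platform could estimate a pole or trunk diameter from an infrared range sensor: the object's width is roughly the distance travelled while the sensor keeps returning a "near" reading. The threshold, speed, and sample readings below are illustrative assumptions, not the paper's method.

```python
def estimate_diameter(ranges_m, speed_mps, sample_dt_s, near_threshold_m=0.5):
    """Diameter ~ platform speed x time the IR reading stays below the threshold."""
    hit_samples = sum(1 for r in ranges_m if r < near_threshold_m)
    return hit_samples * sample_dt_s * speed_mps

# Example: 12 consecutive "near" readings at 50 Hz while moving at 0.5 m/s
print(estimate_diameter([0.4] * 12 + [2.0] * 8, speed_mps=0.5, sample_dt_s=0.02))
```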


Author(s):  
Adhiti T. Raman ◽  
Venkat N. Krovi ◽  
Matthias J. A. Schmid

A new class of distributed, autonomous systems is emerging, capable of exploiting multimodal distributed and networked spatial and temporal data (at significantly larger scales). A renaissance autonomy engineer requires proficiency in both traditional engineering concepts and the systems engineering skill set needed to implement the ensuing complex systems. In this paper, we describe the goals, development, and first offering of a scaffolded course, “AuE 893 Autonomy: Science and Systems”, to begin addressing this need. Geared towards graduate engineering students with limited prior exposure, the course complements the concepts from traditional courses (on mobile robotics) with experiential hands-on system-integration efforts (building on the F1tenth.org kits). The staged course structure initially builds upon open-source Robot Operating System (ROS) tutorials on simulated systems (Gazebo/RViz) with networked communication; hardware-in-the-loop realization (with a TurtleBot platform) then aids the exploration (and reinforcement) of autonomy concepts. The course culminates in a final project comprising performance testing of student-team-integrated scaled autonomous remote-control cars (based on the F1tenth.org parts list). All three student teams were successful in navigating around a closed racecourse at speeds of 10–15 miles per hour, using Simultaneous Localization and Mapping (SLAM) for situational awareness and obstacle avoidance. We conclude with a discussion of lessons learned and opportunities for future improvement.
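A minimal rospy sketch of the kind of exercise such a course typically builds on: a node that drives a TurtleBot forward by publishing velocity commands. The topic name and rates are common defaults assumed here for illustration, not course materials.

```python
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("aue893_demo_driver")              # hypothetical node name
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)                              # 10 Hz command loop
    cmd = Twist()
    cmd.linear.x = 0.2                                 # m/s forward
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    drive_forward()
```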

