Robust autonomous flight in cluttered environment using a depth sensor

2020 ◽  
Vol 12 ◽  
pp. 175682932092452
Author(s):  
Liang Lu ◽  
Alexander Yunda ◽  
Adrian Carrio ◽  
Pascual Campoy

This paper presents a novel point-cloud-based collision-free navigation system for unmanned aerial vehicles that outperforms baseline methods, enabling high-speed flight in cluttered environments such as forests or indoor industrial plants. The algorithm takes point cloud information from physical sensors (e.g. lidar, depth camera) and converts it into an occupancy map using Voxblox, which is then used by a rapidly-exploring random tree to generate a finite set of candidate paths. A modified Covariant Hamiltonian Optimization for Motion Planning (CHOMP) objective function is used to select the best candidate and update it. Finally, the best candidate trajectory is generated and sent to a Model Predictive Control (MPC) controller. The proposed navigation strategy is evaluated in four different simulation environments; the results show that the proposed method achieves a higher success rate and a shorter goal-reaching distance than the baseline method.
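To make the described pipeline concrete, the following Python sketch mimics its stages with toy stand-ins: a simple occupancy grid in place of the Voxblox map, randomly perturbed paths in place of the rapidly-exploring random tree, and a CHOMP-like cost for candidate selection. All names and parameters are illustrative assumptions, not the authors' implementation; the selected trajectory would then be handed to the MPC tracking controller.

    # Hypothetical sketch: occupancy map -> candidate paths -> CHOMP-like cost -> best trajectory.
    import numpy as np

    def esdf_distance(grid, point, voxel=0.5):
        # Toy stand-in for a Voxblox ESDF lookup: distance (m) to the nearest occupied voxel.
        occ = np.argwhere(grid) * voxel
        return np.min(np.linalg.norm(occ - point, axis=1)) if len(occ) else np.inf

    def sample_candidates(start, goal, n_candidates=20, n_waypoints=8, spread=1.0):
        # RRT-like stand-in: a straight line perturbed by random offsets.
        t = np.linspace(0.0, 1.0, n_waypoints)[:, None]
        base = start + t * (goal - start)
        return [base + np.random.normal(0.0, spread, base.shape) * t * (1 - t)
                for _ in range(n_candidates)]

    def chomp_like_cost(path, grid, safe_dist=1.0, w_smooth=1.0, w_obs=10.0):
        # Penalise jerky paths and waypoints that come closer than safe_dist to obstacles.
        smooth = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1) ** 2)
        obs = sum(max(0.0, safe_dist - esdf_distance(grid, p)) ** 2 for p in path)
        return w_smooth * smooth + w_obs * obs

    grid = np.zeros((20, 20, 10), dtype=bool)
    grid[10, 8:12, :4] = True                              # a simple box obstacle
    start, goal = np.array([1.0, 5.0, 1.0]), np.array([9.0, 5.0, 1.0])
    best = min(sample_candidates(start, goal), key=lambda p: chomp_like_cost(p, grid))
    # 'best' would then be tracked by the MPC controller.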

Author(s):  
N. F. Mukhtar ◽  
S. Azri ◽  
U. Ujang ◽  
M. G. Cuétara ◽  
G. M. Retortillo ◽  
...  

Abstract. In recent years, 3D models of indoor spaces have become highly demanded as technology develops. Many approaches to 3D visualisation and modelling, especially for indoor environments, have been developed, such as laser scanning, photogrammetry, computer vision and image-based methods. However, most of these techniques rely on the experience of the operator to obtain the best result. Besides that, the equipment is quite expensive and the processing is time-consuming. This paper focuses on the data acquisition and visualisation of a 3D model of an indoor space using a depth sensor. In this study, the EyesMap3D Pro by Ecapture is used to collect 3D data of the indoor spaces. The EyesMap3D Pro depth sensor is able to generate 3D point clouds at high speed and with high mobility due to the portability and light weight of the device. However, attention must be paid to the data acquisition, data processing, visualisation and evaluation of the depth sensor data. Hence, this paper discusses the data processing from extracting features from 3D point clouds to building 3D indoor models. Afterwards, the 3D models are evaluated to ensure their suitability for indoor modelling and indoor mapping applications. In this study, the 3D model was exported to a 3D GIS-ready format for displaying and storing further information about the indoor spaces.
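As a hedged illustration of the kind of processing involved in turning depth-sensor point clouds into indoor models (this is not the EyesMap3D Pro workflow), the sketch below fits a dominant plane, such as a floor or wall, with a basic RANSAC step; all thresholds and the toy data are assumptions.

    # Minimal RANSAC plane extraction from an indoor point cloud (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)

    def ransac_plane(points, n_iter=200, threshold=0.02):
        # Repeatedly sample 3 points, keep the plane with the most inliers.
        best_inliers, best_plane = np.array([], dtype=int), None
        for _ in range(n_iter):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p2 - p1, p3 - p1)
            if np.linalg.norm(normal) < 1e-9:
                continue
            normal /= np.linalg.norm(normal)
            dist = np.abs((points - p1) @ normal)          # point-to-plane distances
            inliers = np.where(dist < threshold)[0]
            if len(inliers) > len(best_inliers):
                best_inliers, best_plane = inliers, (normal, p1)
        return best_plane, best_inliers

    # Toy indoor cloud: a noisy horizontal floor plus scattered clutter.
    floor = np.column_stack([rng.uniform(0, 5, 2000), rng.uniform(0, 4, 2000),
                             rng.normal(0, 0.005, 2000)])
    clutter = rng.uniform(0, 5, (300, 3))
    (plane_normal, plane_point), idx = ransac_plane(np.vstack([floor, clutter]))
    print("floor normal ~", np.round(plane_normal, 2), "| inliers:", len(idx))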


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1228
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Yimin Chen ◽  
Wei Lang ◽  
Tingting Chen ◽  
...  

Ancient pagodas are usually part of popular tourist spots in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures composed of multiple floors, which are separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models that are fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified with four datasets collected by an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit the pagodas' point clouds accurately. The symmetry was assessed by rotating and reflecting the pagodas' point clouds after a complete levelling of the point clouds was achieved using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm from perfect (theoretical) rotational and reflectional symmetry, respectively. This indicates that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in the paper not only works for polygonal pagodas, but can also be readily adapted and implemented for other pagoda-like objects such as transmission towers.
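A minimal sketch of how such symmetry deviations can be quantified, assuming the point cloud has already been levelled and centred on the estimated central axis (taken as the z-axis): rotate or reflect the cloud about the axis and compute the nearest-neighbour RMSE. The fitting of the polygonal pyramid/prism models themselves is not reproduced here.

    # Hedged sketch: RMSE from perfect n-fold rotational and reflectional symmetry.
    import numpy as np
    from scipy.spatial import cKDTree

    def rotational_symmetry_rmse(points, n_fold):
        # Rotate by 2*pi/n about the z-axis and measure nearest-neighbour residuals.
        a = 2.0 * np.pi / n_fold
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        d, _ = cKDTree(points).query(points @ R.T)
        return np.sqrt(np.mean(d ** 2))

    def reflectional_symmetry_rmse(points, plane_normal_xy):
        # Reflect across a vertical plane through the axis with the given in-plane normal.
        n = np.array([plane_normal_xy[0], plane_normal_xy[1], 0.0])
        n /= np.linalg.norm(n)
        reflected = points - 2.0 * (points @ n)[:, None] * n
        d, _ = cKDTree(points).query(reflected)
        return np.sqrt(np.mean(d ** 2))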


2021 ◽  
Vol 10 (6) ◽  
pp. 367
Author(s):  
Simoni Alexiou ◽  
Georgios Deligiannakis ◽  
Aggelos Pallikarakis ◽  
Ioannis Papanikolaou ◽  
Emmanouil Psomiadis ◽  
...  

Analysis of two small semi-mountainous catchments on central Evia island, Greece, highlights the advantages of Unmanned Aerial Vehicle (UAV) and Terrestrial Laser Scanning (TLS) based change detection methods. We use point clouds derived by both methods at two sites (S1 and S2) to analyse the effects of a recent wildfire on soil erosion. The results indicate that topsoil movements on the order of a few centimetres, occurring within a few months, can be estimated. Erosion at S2 is precisely delineated by both methods, yielding a mean value of 1.5 cm within four months. At S1, the comparison of UAV-derived point clouds quantifies annual soil erosion more accurately, showing a maximum annual erosion rate of 48 cm. UAV-derived point clouds appear to be more accurate for displaying and measuring channel erosion, while slope wash is more precisely estimated using TLS. Analysis of point cloud time series is a reliable and fast process for soil erosion assessment, especially in rapidly changing environments with difficult access for direct measurement methods. This study will contribute to proper georesource management by defining the best-suited methodology for soil erosion assessment after a wildfire in Mediterranean environments.
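One common way to quantify such surface change between two survey epochs is a DEM of difference; the Python sketch below grids each epoch's point cloud and subtracts the mean elevations. The cell size, extent and gridding rule are illustrative assumptions, not the parameters used in the study.

    # Simplified DEM-of-difference change detection between two point cloud epochs.
    import numpy as np

    def grid_mean_z(points, cell=0.05, extent=((0.0, 10.0), (0.0, 10.0))):
        # Rasterise a point cloud to a mean-elevation grid (a simple DEM).
        (x0, x1), (y0, y1) = extent
        nx, ny = int((x1 - x0) / cell), int((y1 - y0) / cell)
        zsum, count = np.zeros((nx, ny)), np.zeros((nx, ny))
        ix = np.clip(((points[:, 0] - x0) / cell).astype(int), 0, nx - 1)
        iy = np.clip(((points[:, 1] - y0) / cell).astype(int), 0, ny - 1)
        np.add.at(zsum, (ix, iy), points[:, 2])
        np.add.at(count, (ix, iy), 1)
        return np.where(count > 0, zsum / np.maximum(count, 1), np.nan)

    # epoch_1, epoch_2: (N, 3) point clouds from the pre- and post-event surveys.
    # dod = grid_mean_z(epoch_2) - grid_mean_z(epoch_1)
    # Negative cells indicate surface lowering (erosion); positive cells indicate deposition.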


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for classification of depth sensor data, in contrast to image data, relatively few databases are publicly available and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented and different environmental conditions (e.g., influence of rain, dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
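As an illustration only (this is not the BLAINDER API or its Blender integration), the sketch below shows the basic idea behind generating semantically labelled range data synthetically: rays are cast against objects whose class labels are known, and every returned point inherits the label of the object it hit.

    # Toy synthetic LiDAR-like scan over a labelled scene of spheres (illustrative only).
    import numpy as np

    def ray_sphere_hit(origin, direction, centre, radius):
        # Return the ray parameter t of the nearest forward intersection, or None.
        oc = origin - centre
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 0 else None

    scene = [({"label": "tree"}, np.array([5.0, 1.0, 0.0]), 1.0),
             ({"label": "wall"}, np.array([8.0, -2.0, 0.0]), 2.0)]
    origin = np.zeros(3)
    points, labels = [], []
    for angle in np.linspace(-np.pi / 4, np.pi / 4, 180):   # a horizontal fan of rays
        d = np.array([np.cos(angle), np.sin(angle), 0.0])
        hits = [(t, meta["label"]) for meta, c, r in scene
                if (t := ray_sphere_hit(origin, d, c, r)) is not None]
        if hits:
            t, label = min(hits)
            points.append(origin + t * d)
            labels.append(label)
    # 'points' and 'labels' together form a labelled point cloud ready for export to ML formats.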


Author(s):  
M. Leslar

Using unmanned aerial vehicles (UAVs) for the purposes of conducting high-accuracy aerial surveying has become a hot topic over the last year. One of the most promising means of conducting such a survey involves integrating a high-resolution non-metric digital camera with the UAV and using the principles of digital photogrammetry to produce high-density colorized point clouds. Through the use of stereo imagery, precise and accurate horizontal positioning information can be produced without the need for integration with any type of inertial navigation system (INS). Of course, some form of ground control is needed to achieve this result. Terrestrial LiDAR, either static or mobile, provides the solution. Points extracted from Terrestrial LiDAR can be used as control in the digital photogrammetry solution required by the UAV. In return, the UAV is an affordable solution for filling in the shadows and occlusions typically experienced by Terrestrial LiDAR. In this paper, the accuracies of points derived from a commercially available UAV solution are examined and compared to the accuracies achievable with a commercially available LiDAR solution. It was found that the LiDAR system produced a point cloud that was twice as accurate as the point cloud produced by the UAV's photogrammetric solution. Both solutions gave results within a few centimetres of the control field. In addition, the amount of planar dispersion on the vertical wall surfaces in the UAV point cloud was found to be several times greater than that of the horizontal ground-based UAV points or the LiDAR data.
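A hedged sketch of the two kinds of checks described, with hypothetical function names: RMSE of matched points against a surveyed control field, and planar dispersion measured as the spread of points about a best-fit plane on a wall surface.

    # Illustrative accuracy metrics for comparing UAV photogrammetry and LiDAR point clouds.
    import numpy as np

    def rmse_against_control(measured, control):
        # measured, control: (N, 3) arrays of matched points (cloud vs. surveyed control field).
        return np.sqrt(np.mean(np.sum((measured - control) ** 2, axis=1)))

    def planar_dispersion(wall_points):
        # Fit a plane by PCA; dispersion = standard deviation of point-to-plane distances.
        centred = wall_points - wall_points.mean(axis=0)
        normal = np.linalg.svd(centred, full_matrices=False)[2][-1]
        return np.std(centred @ normal)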


Robotica ◽  
2009 ◽  
Vol 28 (5) ◽  
pp. 637-648 ◽  
Author(s):  
Hamid Teimoori ◽  
Andrey V. Savkin

SUMMARY The problem of wheeled mobile robot (WMR) navigation toward an unknown target in a cluttered environment is considered. The biologically inspired navigation algorithm is the equiangular navigation guidance (ENG) law combined with a local obstacle avoidance technique. The collision avoidance technique uses a system of active sensors which provides the necessary information about obstacles in the vicinity of the robot. In order for the robot to avoid collisions and bypass en-route obstacles, the angle between the instantaneous moving direction of the robot and the direction to a reference point on the surface of the obstacle is kept constant. The performance of the navigation strategy is confirmed with computer simulations and experiments with an ActivMedia Pioneer 3-DX wheeled robot.
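The following Python sketch illustrates the constant-angle boundary-following idea in a simple unicycle simulation: while avoiding, the robot steers so that the angle between its heading and the bearing to the nearest sensed obstacle point is held at a fixed value. The gain, desired angle and geometry are assumptions for illustration, not the published ENG parameters.

    # Toy unicycle simulation of constant-angle obstacle bypassing (illustrative only).
    import numpy as np

    def wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi

    def avoidance_turn_rate(pose, obstacle_point, desired_angle=np.pi / 3, k=2.0):
        # Proportional steering that drives the heading-to-obstacle angle to desired_angle.
        x, y, heading = pose
        bearing = np.arctan2(obstacle_point[1] - y, obstacle_point[0] - x)
        error = wrap((heading - bearing) - desired_angle)
        return -k * error

    pose = np.array([0.0, 0.0, 0.0])           # x, y, heading
    obstacle = np.array([2.0, 0.5])            # nearest sensed obstacle point
    v, dt = 0.5, 0.05
    for _ in range(200):
        w = avoidance_turn_rate(pose, obstacle)
        pose += dt * np.array([v * np.cos(pose[2]), v * np.sin(pose[2]), w])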


2021 ◽  
Vol 55 (4) ◽  
pp. 88-98
Author(s):  
Maria Inês Pereira ◽  
Pedro Nuno Leite ◽  
Andry Maykol Pinto

Abstract. The maritime industry has been following the paradigm shift toward the automation of typically intelligent procedures, with research regarding autonomous surface vehicles (ASVs) having seen an upward trend in recent years. However, this type of vehicle cannot be employed on a full scale until a few challenges are solved. For example, the docking process of an ASV is still a demanding task that currently requires human intervention. This research work proposes a volumetric convolutional neural network (vCNN) for the detection of docking structures from 3-D data, developed according to a balance between precision and speed. Another contribution of this article is a set of synthetically generated data regarding the context of docking structures. The dataset is composed of LiDAR point clouds, stereo images, GPS, and Inertial Measurement Unit (IMU) information. Several robustness tests carried out with different levels of Gaussian noise demonstrated an average accuracy of 93.34% and a deviation of 5.46% for the worst case. Furthermore, the system was fine-tuned and evaluated in a real commercial harbor, achieving an accuracy of over 96%. The developed classifier is able to detect different types of structures and works faster than other state-of-the-art methods evaluated in real environments.
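The sketch below shows a small volumetric CNN of the kind described, operating on a binary occupancy grid voxelized from a point cloud. The layer sizes, grid resolution and class count are illustrative assumptions, not the authors' vCNN architecture.

    # Minimal volumetric CNN over a voxelized point cloud (PyTorch, illustrative only).
    import torch
    import torch.nn as nn

    class TinyVoxelNet(nn.Module):
        def __init__(self, n_classes=2, grid=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            self.classifier = nn.Linear(32 * (grid // 4) ** 3, n_classes)

        def forward(self, x):                  # x: (batch, 1, grid, grid, grid) occupancy
            return self.classifier(self.features(x).flatten(1))

    def voxelize(points, grid=32, extent=10.0):
        # Map an (N, 3) point cloud within [-extent/2, extent/2]^3 to a binary occupancy grid.
        vox = torch.zeros(1, 1, grid, grid, grid)
        idx = ((points / extent + 0.5) * grid).long().clamp(0, grid - 1)
        vox[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
        return vox

    logits = TinyVoxelNet()(voxelize(torch.randn(500, 3)))   # e.g. dock / not-dock scores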


2019 ◽  
Vol 9 (9) ◽  
pp. 1916 ◽  
Author(s):  
Tiantian Huang ◽  
Hui Jiang ◽  
Zhuoyang Zou ◽  
Lingyun Ye ◽  
Kaichen Song

In order to solve the problems of filter divergence and low accuracy in Kalman filter (KF) applications on a high-speed unmanned aerial vehicle (UAV), this paper proposes a new integrated robust adaptive Kalman filter: the strong adaptive Kalman filter (SAKF). Simulations of two high-dynamic conditions and a practical experiment were designed to verify the new multi-sensor data fusion algorithm. The performances of the Sage–Husa adaptive Kalman filter (SHAKF), the strong tracking filter (STF), the H∞ filter and the SAKF were then compared. The results of the simulations and the practical experiment show that the SAKF can automatically select its filtering process under different conditions, according to an anomaly criterion. The SAKF combines the advantages of the SHAKF, the H∞ filter and the STF, and is characterised by high accuracy, robustness and good tracking ability. The research shows that the SAKF is more suitable for high-speed UAV navigation than single-filter algorithms.
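A simplified, hypothetical illustration of innovation-based switching in a Kalman filter: when the normalised (Mahalanobis) innovation exceeds a gate, the update moves to a more conservative branch. The actual SAKF combines SHAKF, STF and H∞ modes; only the switching idea is sketched here.

    # One predict/update step of a linear KF with a simple anomaly-triggered robust branch.
    import numpy as np

    def kf_step(x, P, z, F, H, Q, R, anomaly_gate=9.0):
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Innovation and its covariance
        innov = z - H @ x_pred
        S = H @ P_pred @ H.T + R
        # Anomaly criterion: Mahalanobis distance of the innovation
        if innov.T @ np.linalg.inv(S) @ innov > anomaly_gate:
            S = H @ P_pred @ H.T + 10.0 * R    # robust branch: trust the measurement less
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ innov
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new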


Author(s):  
Amirhossein Fereidountabar ◽  
Gian Carlo Cardarilli ◽  
Luca Di Nunzio ◽  
Rocco Fazzolari
