Visual navigation of an autonomous underwater vehicle based on the global search of image correspondences

2018 ◽  
Vol 42 (3) ◽  
pp. 457-467 ◽  
Author(s):  
A. N. Kamaev ◽  
D. A. Karmanov

The task of autonomous underwater vehicle (AUV) navigation is considered in this paper. Images obtained from an onboard stereo camera are used to build point clouds attached to particular AUV positions. Quantized SIFT descriptors of the points are stored in a metric tree to organize an efficient search procedure using a best-bin-first approach. Correspondences for a new point cloud are searched for in a compact group of point clouds that have the largest number of similar descriptors stored in the tree. The new point cloud can thus be positioned relative to the other clouds without any prior information about the AUV position or its uncertainty. This approach increases the reliability of the AUV navigation system and makes it insensitive to data losses, textureless seafloor regions, and long passes without trajectory intersections. Several algorithms are described in the paper: an algorithm for computing point clouds, an algorithm for establishing correspondences between point clouds, and an algorithm for building groups of potentially linked point clouds to speed up the global search for correspondences. A general navigation algorithm consisting of three parallel subroutines (image adding, search tree updating, and global optimization) is also presented. The proposed navigation system is tested on real and synthetic data. Tests on real data showed that the trajectory can be built even for an image sequence with 60% data losses, in which successive images have either small or zero overlap. Tests on synthetic data showed that the constructed trajectory stays close to the true one even for long missions. The average image-processing speed of the proposed navigation system is about 3 frames per second on a mid-range desktop CPU.
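The best-bin-first retrieval step can be illustrated with a minimal sketch (a hypothetical illustration, not the authors' implementation): a kd-tree built over descriptor vectors whose branches are explored in order of their distance to the splitting plane, with the search capped at a fixed number of bins.

```python
import heapq

def build_kdtree(points, depth=0):
    # Each node is (point, split_axis, left_subtree, right_subtree).
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def bbf_nearest(root, query, max_bins=50):
    # Best-bin-first: descend the near branch, queue far branches keyed
    # by squared distance to the splitting plane, and stop after
    # max_bins branch visits (an approximate nearest neighbour).
    best, best_d = None, float("inf")
    counter = 0                       # tiebreak so nodes never compare
    heap = [(0.0, counter, root)]
    visits = 0
    while heap and visits < max_bins:
        _, _, node = heapq.heappop(heap)
        while node is not None:
            point, axis, left, right = node
            d = sum((a - b) ** 2 for a, b in zip(point, query))
            if d < best_d:
                best, best_d = point, d
            diff = query[axis] - point[axis]
            near, far = (left, right) if diff < 0 else (right, left)
            if far is not None:
                counter += 1
                heapq.heappush(heap, (diff * diff, counter, far))
            node = near
        visits += 1
    return best
```

With a generous `max_bins` the search degenerates to an exact nearest-neighbour query; small caps trade accuracy for speed, which is the point of the best-bin-first strategy on high-dimensional SIFT descriptors.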

2020 ◽  
Vol 12 (8) ◽  
pp. 1240 ◽  
Author(s):  
Xabier Blanch ◽  
Antonio Abellan ◽  
Marta Guinau

The emerging use of photogrammetric point clouds in three-dimensional (3D) monitoring processes has revealed some constraints compared with the use of LiDAR point clouds. Point clouds (PCs) obtained by time-lapse photogrammetry often have lower density and precision, especially when Ground Control Points (GCPs) are not available or the camera system cannot be properly calibrated. This paper presents a new workflow called Point Cloud Stacking (PCStacking) that overcomes these restrictions by making the most of the iterative solutions for both camera position estimation and internal calibration parameters obtained during bundle adjustment. The basic principle of the stacking algorithm is straightforward: it computes the median of the Z coordinates of each point across multiple photogrammetric models, giving a resulting PC with greater precision than any of the individual PCs. The different models are reconstructed from images taken simultaneously from at least five points of view, reducing the systematic errors associated with the photogrammetric reconstruction workflow. The algorithm was tested using both a synthetic point cloud and a real 3D dataset from a rock cliff. The synthetic data were created using mathematical functions that emulate photogrammetric models. Real data were obtained with very low-cost photogrammetric systems developed specifically for this experiment. The resulting point clouds improved when the algorithm was applied in both synthetic and real experiments; e.g., the 25th and 75th error percentiles were reduced from 3.2 cm to 1.4 cm in synthetic tests and from 1.5 cm to 0.5 cm under real conditions.
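The core median-of-Z principle can be sketched in a few lines (a simplified illustration that assumes the models' points are already matched row-for-row, which in practice requires the correspondence step of the full workflow):

```python
import numpy as np

def pc_stack(models):
    # models: list of (N, 3) arrays, each a photogrammetric
    # reconstruction of the same scene, rows assumed point-matched.
    stack = np.stack(models)                          # (M, N, 3)
    result = stack[0].copy()                          # X, Y from a reference model
    result[:, 2] = np.median(stack[:, :, 2], axis=0)  # robust Z per point
    return result
```

Using the median rather than the mean is what makes the stack robust to the occasional grossly wrong Z value in a single reconstruction.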


2016 ◽  
Vol 2016 ◽  
pp. 1-16 ◽  
Author(s):  
Ricardo Pérez-Alcocer ◽  
L. Abril Torres-Méndez ◽  
Ernesto Olguín-Díaz ◽  
A. Alejandro Maldonado-Ramírez

This paper presents a vision-based navigation system for an autonomous underwater vehicle in semistructured environments with poor visibility. In terrestrial and aerial applications, the use of visual systems mounted on robotic platforms as control sensor feedback is commonplace. However, vision-based robotic tasks for underwater applications are still not widely considered, as the images captured in this type of environment tend to be blurred and/or color-depleted. To tackle this problem, we have adapted the lαβ color space to identify features of interest in underwater images even under extreme visibility conditions. To guarantee the stability of the vehicle at all times, a model-free robust control is used. We have validated the performance of our visual navigation system in real environments, showing the feasibility of our approach.
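For reference, the standard lαβ transform (Ruderman et al., as popularized by Reinhard's color transfer work) maps RGB to LMS cone responses, log-compresses, and applies a decorrelating rotation; the sketch below uses the textbook matrix and is not necessarily the paper's adapted variant:

```python
import numpy as np

# Standard RGB -> LMS matrix from the color-transfer literature.
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

def rgb_to_lab(rgb):
    # rgb: (N, 3) pixels in [0, 1]. Returns (N, 3) lαβ values.
    lms = rgb @ RGB2LMS.T
    lms = np.log10(np.maximum(lms, 1e-6))  # guard against log(0)
    L, M, S = lms[:, 0], lms[:, 1], lms[:, 2]
    l = (L + M + S) / np.sqrt(3.0)         # achromatic channel
    alpha = (L + M - 2.0 * S) / np.sqrt(6.0)  # yellow-blue opponent
    beta = (L - M) / np.sqrt(2.0)          # red-green opponent
    return np.stack([l, alpha, beta], axis=1)
```

Because the three lαβ channels are nearly decorrelated, thresholds on the chromatic channels α and β remain discriminative even when underwater attenuation compresses the raw RGB values.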


2021 ◽  
Vol 13 (15) ◽  
pp. 2868
Author(s):  
Yonglin Tian ◽  
Xiao Wang ◽  
Yu Shen ◽  
Zhongzheng Guo ◽  
Zilei Wang ◽  
...  

Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual–real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time that model training has been changed from an open-loop to a closed-loop mechanism: feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Under this framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placing method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, i.e., ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for annotating newly added objects, the models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision on 3D detection of the model trained with real data.
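The group-based placing idea, inserting a virtual object's points into a real scan at a plausible ground position, can be sketched as follows (a hypothetical simplification: the real method locates candidate positions group-wise, whereas this sketch just rests one object on the locally lowest scene point):

```python
import numpy as np

def place_virtual_object(scene, obj, xy, radius=2.0):
    # scene: (N, 3) real scan; obj: (K, 3) virtual CAD point cloud;
    # xy: candidate (x, y) drop position. Hypothetical ground estimate:
    # the lowest scene point within `radius` metres of the candidate.
    xy = np.asarray(xy, dtype=float)
    d = np.linalg.norm(scene[:, :2] - xy, axis=1)
    nearby = scene[d < radius]
    ground_z = nearby[:, 2].min() if len(nearby) else scene[:, 2].min()
    shifted = obj.copy()
    shifted[:, 0] += xy[0]
    shifted[:, 1] += xy[1]
    shifted[:, 2] += ground_z - obj[:, 2].min()  # base sits on the ground
    return np.vstack([scene, shifted])
```

The merged cloud inherits ground-truth boxes for the inserted object for free, which is what drives the near-zero annotation cost reported for the hybrid datasets.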


Author(s):  
T. O. Chan ◽  
D. D. Lichti

Lamp poles are among the most abundant highway and community components in modern cities. Their supporting parts are primarily tapered octagonal cones specifically designed for wind resistance. The geometry and positions of lamp poles are important information for various applications: for example, monitoring the deformation of aged lamp poles, maintaining an efficient highway GIS, and facilitating possible feature-based calibration of mobile LiDAR systems. In this paper, we present a novel geometric model for octagonal lamp poles. The model consists of seven parameters, including a rotation about the z-axis, and the points are constrained by the trigonometric properties of 2D octagons after the rotations are applied. For geometric fitting of a lamp pole point cloud captured by a terrestrial LiDAR, accurate initial parameter values are essential. They can be estimated by first fitting the points to a circular cone model, followed by some basic point cloud processing techniques. The model was verified by fitting both simulated and real data. The real data include several lamp pole point clouds captured by (1) a Faro Focus 3D and (2) a Velodyne HDL-32E. The fitting results using the proposed model are promising: up to 2.9 mm improvement in fitting accuracy was realized for the real lamp pole point clouds compared with the conventional circular cone model. The overall result suggests that the proposed model is appropriate and rigorous.
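The trigonometric constraint on a 2D octagonal cross-section can be sketched as a residual function (a simplified, hypothetical version of the model: it keeps the z-rotation and a linear taper but omits the tilt and translation parameters of the full seven-parameter formulation):

```python
import numpy as np

def octagon_radius(theta, apothem):
    # Centre-to-boundary distance of a regular octagon in direction
    # theta, exploiting its pi/4 rotational symmetry. The apothem is
    # the perpendicular distance from the centre to a flat side.
    phi = np.mod(theta, np.pi / 4) - np.pi / 8
    return apothem / np.cos(phi)

def cone_residuals(points, x0, y0, kappa, a0, taper):
    # points: (N, 3) lamp-pole scan. kappa: rotation about the z-axis;
    # a0: apothem at z = 0; taper: linear shrink of the apothem with z.
    dx = points[:, 0] - x0
    dy = points[:, 1] - y0
    z = points[:, 2]
    theta = np.arctan2(dy, dx) - kappa
    r = np.hypot(dx, dy)
    return r - octagon_radius(theta, a0 - taper * z)
```

Minimizing these residuals over the parameters (e.g. with a nonlinear least-squares solver seeded from a circular-cone fit, as the paper suggests) recovers the octagonal geometry; the circular cone is the special case where `octagon_radius` is replaced by a constant radius per height.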


2020 ◽  
Vol 12 (18) ◽  
pp. 2923
Author(s):  
Tengfei Zhou ◽  
Xiaojun Cheng ◽  
Peng Lin ◽  
Zhenlun Wu ◽  
Ensheng Liu

Due to environmental and human factors, and because of the instrument itself, there are many uncertainties in point clouds, which directly affect the data quality and the accuracy of subsequent processing such as point cloud segmentation and 3D modeling. To address this problem, this paper takes the stochastic information of the point cloud coordinates into account and, on the basis of the scanner observation principle within the Gauss–Helmert model, develops a novel general point-based self-calibration method for terrestrial laser scanners that incorporates both five additional parameters and six exterior orientation parameters. For cases where the actual instrument accuracy differs from the nominal one, a variance component estimation algorithm is implemented to reweight the outliers once the residual errors of the observations have been obtained. Since the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to solve for the additional parameters and exterior orientation parameters. We conducted experiments using simulated and real data and compared the results with two existing methods. The experimental results showed that the proposed method improved the point accuracy from 10⁻⁴ to 10⁻⁸ (a priori known) and 10⁻⁷ (a priori unknown), and reduced the correlations among the parameters by approximately 60% in volume. However, some correlations increased instead, which is a limitation of the general method.
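The Gauss–Newton iteration used to solve the nonlinear calibration model follows a standard pattern: linearize the residuals about the current estimate and solve the resulting least-squares step. A generic sketch with a numerical Jacobian (not the paper's scanner-specific functional model) looks like this:

```python
import numpy as np

def gauss_newton(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    # residual: callable mapping a parameter vector to an (m,) residual
    # vector. Iterates x <- x + dx where dx solves the linearised
    # least-squares problem min ||J dx + r||.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):             # forward-difference Jacobian
            step = np.zeros_like(x)
            step[j] = eps
            J[:, j] = (residual(x + step) - r) / eps
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

In the paper's setting, the parameter vector would gather the five additional parameters and six exterior orientation parameters, and the reweighting from variance component estimation would enter as a weight matrix in the normal equations; both are omitted here for brevity.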

