SUAVE: Swarm underwater autonomous vehicle localization

Author(s):  
Jun Liu ◽  
Zhaohui Wang ◽  
Zheng Peng ◽  
Jun-Hong Cui ◽  
Lance Fiondella


2019 ◽
Vol 8 (6) ◽  
pp. 288 ◽  
Author(s):  
Kelvin Wong ◽  
Ehsan Javanmardi ◽  
Mahdi Javanmardi ◽  
Shunsuke Kamijo

Accurately and precisely knowing the location of the vehicle is a critical requirement for safe and successful autonomous driving. Recent studies suggest that the error of map-based localization methods is tightly coupled with the surrounding environment. Given this relationship, it should be possible to estimate localization error by quantifying the representation and layout of real-world phenomena. To date, existing work on estimating localization error has been limited to self-collected 3D point cloud maps. This paper investigates the use of pre-existing 2D geographic information datasets as a proxy for estimating autonomous vehicle localization error. Seven map evaluation factors were defined for 2D geographic information in vector format, and random forest regression was used to estimate localization error along five experiment paths in Shinjuku, Tokyo. With the best model, 69.8% of predictions fell within 2.5 cm of the true localization error and 87.4% within 5 cm.
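
As a rough illustration of the modeling setup described above, the sketch below trains a random forest regressor to predict localization error from seven per-segment map factors. It is written in Python with scikit-learn, and the factor values and error labels are synthetic placeholders, not the paper's dataset.

    # Sketch: random forest regression of localization error from seven
    # 2D map evaluation factors. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((1000, 7))        # one row per road segment, seven factors
    y = rng.random(1000) * 0.10      # localization error labels in meters

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print(f"within 2.5 cm: {np.mean(np.abs(pred - y_test) <= 0.025):.1%}")
    print(f"within 5 cm:   {np.mean(np.abs(pred - y_test) <= 0.050):.1%}")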


2017 ◽  
Vol 36 (3) ◽  
pp. 292-319 ◽  
Author(s):  
Ryan W Wolcott ◽  
Ryan M Eustice

This paper reports on a fast multiresolution scan matcher for local vehicle localization of self-driving cars. State-of-the-art approaches to vehicle localization rely on observing road surface reflectivity with a 3D light detection and ranging (LIDAR) scanner to achieve centimeter-level accuracy. However, these approaches can fail under adverse weather conditions that obscure the view of the road paint (e.g. puddles and snowdrifts), on roads with poor surface texture, or as road appearance degrades over time. We present a generic probabilistic method for localizing an autonomous vehicle equipped with a 3D LIDAR scanner. The proposed algorithm models the world as a mixture of several Gaussians, characterizing the z-height and reflectivity distribution of the environment, which we rasterize to facilitate fast and exact multiresolution inference. Results are shown on a collection of datasets totaling over 500 km of road data covering highway, rural, residential, and urban roadways, in which we demonstrate that our method is robust to heavy snowfall and roadway repavement.
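
The Gaussian map representation lends itself to a compact sketch. The Python code below (an assumed illustration, not taken from the paper) stores a per-cell mean and variance of point z-height and registers a scan by maximizing Gaussian log-likelihood over planar offsets; it is a single-resolution stand-in for the paper's rasterized multiresolution inference, with cell size and search radius chosen arbitrarily.

    # Sketch: grid of Gaussians over z-height, scored by log-likelihood.
    # A simplified illustration, not the paper's multiresolution matcher.
    import numpy as np

    CELL = 0.2  # grid resolution in meters (assumed value)

    def build_grid(points):
        """points: (N, 3) array of map points; returns per-cell z mean/variance."""
        cells = {}
        for x, y, z in points:
            cells.setdefault((int(x // CELL), int(y // CELL)), []).append(z)
        return {k: (np.mean(v), np.var(v) + 1e-3) for k, v in cells.items()}

    def score(grid, scan, dx, dy):
        """Gaussian log-likelihood of scan points shifted by (dx, dy)."""
        ll = 0.0
        for x, y, z in scan:
            key = (int((x + dx) // CELL), int((y + dy) // CELL))
            if key in grid:
                mu, var = grid[key]
                ll -= 0.5 * ((z - mu) ** 2 / var + np.log(2 * np.pi * var))
        return ll

    def match(grid, scan, search=1.0):
        """Exhaustive offset search; the paper replaces this with fast
        coarse-to-fine evaluation over rasterized multiresolution grids."""
        offsets = np.arange(-search, search + CELL, CELL)
        return max(((score(grid, scan, dx, dy), dx, dy)
                    for dx in offsets for dy in offsets))[1:]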


Author(s):  
Syed Zeeshan Ahmed ◽  
Vincensius Billy Saputra ◽  
Saurab Verma ◽  
Kun Zhang ◽  
Albertus Hendrawan Adiwahono


Sensors ◽
2015 ◽  
Vol 15 (8) ◽  
pp. 20779-20798 ◽  
Author(s):  
Byung-Hyun Lee ◽  
Jong-Hwa Song ◽  
Jun-Hyuck Im ◽  
Sung-Hyuck Im ◽  
Moon-Beom Heo ◽  
...  


2021 ◽
Vol 13 (3) ◽  
pp. 506
Author(s):  
Xiaohu Lin ◽  
Fuhong Wang ◽  
Bisheng Yang ◽  
Wanwei Zhang

Accurate vehicle ego-localization is key for autonomous vehicles to complete high-level navigation tasks. State-of-the-art localization methods adopt visual and light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) to estimate the position of the vehicle. However, both may suffer from error accumulation during long-term operation without loop optimization or prior constraints. In practice, the vehicle cannot always return to a previously visited location, so errors accumulate in Global Navigation Satellite System (GNSS)-challenged environments. To solve this problem, we propose a novel localization method constrained by a prior dense visual point cloud map generated with a stereo camera. First, the semi-global block matching (SGBM) algorithm is adopted to estimate the visual point cloud of each frame, and stereo visual odometry provides the initial position of the current visual point cloud. Second, multiple filtering steps and adaptive prior map segmentation are performed on the prior dense visual point cloud map for fast matching and localization. Then, the current visual point cloud is matched with the candidate sub-map by the normal distributions transform (NDT). Finally, the matching result is used to update the pose prediction based on the last frame for accurate localization. Comprehensive experiments validate the proposed method, showing that the root mean square errors (RMSEs) of translation and rotation are less than 5.59 m and 0.08°, respectively.
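
The first stage of this pipeline, SGBM disparity estimation and reprojection to a per-frame point cloud, can be sketched with OpenCV in Python as follows. The image files, matcher parameters, and reprojection matrix Q are placeholders, not the paper's calibration.

    # Sketch: per-frame visual point cloud via OpenCV's SGBM matcher.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,         # smoothness penalties, OpenCV's recommended form
        P2=32 * 5 * 5,
    )
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

    # Q is the 4x4 reprojection matrix from stereo rectification
    # (cv2.stereoRectify); an identity placeholder stands in here.
    Q = np.eye(4, dtype=np.float32)
    points = cv2.reprojectImageTo3D(disparity, Q)  # (H, W, 3)
    cloud = points[disparity > 0]                  # keep matched pixels only

Each per-frame cloud would then be matched against the segmented prior map, for example by NDT, and the result used to correct the odometry-based pose prediction.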

