3D LiDAR-based Localization of Autonomous Mobile Robot based on 3D Point Cloud Generated by Structure from Motion using Aerial photos

Author(s): Han XI, Hisashi DATE
2018, Vol. 1087, pp. 062031
Author(s): Qi Hu, Jianxin Luo, Guyu Hu, Weiwei Duan, Hui Zhou

2010, Vol. 22 (2), pp. 158-166
Author(s): Taro Suzuki, Yoshiharu Amano, Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning with the 3D point cloud data. Two-dimensional (2D) position and orientation are extended to 3D using the 3D point cloud, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct the position error, using a laser measurement model in the 3D point cloud space. Field experiments were conducted to evaluate the accuracy of our proposal, and the results confirmed that it achieves a localization precision of 0.2 m (RMS).
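The particle-filter correction step described above can be sketched as follows. This is not the authors' implementation: it is a minimal 2D toy in which hypothetical landmark positions stand in for the 3D point-cloud map, Gaussian noise on the predicted motion stands in for dead-reckoning error, and noiseless simulated ranges play the role of the laser measurement.

```python
import math
import random

random.seed(0)

# Hypothetical landmark map: a crude 2D stand-in for the paper's
# 3D point-cloud map measured by the Mobile Mapping System.
LANDMARKS = [(2.0, 1.0), (5.0, 4.0), (8.0, 1.5)]

def ranges_to_landmarks(x, y):
    """Simulated laser ranges from pose (x, y) to every landmark."""
    return [math.hypot(x - lx, y - ly) for lx, ly in LANDMARKS]

def likelihood(x, y, z, sigma=0.1):
    """Gaussian measurement likelihood of observed ranges z at (x, y)."""
    w = 1.0
    for r, zi in zip(ranges_to_landmarks(x, y), z):
        w *= math.exp(-((r - zi) ** 2) / (2 * sigma ** 2))
    return w

def particle_filter(n=500, steps=20):
    true_x, true_y = 0.0, 0.0
    # Particles drawn around a rough initial pose estimate.
    particles = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(n)]
    for _ in range(steps):
        true_x += 0.5                       # the robot drives 0.5 m in +x
        z = ranges_to_landmarks(true_x, true_y)
        # Predict: apply the same odometry plus Gaussian motion noise.
        particles = [(x + 0.5 + random.gauss(0, 0.05),
                      y + random.gauss(0, 0.05)) for x, y in particles]
        # Update: weight each particle by the measurement likelihood.
        weights = [likelihood(x, y, z) for x, y in particles]
        total = sum(weights) or 1.0
        # Resample proportionally to weight.
        particles = random.choices(
            particles, weights=[w / total for w in weights], k=n)
    est_x = sum(x for x, _ in particles) / n
    est_y = sum(y for _, y in particles) / n
    return est_x, est_y, true_x, true_y

est_x, est_y, true_x, true_y = particle_filter()
error = math.hypot(est_x - true_x, est_y - true_y)
```

With three landmarks the range measurement fully constrains the 2D position, so the resampled particle cloud stays concentrated around the true pose despite the accumulating motion noise; the real system does the analogous correction in 6D against the MMS point cloud.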


2021, pp. 027836492110067
Author(s): Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, et al.

A holistic semantic scene understanding exploiting all available sensor modalities is a core capability for mastering self-driving in complex everyday traffic. To this end, we present the SemanticKITTI dataset, which provides point-wise semantic annotations for the Velodyne HDL-64E point clouds of the KITTI Odometry Benchmark. Together with the data, we also published three benchmark tasks covering different aspects of semantic scene understanding: (1) semantic segmentation for point-wise classification using single or multiple point clouds as input; (2) semantic scene completion for predictive reasoning on the semantics and occluded regions; and (3) panoptic segmentation combining point-wise classification with assigning individual instance identities to separate objects of the same class. In this article, we provide details on our dataset, which offers an unprecedented number of fully annotated point cloud sequences, more information on our labeling process for efficiently annotating such a vast amount of point clouds, and lessons learned in this process. The dataset and resources are available at http://www.semantic-kitti.org.
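For readers who want to work with the point-wise annotations, the snippet below parses a SemanticKITTI `.label` file. The layout assumed here (one little-endian uint32 per point, with the lower 16 bits holding the semantic class id and the upper 16 bits the instance id) follows the format documented by the semantic-kitti-api tools; the demo file written here is synthetic.

```python
import struct
import tempfile

def read_labels(path):
    """Parse a SemanticKITTI .label file.

    Assumed layout (per the semantic-kitti-api documentation): one
    little-endian uint32 per point; lower 16 bits = semantic class id,
    upper 16 bits = instance id. Returns a list of (class_id, instance_id).
    """
    with open(path, "rb") as f:
        raw = f.read()
    values = struct.unpack("<%dI" % (len(raw) // 4), raw)
    return [(v & 0xFFFF, v >> 16) for v in values]

# Demo with a synthetic two-point label file:
# point 0: class 40 ("road" in the SemanticKITTI mapping), no instance;
# point 1: class 10 ("car"), instance id 3.
with tempfile.NamedTemporaryFile(suffix=".label", delete=False) as f:
    f.write(struct.pack("<2I", 40, 10 | (3 << 16)))
    demo_path = f.name

labels = read_labels(demo_path)
```

Splitting class and instance this way is what makes the panoptic segmentation task (3) possible: two cars share class id 10 but carry distinct instance ids.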


2015, Vol. 27 (4), pp. 356-364
Author(s): Masatoshi Nomatsu, Youhei Suganuma, Yosuke Yui, Yutaka Uchimura

[Figure: Developed autonomous mobile robot]

In describing real-world self-localization and target-search methods, this paper discusses a mobile robot developed to verify a method proposed for Tsukuba Challenge 2014. The Tsukuba Challenge course includes promenades and parks with ordinary pedestrians and bicyclists, requiring the robot to move toward a goal while avoiding the moving objects around it. Common self-localization methods often rely on 2D laser range finders (LRFs), but such LRFs do not always capture enough data for localization if, for example, the scanned plane has few landmarks. To solve this problem, we used a three-dimensional (3D) LRF for self-localization. The 3D LRF captures more data than the 2D type, resulting in more robust localization. Robots that provide practical services in real life must, among other functions, recognize a target and serve it autonomously. To this end, this paper describes a method for searching for a target using clustered point clouds from the 3D LRF together with image processing of color images captured by cameras. In Tsukuba Challenge 2014, the robot we developed, which implements the proposed methods, completed the course and found the targets, verifying the effectiveness of our proposals.
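The clustering stage of the target search can be illustrated with a naive Euclidean clustering pass over the 3D points. This is a hypothetical O(n²) sketch rather than the authors' code: `tol` and `min_size` are assumed parameters, and the subsequent color check against the camera images is omitted.

```python
from collections import deque

def euclidean_clusters(points, tol=0.3, min_size=2):
    """Group 3D points into clusters: points closer than `tol` (metres)
    are considered connected, and each connected component becomes one
    candidate object for the target search."""
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2

    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:  # breadth-first growth of the current component
            i = queue.popleft()
            neighbors = [j for j in unvisited if close(points[i], points[j])]
            for j in neighbors:
                unvisited.discard(j)
            queue.extend(neighbors)
            cluster.extend(neighbors)
        if len(cluster) >= min_size:  # drop isolated noise points
            clusters.append([points[i] for i in cluster])
    return clusters

# Two well-separated groups of scan points (synthetic example).
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0),
       (5.0, 5.0, 0.0), (5.1, 5.0, 0.0)]
clusters = euclidean_clusters(pts, tol=0.3)
```

In the full pipeline each surviving cluster would then be projected into the camera image and its color checked to decide whether it is the target.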

