Computationally Efficient Mapping for a Mobile Robot with a Downsampling Method for the Iterative Closest Point

2018 · Vol 30 (1) · pp. 65-75
Author(s): Shodai Deguchi, Genya Ishigami

This paper proposes a computationally efficient method for generating a three-dimensional environment map and estimating the robot's position. The method assumes that a laser range finder mounted on a mobile robot provides sets of point cloud data of the environment around the robot. It then extracts characteristic feature points from the point cloud using an intensity image taken by the laser range finder. Feature points extracted from two or more different sets of point cloud data are subsequently aligned by the iterative closest point (ICP) algorithm, which matches points between the sets, building up a large map of the environment while estimating the robot's location within it. The method maintains mapping accuracy while reducing computational cost by downsampling the points used in the ICP step. An experimental demonstration on a mobile robot test bed confirms the usefulness of the proposed method.
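The paper's specific feature-extraction and downsampling scheme is not reproduced here, but the cost argument is easy to see in code: ICP's nearest-neighbor search dominates the runtime and scales with the product of the two cloud sizes, so thinning both clouds first pays off quadratically. Below is a minimal sketch, not the authors' implementation, of voxel-grid downsampling followed by point-to-point ICP using only numpy; all function names and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): voxel-grid downsampling
# followed by basic point-to-point ICP, using only numpy.
import numpy as np

def voxel_downsample(points, voxel_size=0.1):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

def icp(source, target, iters=20):
    """Point-to-point ICP: rotation R and translation t aligning source to target."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # Nearest-neighbor correspondences (brute force, for clarity).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(axis=1)]
        # Closed-form rigid alignment via SVD (Kabsch/Umeyama).
        mu_s, mu_t = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Toy usage: two scans offset by a small translation; downsampling both
# before ICP is what cuts the dominant nearest-neighbor cost.
scan_a = np.random.rand(5000, 3)
scan_b = scan_a + np.array([0.05, 0.0, 0.0])
R, t = icp(voxel_downsample(scan_a, 0.1), voxel_downsample(scan_b, 0.1))
```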


2006 · Vol 24 (5) · pp. 605-613
Author(s): Shinichi Okusako, Shigeyuki Sakane

2021 · Vol 65 (1) · pp. 10501-1-10501-9
Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which decreases the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data to image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Next, corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equations based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. Experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its effectiveness.
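The central idea, reducing 2D-3D registration to 2D-2D matching between the LiDAR intensity image and the optical image, can be sketched with off-the-shelf tools. The following is a minimal sketch of the matching stage only, assuming ORB features and RANSAC outlier rejection in OpenCV (the paper does not name a detector, and the collinearity-equation resection that recovers the exterior orientation parameters from the matched pairs is omitted); file names and thresholds are placeholders.

```python
# Sketch of the 2D-2D matching stage only; detector choice, file names, and
# thresholds are assumptions, not the authors' stated pipeline.
import cv2
import numpy as np

intensity_img = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)  # rendered from point cloud
optical_img = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)          # UAV camera image

orb = cv2.ORB_create(nfeatures=2000)
kp_i, des_i = orb.detectAndCompute(intensity_img, None)
kp_o, des_o = orb.detectAndCompute(optical_img, None)

# Cross-checked Hamming matching for binary ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_i, des_o), key=lambda m: m.distance)

pts_i = np.float32([kp_i[m.queryIdx].pt for m in matches])
pts_o = np.float32([kp_o[m.trainIdx].pt for m in matches])

# RANSAC weeds out false matches; the surviving inliers are the 2D-2D
# correspondences that would feed the collinearity-equation solve (each
# intensity-image pixel maps back to a 3D LiDAR point).
H, inlier_mask = cv2.findHomography(pts_i, pts_o, cv2.RANSAC, 3.0)
print(f"{int(inlier_mask.sum())} inlier correspondences of {len(matches)}")
```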


2021 · Vol 33 (1) · pp. 33-43
Author(s): Kazuhiro Funato, Ryosuke Tasaki, Hiroto Sakurai, Kazuhiko Terashima, ...

The authors have been developing a mobile robot that assists doctors in hospitals by managing medical tools and patient electronic medical records. The robot tracks behind a mobile medical worker while maintaining a constant distance from the worker. However, it is difficult to detect the target in the region invisible to the sensor, a problem known as occlusion. In this study, we propose a sensor fusion method that estimates the position of the robot's tracking target indirectly with an inertial measurement unit (IMU), in addition to direct measurement by a laser range finder (LRF), and develop a human tracking system that allows the mobile robot to cope with occlusion. We verify the validity of the proposed method through detailed experiments in which the robot tracks a specified person.
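The abstract does not spell out the fusion scheme, but a common way to combine an LRF's direct position fixes with indirect IMU-based estimation is a Kalman filter: the IMU drives the prediction every cycle, the LRF corrects it when the target is visible, and the filter coasts on prediction alone during occlusion. Below is a minimal constant-velocity sketch in numpy under that assumption; the state layout, rates, and noise values are all invented for illustration.

```python
# Assumed fusion scheme (not confirmed by the paper): a linear Kalman filter
# over 2D position/velocity. IMU acceleration is the control input; LRF
# position fixes are measurements, skipped while the target is occluded.
import numpy as np

dt = 0.05  # 20 Hz cycle time (assumed)
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])             # state: [x, y, vx, vy]
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])  # maps IMU acceleration
H = np.hstack([np.eye(2), np.zeros((2, 2))])              # LRF measures position
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 0.05 * np.eye(2)   # LRF measurement noise (assumed)

x = np.zeros(4)
P = np.eye(4)

def step(x, P, imu_accel, lrf_pos=None):
    """One predict/update cycle; pass lrf_pos=None when the target is occluded."""
    x = F @ x + B @ imu_accel                # IMU-driven prediction
    P = F @ P @ F.T + Q
    if lrf_pos is not None:                  # target visible to the LRF
        y = lrf_pos - H @ x                  # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P

# A visible frame, then an occluded frame where only the IMU carries the estimate.
x, P = step(x, P, imu_accel=np.array([0.1, 0.0]), lrf_pos=np.array([0.2, 0.0]))
x, P = step(x, P, imu_accel=np.array([0.1, 0.0]), lrf_pos=None)
```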

