Smartphone Zombie Detection From LiDAR Point Cloud for Mobile Robot Safety

2020 ◽  
Vol 5 (2) ◽  
pp. 2256-2263 ◽  
Author(s):  
Jiaxu Wu ◽  
Yusuke Tamura ◽  
Yusheng Wang ◽  
Hanwool Woo ◽  
Alessandro Moro ◽  
...  
2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning with the 3D point cloud data: the two-dimensional (2D) position and orientation are extended to 3D using the point cloud, assuming that the robot remains in continuous contact with the road surface. Our approach applies a particle filter whose laser measurement model corrects the position error in 3D point cloud space. Field experiments conducted to evaluate the approach confirmed a localization precision of 0.2 m (RMS).
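The predict-weight-resample cycle of such a particle filter can be sketched roughly as follows. This is an illustration only, not the authors' implementation: the landmark "map", noise levels, and Gaussian range-likelihood model are all invented for the sketch, and a real system would weight particles against a dense 3D point cloud rather than three landmarks.

```python
# Hypothetical particle-filter localization sketch (toy landmark map, not the
# paper's 3D point-cloud measurement model).
import math, random

random.seed(0)

MAP_POINTS = [(5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]  # toy landmark "map"

def measure(pose):
    """Range to each landmark from a pose (stand-in for a laser scan)."""
    x, y, _ = pose
    return [math.hypot(px - x, py - y) for px, py in MAP_POINTS]

def pf_step(particles, odom, scan, noise=0.1, sigma=0.5):
    """One predict-weight-resample cycle over (x, y, theta) particles."""
    dx, dy, dth = odom
    # Predict: apply dead-reckoning increment plus process noise.
    moved = [(x + dx + random.gauss(0, noise),
              y + dy + random.gauss(0, noise),
              th + dth) for x, y, th in particles]
    # Weight: Gaussian likelihood of the observed scan given each particle.
    weights = []
    for p in moved:
        err = sum((z - zp) ** 2 for z, zp in zip(scan, measure(p)))
        weights.append(math.exp(-err / (2 * sigma ** 2)))
    # Resample with replacement proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

true_pose = (1.0, 1.0, 0.0)
particles = [(random.uniform(0, 2), random.uniform(0, 2), 0.0)
             for _ in range(200)]
particles = pf_step(particles, odom=(0.0, 0.0, 0.0), scan=measure(true_pose))
est_x = sum(p[0] for p in particles) / len(particles)
est_y = sum(p[1] for p in particles) / len(particles)
```

After one cycle the particle mean concentrates near the true pose; repeating the cycle as the robot moves keeps the estimate locked to the map.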


2015 ◽  
Vol 51 (10) ◽  
pp. 752-754 ◽  
Author(s):  
W.S. Yoo ◽  
J.B. Park ◽  
B.H. Lee

10.5772/62342 ◽  
2016 ◽  
Vol 13 (2) ◽  
pp. 52 ◽  
Author(s):  
Yanzi Miao ◽  
Yang Liu ◽  
Hongbin Ma ◽  
Huijie Jin

2015 ◽  
Vol 27 (4) ◽  
pp. 356-364 ◽  
Author(s):  
Masatoshi Nomatsu ◽  
Youhei Suganuma ◽  
Yosuke Yui ◽  
Yutaka Uchimura

[Figure: Developed autonomous mobile robot] In describing real-world self-localization and target-search methods, this paper discusses a mobile robot developed to verify a method proposed in Tsukuba Challenge 2014. The Tsukuba Challenge course includes promenades and parks with ordinary pedestrians and cyclists, requiring the robot to move toward a goal while avoiding the moving objects around it. Common self-localization methods often rely on two-dimensional (2D) laser range finders (LRFs), but a 2D LRF does not always capture enough data for localization, for example when the scanned plane contains few landmarks. To solve this problem, we used a three-dimensional (3D) LRF for self-localization. The 3D LRF captures more data than the 2D type, resulting in more robust localization. Robots that provide practical services in real life must, among other functions, recognize a target and serve it autonomously. To that end, this paper describes a method for searching for a target using clustered point clouds from the 3D LRF together with image processing of color images captured by cameras. In Tsukuba Challenge 2014, the robot we developed using the proposed methods completed the course and found the targets, verifying their effectiveness.
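The clustering stage of such a target search can be sketched as a Euclidean (distance-threshold) grouping pass over the point set. The following 2D flood-fill version is illustrative only, standing in for clustering a 3D LRF scan; the tolerance and sample points are made up, and the paper's actual pipeline additionally fuses camera-image processing.

```python
# Illustrative Euclidean clustering sketch (2D stand-in for 3D LRF data).
import math

def euclidean_clusters(points, tol=0.5):
    """Group points whose chained neighbor distance is below `tol`."""
    clusters, unvisited = [], set(range(len(points)))
    while unvisited:
        seed = unvisited.pop()
        stack, cluster = [seed], [seed]
        while stack:
            i = stack.pop()
            # Collect still-unvisited points within tolerance of point i.
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= tol]
            for j in near:
                unvisited.discard(j)
            stack.extend(near)
            cluster.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters

pts = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0),   # candidate object A
       (5.0, 5.0), (5.1, 5.2)]               # candidate object B
clusters = euclidean_clusters(pts)
```

Each resulting cluster is a target candidate; a color check on the corresponding camera pixels would then confirm or reject it.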


2018 ◽  
Vol 30 (1) ◽  
pp. 65-75 ◽  
Author(s):  
Shodai Deguchi ◽  
Genya Ishigami

This paper proposes a computationally efficient method for generating a three-dimensional environment map and estimating robot position. The method assumes that a laser range finder mounted on a mobile robot provides sets of point cloud data of the environment around the robot. It then extracts feature points from the point cloud using an intensity image taken by the laser range finder. Feature points extracted from two or more different sets of point cloud data are subsequently matched by the iterative closest point (ICP) algorithm, building a large map of the environment while estimating the robot's location in it. The method maintains mapping accuracy while reducing computational cost by downsampling the points used for ICP. An experimental demonstration on a mobile robot test bed confirms the usefulness of the proposed method.
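The downsampled-ICP idea can be sketched minimally as follows. For brevity this toy version estimates translation only, whereas a full ICP (and the paper's method) also estimates rotation; the point clouds, step size, and iteration count are invented for the sketch.

```python
# Translation-only ICP sketch with downsampling (illustration, not the
# paper's implementation; a complete ICP also solves for rotation).
import math

def nearest(p, cloud):
    """Closest point in `cloud` to `p` (brute-force nearest neighbor)."""
    return min(cloud, key=lambda q: math.dist(p, q))

def icp_translation(src, dst, iters=10, step=2):
    """Align `src` to `dst` by a translation; `step` downsamples src."""
    tx, ty = 0.0, 0.0
    sample = src[::step]                      # downsample to cut cost
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in sample]
        pairs = [(p, nearest(p, dst)) for p in moved]
        # Update the translation by the mean correspondence offset.
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
    return tx, ty

dst = [(float(i), float(i % 3)) for i in range(10)]   # reference scan
src = [(x - 0.4, y + 0.3) for x, y in dst]            # shifted new scan
tx, ty = icp_translation(src, dst)
```

Because only every `step`-th source point enters the correspondence search, the per-iteration cost drops proportionally, which is the trade-off the paper exploits.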

