Development of an Autonomous Mobile Robot with Self-Localization and Searching Target in a Real Environment

2015 ◽  
Vol 27 (4) ◽  
pp. 356-364 ◽  
Author(s):  
Masatoshi Nomatsu ◽  
Youhei Suganuma ◽  
Yosuke Yui ◽  
Yutaka Uchimura

[Figure: Developed autonomous mobile robot]

In describing real-world self-localization and target-search methods, this paper discusses a mobile robot developed to verify a method proposed in Tsukuba Challenge 2014. The Tsukuba Challenge course includes promenades and parks containing ordinary pedestrians and bicyclists, requiring the robot to move toward a goal while avoiding the moving objects around it. Common self-localization methods often rely on 2D laser range finders (LRFs), but such LRFs do not always capture enough data for localization if, for example, the scanned plane has few landmarks. To solve this problem, we used a three-dimensional (3D) LRF for self-localization. The 3D LRF captures more data than the 2D type, resulting in more robust localization. Robots that provide practical services in real life must, among other functions, recognize a target and serve it autonomously. To enable robots to do so, this paper describes a method for searching for a target by clustering the point cloud from the 3D LRF together with image processing of color images captured by cameras. In Tsukuba Challenge 2014, the robot we developed, implementing the proposed methods, completed the course and found the targets, verifying the effectiveness of our proposals.
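The target-search step described above (clustering the 3D LRF point cloud before checking candidate clusters against camera images) could be sketched as follows. This is a naive Euclidean region-growing pass with illustrative radius and size thresholds; the paper does not specify its actual clustering algorithm or parameters.

```python
import numpy as np

def cluster_points(points, radius=0.3, min_size=10):
    """Group 3D points into clusters by region growing: any two points
    within `radius` of each other (directly or through a chain of
    neighbors) end up in the same cluster. Clusters smaller than
    `min_size` are discarded as noise."""
    clusters = []
    unvisited = set(range(len(points)))
    while unvisited:
        seed = unvisited.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            # Collect still-unvisited points near point i.
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters
```

Each surviving cluster would then be projected into the camera image and checked for the target's color, per the method outlined in the abstract.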

2020 ◽  
Vol 12 (12) ◽  
pp. 1908
Author(s):  
Tzu-Yi Chuang ◽  
Jen-Yu Han ◽  
Deng-Jie Jhan ◽  
Ming-Der Yang

Moving object detection and tracking from image sequences has been extensively studied in a variety of fields. Nevertheless, observing geometric attributes and identifying the detected objects for further investigation of moving behavior has drawn less attention. The focus of this study is to determine moving trajectories, object heights, and object recognition using a monocular camera configuration. This paper presents a scheme to conduct moving object recognition with three-dimensional (3D) observation using a faster region-based convolutional neural network (Faster R-CNN) with a stationary and rotating Pan Tilt Zoom (PTZ) camera and close-range photogrammetry. The camera motion effects are first eliminated to detect objects that contain actual movement, and a moving object recognition process is employed to recognize the object classes and to facilitate the estimation of their geometric attributes. This information can then further contribute to the investigation of object moving behavior. To evaluate the effectiveness of the proposed scheme quantitatively, an experiment with an indoor synthetic configuration is conducted first; then, outdoor real-life data are used to verify the feasibility based on recall, precision, and the F1 index. The experiments have shown promising results and have verified the effectiveness of the proposed method in both laboratory and real environments. The proposed approach calculates the height and speed estimates of the recognized moving objects, including pedestrians and vehicles, and shows promising results with acceptable errors and application potential through existing PTZ camera images at a very low cost.
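The recall, precision, and F1 index used for evaluation above follow the standard definitions; a minimal sketch is given below. The matching of detections to ground truth that produces the true/false positive and false negative counts is assumed to happen upstream.

```python
def detection_metrics(tp, fp, fn):
    """Standard detection metrics from raw counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```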


2008 ◽  
Vol 20 (2) ◽  
pp. 213-220 ◽  
Author(s):  
Kimitoshi Yamazaki ◽  
Takashi Tsubouchi ◽  
Masahiro Tomono ◽  
...  

In this paper, a modeling method for handling furniture is proposed. Real-life environments are crowded with objects such as drawers and cabinets that, while easily dealt with by people, present mobile robots with problems. If mobile robots could handle such furniture autonomously, they could perform multiple daily tasks, for example, storing a small object in a drawer. However, manually providing robots with the necessary knowledge about the furniture is a perplexing process; ideally, this knowledge should be acquired efficiently and, if possible, autonomously. In our approach, sensor data from a camera and a laser range finder are combined with direct teaching to create a handling model that captures not only how to handle the furniture but also its appearance and 3D shape. Experimental results show the effectiveness of our methods.


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1503 ◽  
Author(s):  
Bin Zhang ◽  
Masahide Kaneko ◽  
Hun-ok Lim

In order to move around automatically, mobile robots usually need to recognize their working environment first. Simultaneous localization and mapping (SLAM), by which a robot can generate a map while moving around, has become an important research field recently. Both two-dimensional (2D) and three-dimensional (3D) mapping methods have been developed greatly, with high accuracy. However, 2D maps cannot reflect the spatial information of the environment, and 3D mapping needs long processing time. Moreover, conventional SLAM methods based on grid maps take a long time to delete moving objects from the map and have difficulty deleting potential moving objects. In this paper, a 2D mapping method integrating 3D information, based on immobile-area occupancy grid maps, is proposed. Objects in 3D space are recognized, and their spatial information (e.g., shapes) and properties (moving objects, or potential moving objects such as people standing still) are projected onto the 2D plane for updating the 2D map. By using the immobile-area occupancy grid map method, recognized still objects are reflected in the map quickly by updating the immobile-area occupancy probability with a high coefficient. Meanwhile, recognized moving objects and potential moving objects are not used for updating the map. Unknown objects are reflected in the 2D map with a lower immobile-area occupancy probability so that they can be deleted quickly once they are recognized as moving objects or start to move. The effectiveness of our method is proven by mapping experiments in a dynamic indoor environment using a mobile robot.
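The per-cell update policy described above could be sketched as a simple exponential blend toward the observation, with the blend coefficient chosen by object category. The coefficient values and category names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative coefficients: high for recognized still objects (map
# reflects them quickly), low for unknown objects (easy to delete
# later if they turn out to move). Not the paper's actual values.
COEFF = {"still": 0.9, "unknown": 0.2}

def update_immobile_prob(p_prev, occupied, category):
    """Update one cell's immobile-area occupancy probability.
    Moving and potential moving objects do not update the map;
    other categories blend the previous probability toward the
    observation (1.0 = occupied, 0.0 = free) with a per-category
    coefficient."""
    if category in ("moving", "potential_moving"):
        return p_prev
    a = COEFF[category]
    z = 1.0 if occupied else 0.0
    return (1 - a) * p_prev + a * z
```

With this policy, a still object observed in a cell with prior 0.5 jumps close to 1.0 in one update, while an unknown object moves the probability only slightly, so a few "free" observations suffice to erase it if it later moves.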


2010 ◽  
Vol 43 (16) ◽  
pp. 563-568
Author(s):  
Filippo Bonaccorso ◽  
Francesco Catania ◽  
Giovanni Muscato

1991 ◽  
Vol 26 (1-3) ◽  
pp. 453-458 ◽  
Author(s):  
C. Fröhlich ◽  
F. Freyberger ◽  
G. Schmidt
