A one decade survey of autonomous mobile robot systems

Author(s):  
Noor Abdul Khaleq Zghair ◽  
Ahmed S. Al-Araji

Recently, autonomous mobile robots have gained popularity in the modern world due to the relevance of their technology and their applications in real-world situations. The global market for mobile robots will grow significantly over the next 20 years. Autonomous mobile robots are found in many fields, including institutions, industry, business, hospitals, agriculture, and private households, where they improve day-to-day activities and services. Technological development has raised the requirements placed on mobile robots because of the services and tasks they provide, such as rescue and research operations, surveillance, and carrying heavy objects. Researchers have conducted many studies on the importance of robots, their uses, and their problems. This article aims to analyze the control systems of mobile robots and the way robots move in the real world to achieve their goals. It should be noted that there are several technological directions in the mobile robot industry that must be observed and integrated so that the robot functions properly: navigation systems, localization systems, detection systems (sensors), and motion, kinematics, and dynamics systems. All of these systems should be united through a control unit so that the mission or work of the mobile robot is conducted reliably.

2014 ◽  
Vol 26 (2) ◽  
pp. 185-195 ◽  
Author(s):  
Masanobu Saito ◽  
Kentaro Kiuchi ◽  
Shogo Shimizu ◽  
Takayuki Yokota ◽  
...  

This paper describes navigation systems for autonomous mobile robots taking part in the real-world Tsukuba Challenge 2013 robot competition. Tsukuba Challenge 2013 allows any information on the route to be collected beforehand and used on the day of the challenge. At the same time, however, autonomous mobile robots should function appropriately in daily human life even in areas where they have never been before, so the system should not depend on details captured by driving the route in advance. We analyzed traverses in complex urban areas without prior environmental information using light detection and ranging (LIDAR). We also determined robot status, such as position and orientation, using Gauss maps derived from LIDAR without gyro sensors. Dead reckoning combined wheel odometry with the orientation estimated above. We corrected 2D robot poses by matching against electronic maps from the Web. Because drift inevitably causes errors such as slippage and failure, our robot also traced waypoints derived beforehand from the same electronic map, so localization remains consistent even if we do not drive through an area ahead of time. Trajectory candidates are generated along global planning routes based on these waypoints, and an optimal trajectory is selected. Tsukuba Challenge 2013 required that robots find specified human targets indicated by features released on the Web. To find the targets correctly without driving in Tsukuba beforehand, we searched for point cloud clusters similar to the specified human targets based on predefined features. These point clouds were then projected onto the camera image at the time, and we extracted points of interest such as SURF to apply fast appearance-based mapping (FAB-MAP). This enabled us to find specified targets highly accurately. To demonstrate the feasibility of our system, experiments were conducted on a route at our university and on the Tsukuba Challenge route.
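The dead-reckoning step described in this abstract, combining wheel odometry with an externally estimated orientation, can be sketched roughly as follows. This is a minimal illustration assuming a differential-drive robot; the function name, the signature, and the optional `heading` override (standing in for the LIDAR-derived orientation) are our own, not the authors':

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base, heading=None):
    """Advance a 2D pose (x, y, theta) by differential-drive odometry.

    If an external heading estimate (e.g. orientation derived from
    LIDAR, as in the paper) is available, it overrides the integrated
    theta, limiting gyro-free drift.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0          # distance travelled
    d_theta = (d_right - d_left) / wheel_base    # change in orientation
    theta_mid = theta + d_theta / 2.0            # midpoint heading
    x += d_center * math.cos(theta_mid)
    y += d_center * math.sin(theta_mid)
    theta = heading if heading is not None else theta + d_theta
    return (x, y, theta)

# Straight run: both wheels advance 1.0 m, so the pose moves 1.0 m forward.
pose = dead_reckon((0.0, 0.0, 0.0), 1.0, 1.0, wheel_base=0.5)
```

In the paper's setup the corrected 2D pose would additionally be matched against the electronic map; the sketch covers only the odometry integration.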


2015 ◽  
Vol 27 (4) ◽  
pp. 317-317 ◽  
Author(s):  
Yoshihiro Takita ◽  
Shin’ichi Yuta ◽  
Takashi Tsubouchi ◽  
Koichi Ozaki

The first Tsukuba Challenge started in 2007 as a technological challenge for autonomous mobile robots moving around on city walkways. A task was later added involving the search for certain persons. In these and other ways, the challenge provides a test field for developing positive relationships between mobile robots and human beings. To advance autonomous robotics research, this special issue details and clarifies technological problems and solutions found by participants in the challenge. We sincerely thank the authors and reviewers for this chance to work with them in these important areas.


Author(s):  
Gintautas Narvydas ◽  
Vidas Raudonis ◽  
Rimvydas Simutis

In the control of autonomous mobile robots there exist two types of control, global control and local control, and the requirement to solve global and local tasks arises accordingly. This chapter concentrates on local tasks and shows that robots can learn to cope with some of them within minutes. The main idea of the chapter is to show that, when creating intelligent control systems for autonomous mobile robots, the beginning is most important: we have to transfer as much human knowledge and human expert-operator skill as possible into the intelligent control system. A successful transfer ensures fast and good results. One of the most advanced techniques in robotics is on-line learning by an autonomous mobile robot from experts' demonstrations, and this technique is briefly described in the chapter. Wall following is taken as an example of a local task. The main goal of our experiment is to teach the autonomous mobile robot, within 10 minutes, to follow the wall of a maze as fast and as precisely as possible. This task can also be transformed into circuiting an obstacle on the left or on the right. The main part of the suggested control system is a small feed-forward artificial neural network. In some particular cases (critical situations) "If-Then" rules take over the control, but our goal is to minimize the possibility that these rules start controlling the robot. The aim of the experiment is to implement the proposed technique on a real robot. This technique enables the desired control capabilities to be reached much faster than with evolutionary or genetic algorithms, or than by creating the control system by hand using "If-Then" rules or fuzzy logic. To evaluate the quality of the intelligent control system, we calculate objective function values and the percentage of the robot's work loops in which the "If-Then" rules control the robot.
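The control architecture described here, a small feed-forward network steering the robot with "If-Then" rules taking over only in critical situations, can be sketched as follows. This is a schematic, not the chapter's implementation; the network size, the weights, and the critical-distance threshold are illustrative assumptions:

```python
import math

def forward(weights_h, weights_o, sensors):
    """One-hidden-layer feed-forward net: sensor readings -> steering command."""
    hidden = [math.tanh(sum(w * s for w, s in zip(row, sensors)))
              for row in weights_h]
    return math.tanh(sum(w * h for w, h in zip(weights_o, hidden)))

def control(sensors, weights_h, weights_o, critical_dist=0.1):
    """Let the trained net steer; an 'If-Then' rule takes over only when
    an obstacle is dangerously close (the case the chapter minimizes)."""
    if min(sensors) < critical_dist:   # critical situation
        return 1.0                     # rule: hard turn away
    return forward(weights_h, weights_o, sensors)
```

In the learning-from-demonstration setting, the weights would be fitted on-line to the expert operator's recorded commands; counting how often the rule branch fires gives the percentage of rule-controlled work loops mentioned above.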


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Caihong Li ◽  
Yong Song ◽  
Fengying Wang ◽  
Zhenying Liang ◽  
Baoyan Zhu

This paper proposes a fusion iterations strategy based on the Standard map to generate a chaotic path planner for a mobile robot on surveillance missions. The distances between adjacent iteration points of the chaotic trajectories produced by the Standard map are too large for the robot to track. A fusion iterations strategy, which combines large-region iterations with small-grid-region iterations, is therefore designed to resolve the problem. The small-region iterations perform the iterations of the Standard map within each of the divided small grids; dividing the whole surveillance workspace into small grids reduces the adjacent distances. The large-region iterations combine all the small-grid-region iterations into a whole, switch automatically among the small grids, and maintain the chaotic characteristics of the robot's motion to guarantee the surveillance mission. Compared to simply applying the Standard map over the whole workspace, the proposed strategy decreases the adjacent distances according to the size of the small grids and makes the trajectory convenient for the robot to track.
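The small-grid-region iteration idea can be sketched with the classical Chirikov Standard map, p' = p + K sin(theta) (mod 2*pi), theta' = theta + p' (mod 2*pi), whose iterates are scaled into one small cell so that successive waypoints stay close together. The kicking strength K, the cell parameterization, and the function names are illustrative assumptions, not the paper's values:

```python
import math

K = 6.0  # kicking strength; large K gives strongly chaotic trajectories

def standard_map(theta, p, k=K):
    """One iteration of the Chirikov Standard map (both variables mod 2*pi)."""
    p = (p + k * math.sin(theta)) % (2 * math.pi)
    theta = (theta + p) % (2 * math.pi)
    return theta, p

def waypoints_in_cell(theta, p, cell_origin, cell_size, n):
    """Iterate the map inside one small grid cell, scaling the map's
    2*pi x 2*pi phase square into the cell, so adjacent waypoints are
    at most a cell apart."""
    pts = []
    for _ in range(n):
        theta, p = standard_map(theta, p)
        x = cell_origin[0] + cell_size * theta / (2 * math.pi)
        y = cell_origin[1] + cell_size * p / (2 * math.pi)
        pts.append((x, y))
    return pts
```

The large-region iterations of the paper would then switch among cells while each cell's waypoints are generated this way.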


2016 ◽  
Vol 28 (4) ◽  
pp. 461-469 ◽  
Author(s):  
Tomoyoshi Eda ◽  
Tadahiro Hasegawa ◽  
Shingo Nakamura ◽  
Shin’ichi Yuta

[Figure: Autonomous mobile robots entered in the Tsukuba Challenge 2015] This paper describes a self-localization method for autonomous mobile robots entered in the Tsukuba Challenge 2015. One of the important issues in autonomous mobile robots is accurate self-localization. An occupancy grid map created manually before self-localization has typically been utilized to estimate the self-localization of autonomous mobile robots. However, it is difficult to create an accurate map of a complex course. We created an occupancy grid map by combining local grid maps built using a laser range finder (LRF) with wheel odometry. In addition, the self-localization of the mobile robot was calculated by integrating the self-localization estimated by map matching with wheel odometry information. The experimental results in the final run of the Tsukuba Challenge 2015 show that the mobile robot traveled autonomously up to the 600 m point of the course, where the occupancy grid map ended.
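The occupancy-grid building step used by work like this is conventionally done with log-odds updates per LRF beam: cells the beam passes through become more likely free, and the cell where it hits becomes more likely occupied. A minimal sketch, with illustrative log-odds increments rather than the authors' tuning:

```python
import math

L_OCC, L_FREE = 0.85, -0.4  # log-odds increments (illustrative tuning)

def update_ray(grid, traversed_cells, hit_cell):
    """Integrate one LRF beam into the map. `grid` maps a cell index to
    its log-odds of occupancy (0.0 = unknown)."""
    for c in traversed_cells:
        grid[c] = grid.get(c, 0.0) + L_FREE   # beam passed through: freer
    grid[hit_cell] = grid.get(hit_cell, 0.0) + L_OCC  # beam endpoint: occupied

def occupancy(grid, cell):
    """Convert a cell's log-odds back to an occupancy probability."""
    l = grid.get(cell, 0.0)
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

The cells traversed by a beam would come from a ray-casting routine (e.g. Bresenham) between the robot pose and the range reading; localization then matches new scans against the accumulated grid.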


Author(s):  
KS Nagla ◽  
Moin Uddin ◽  
Dilbag Singh

Sensor-based perception of the environment is an emerging area of mobile robot research in which sensors play a pivotal role. For autonomous mobile robots, the fundamental requirement is the conversion of range information into a high-level internal representation. Internal representation in the form of an occupancy grid is commonly used in autonomous mobile robots due to its various advantages. Several sensors, such as vision sensors, laser range finders, and ultrasonic and infrared sensors, play roles in mapping. However, sensor information failure, sensor inaccuracies, noise, and slow response are the major causes of error in the mapping. To improve the reliability of mobile robot mapping, multisensor data fusion is considered an optimal solution. This paper presents a novel sensor fusion framework in which a dedicated filter (DF) is proposed to increase the robustness of the occupancy grid for indoor environments. The technique has been experimentally verified for different indoor test environments. The proposed configuration shows improvement in the occupancy grid with the implementation of dedicated filters.
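One common shape for such a fusion framework is to pre-filter each sensor's readings before they reach the grid and then combine the per-sensor grids by summing log-odds (valid under a conditional-independence assumption). The sketch below is our own simplified stand-in for the paper's dedicated filter, not its actual design; the reliable-range bounds are illustrative:

```python
def prefilter(readings, r_min, r_max):
    """A simplified stand-in for a 'dedicated filter': drop range readings
    outside the sensor's reliable band before they reach the grid."""
    return [r for r in readings if r_min <= r <= r_max]

def fuse_log_odds(grids, cell):
    """Fuse per-sensor occupancy grids (cell -> log-odds, 0.0 = unknown)
    by summing their log-odds, assuming conditionally independent sensors."""
    return sum(g.get(cell, 0.0) for g in grids)
```

An ultrasonic grid and an LRF grid built separately this way would then agree on obstacles both sensors see, while a single noisy reading has limited effect on the fused cell.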


Author(s):  
Lee Gim Hee ◽  
Marcelo H. Ang Jr.

The development of autonomous mobile robots is continuously gaining importance, particularly in the military for surveillance as well as in industry for inspection and material-handling tasks. Another emerging market with enormous potential is mobile robots for entertainment. A fundamental requirement for autonomous mobile robots in most of their applications is the ability to navigate from a point of origin to a given goal. The mobile robot must be able to generate a collision-free path that connects the point of origin and the given goal. Some of the key algorithms for mobile robot navigation are discussed in this article.
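The simplest family of such algorithms searches a discretized map for a collision-free path; a breadth-first search over a 4-connected occupancy grid, shown below as a generic illustration (the article itself surveys several algorithms, not this one specifically), already yields a shortest path in grid steps:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest collision-free path on a 4-connected occupancy grid
    (True = obstacle). Returns a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                 # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and nb not in prev:
                prev[nb] = cell
                queue.append(nb)
    return None                          # goal unreachable
```

Practical planners refine this idea with heuristics (A*), costmaps, or continuous-space sampling, but the collision-free requirement stated above is the same.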


2015 ◽  
Vol 2 (1-2.) ◽  
Author(s):  
Gergely Nagymáté

The spread of mobile robots is becoming more significant nowadays, owing to their ability to perform tasks that are dangerous, uncomfortable, or impossible for people. A mobile robot must be endowed with a wide variety of sensors (cameras, microphones, proximity sensors, etc.) and processing units that enable it to navigate in its environment. This is generally carried out with unique, small-series and thus expensive equipment. This paper describes the concept of a mobile robot whose control unit integrates the processing and the main sensor functionalities into one mass-produced device, an Android smartphone. The robot is able to perform tasks such as tracking colored objects or human faces and orienting itself. In the meantime, it avoids obstacles and keeps a set distance between the target and itself. It is also able to communicate verbally.


2015 ◽  
Vol 27 (4) ◽  
pp. 392-400 ◽  
Author(s):  
Keita Kurashiki ◽  
Mareus Aguilar ◽  
Sakon Soontornvanichkit

[Figure: Mobile robot with a stereo camera] Autonomous mobile robots have recently been an active research area. In Japan, the Tsukuba Challenge has been held annually since 2007 in order to realize autonomous mobile robots that coexist safely with human beings in society. Through the technological incentives of this effort, laser range finder (LRF) based navigation has rapidly improved. A remaining issue with these techniques is reducing the amount of prior information, because most of them require a precise 3D model of the environment, which is poor in both maintainability and scalability. On the other hand, in spite of intensive studies on vision-based navigation using cameras, no robot in the Challenge has achieved full camera navigation. In this paper, an image-based control law to follow the road boundary is proposed. This method is part of a topological navigation scheme intended to reduce prior information and enhance the scalability of the map. As the controller is designed based on the interaction model between the robot motion and the image features in the front image, the method is robust to camera calibration error. The proposed controller is tested through several simulations and indoor/outdoor experiments to verify its performance and robustness. Finally, our results in the Tsukuba Challenge 2014 using the proposed controller are presented.
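An image-based boundary-following law of this general kind regulates features of the boundary as seen in the front image, typically its lateral position and apparent angle, toward reference values. The proportional sketch below is only a caricature of the idea; the feature choice, gains, and reference value are our assumptions, not the paper's controller:

```python
def boundary_follow(feature_x, feature_angle, x_ref=0.3, k_x=1.5, k_a=0.8):
    """Steering command from image features of the road boundary:
    `feature_x` is the boundary's normalized lateral position in the
    front image, `feature_angle` its apparent slope. Driving both
    errors to zero keeps the robot at a fixed offset from the boundary.
    Gains and reference are illustrative."""
    return -(k_x * (feature_x - x_ref) + k_a * feature_angle)
```

Because the command is computed directly from image-plane quantities rather than from a reconstructed 3D pose, small camera calibration errors perturb the gains rather than the equilibrium, which is the robustness property the abstract refers to.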


2018 ◽  
Vol 30 (4) ◽  
pp. 540-551 ◽  
Author(s):  
Shingo Nakamura ◽  
Tadahiro Hasegawa ◽  
Tsubasa Hiraoka ◽  
Yoshinori Ochiai ◽  
...  

The Tsukuba Challenge is a competition in which autonomous mobile robots run on a route set on a public road in a real environment. Their task includes not only simple running but also finding multiple specific persons at the same time. This study proposes a method for realizing such person searching. While many person-searching algorithms use a laser sensor and a camera in combination, our method uses only an omnidirectional camera. The search target is detected using a convolutional neural network (CNN) that classifies the search target. Training a CNN requires a great amount of data, for which pseudo images created by composition are used. Our method is implemented in an autonomous mobile robot, and its performance was verified in the Tsukuba Challenge 2017.
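The pseudo-image composition step, pasting a cutout of the search target into background scenes to mass-produce labeled training images, can be sketched as below. Images are reduced to 2D pixel grids here to keep the sketch self-contained; the real pipeline, image sizes, and blending are the authors', not shown:

```python
import random

def composite(background, target, seed=None):
    """Create one pseudo training image by pasting a target cutout
    (a 2D list of pixels) at a random position in a background image.
    Returns the composite and the paste position as its label."""
    rng = random.Random(seed)
    h, w = len(background), len(background[0])
    th, tw = len(target), len(target[0])
    top = rng.randrange(h - th + 1)
    left = rng.randrange(w - tw + 1)
    img = [row[:] for row in background]       # copy; background is untouched
    for r in range(th):
        for c in range(tw):
            img[top + r][left + c] = target[r][c]
    return img, (top, left)
```

Repeating this over many backgrounds and positions yields the labeled positives the CNN classifier is trained on, without hand-annotating real scenes.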

