Autonomous Mobile Robots and Development of Vision Based Automotive Assistance Systems

2011 ◽  
Vol 121-126 ◽  
pp. 2416-2420 ◽  
Author(s):  
Yan Fen Mao ◽  
Hans Wiedmann ◽  
Ming Chen

The paper describes an autonomous mobile robot named Robotino®, and how it is used to educate Bachelor students in the majors AES (Automotive Engineering & Service) and MT (Mechatronics) at CDHAW, Tongji University. A fine-positioning project using image processing is introduced, and vision-based functions from Robotino®View are presented. It is also sketched out how this system can be used as a research platform for automotive assistance systems.

Author(s):  
Gintautas Narvydas ◽  
Vidas Raudonis ◽  
Rimvydas Simutis

In the control of autonomous mobile robots there are two types of control, global and local, with corresponding global and local tasks to solve. This chapter concentrates on local tasks and shows that robots can learn to cope with some of them within minutes. The main idea of the chapter is to show that, when creating intelligent control systems for autonomous mobile robots, the beginning is most important: we have to transfer as much human knowledge and human expert-operator skill as possible into the intelligent control system. A successful transfer ensures fast and good results. One of the most advanced techniques in robotics is on-line learning by an autonomous mobile robot from an expert’s demonstrations; this technique is briefly described in this chapter. Wall following is taken as an example of a local task. The main goal of our experiment is to teach the autonomous mobile robot, within 10 minutes, to follow the wall of a maze as fast and as precisely as possible. This task can also be transformed into circumnavigating an obstacle on the left or on the right. The main part of the suggested control system is a small feed-forward artificial neural network. In particular critical situations, “If-Then” rules take over control, but our goal is to minimize the possibility that these rules start controlling the robot. The aim of the experiment is to implement the proposed technique on a real robot. This technique reaches the desired control capabilities much faster than Evolutionary or Genetic Algorithms, or than creating the control system by hand using “If-Then” rules or Fuzzy Logic. To evaluate the quality of the intelligent control system, we calculate objective function values and the percentage of the robot’s work loops in which the “If-Then” rules control the robot.
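The abstract's combination of a small feed-forward network learned from expert commands with "If-Then" rules for critical situations can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network sizes, learning rate, and the single collision rule are all assumptions.

```python
import numpy as np

def forward(params, sensors):
    """Tiny feed-forward net: distance-sensor readings -> (left, right) wheel speeds."""
    W1, b1, W2, b2 = params
    h = np.tanh(sensors @ W1 + b1)      # hidden layer
    return h @ W2 + b2                  # linear output: wheel speeds

def control(params, sensors, critical_dist=0.05):
    """Hybrid controller: the net drives normally; an 'If-Then' rule takes
    over only in critical situations (here: wall dangerously close)."""
    if sensors.min() < critical_dist:         # critical situation
        return np.array([-0.2, 0.2]), True    # rule: turn away in place
    return forward(params, sensors), False    # normal: learned behaviour

def train_step(params, sensors, expert_cmd, lr=0.01):
    """One on-line learning step from an expert demonstration: nudge the
    output layer toward the expert's wheel command (delta rule; hidden
    weights are kept fixed in this sketch)."""
    W1, b1, W2, b2 = params
    h = np.tanh(sensors @ W1 + b1)
    err = (h @ W2 + b2) - expert_cmd
    W2 -= lr * np.outer(h, err)
    b2 -= lr * err
    return (W1, b1, W2, b2)
```

Repeating `train_step` over a few minutes of demonstration data is what lets the learned controller, rather than the fallback rules, handle most work loops.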


2016 ◽  
Vol 28 (4) ◽  
pp. 461-469 ◽  
Author(s):  
Tomoyoshi Eda ◽  
Tadahiro Hasegawa ◽  
Shingo Nakamura ◽  
Shin’ichi Yuta

[Figure: Autonomous mobile robots entered in the Tsukuba Challenge 2015] This paper describes a self-localization method for autonomous mobile robots entered in the Tsukuba Challenge 2015. One of the important issues for autonomous mobile robots is accurate self-localization. An occupancy grid map created manually before the run has typically been used to estimate the self-localization of autonomous mobile robots; however, it is difficult to create an accurate map of a complex course. We created an occupancy grid map by combining local grid maps built using a laser range finder (LRF) with wheel odometry. In addition, the self-localization of the mobile robot was calculated by integrating the localization estimated by map matching with wheel odometry information. The experimental results in the final run of the Tsukuba Challenge 2015 showed that the mobile robot traveled autonomously up to the 600 m point of the course, where the occupancy grid map ended.
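The integration of a map-matched pose estimate with wheel odometry can be illustrated with a simple complementary blend. This is only a sketch of the general idea, not the paper's estimator; the fixed `map_weight` and the pose convention are assumptions.

```python
import numpy as np

def fuse_pose(odom_pose, map_pose, map_weight):
    """Blend the wheel-odometry pose with the pose estimated by matching
    the LRF scan to the occupancy grid map.
    map_weight in [0, 1]: 0 trusts odometry only, 1 trusts the map only.
    Poses are (x, y, theta); theta is blended on the circle to avoid
    wrap-around artefacts near +/- pi."""
    odom_pose = np.asarray(odom_pose, float)
    map_pose = np.asarray(map_pose, float)
    fused = (1 - map_weight) * odom_pose + map_weight * map_pose
    fused[2] = np.arctan2(
        (1 - map_weight) * np.sin(odom_pose[2]) + map_weight * np.sin(map_pose[2]),
        (1 - map_weight) * np.cos(odom_pose[2]) + map_weight * np.cos(map_pose[2]),
    )
    return fused
```

In practice the weight would reflect the scan-matching quality, falling back to pure odometry where the map ends, as happened at the 600 m point.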


2018 ◽  
Vol 30 (4) ◽  
pp. 540-551 ◽  
Author(s):  
Shingo Nakamura ◽  
Tadahiro Hasegawa ◽  
Tsubasa Hiraoka ◽  
Yoshinori Ochiai ◽  
...  

The Tsukuba Challenge is a competition in which autonomous mobile robots run a route set on public roads in a real environment. Their task includes not only running the route but also finding multiple specific persons at the same time. This study proposes a method to realize such person searching. While many person-searching algorithms use a laser sensor and a camera in combination, our method uses only an omnidirectional camera. The search target is detected using a convolutional neural network (CNN) that performs classification of the search target. Training a CNN requires a large amount of data, for which pseudo images created by composition are used. Our method is implemented on an autonomous mobile robot, and its performance has been verified in the Tsukuba Challenge 2017.
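The pseudo-image composition used to generate CNN training data can be sketched as pasting a cut-out of the target onto background frames. This is a deliberately naive illustration (no blending, scaling, or lighting variation, which a real pipeline would add); all names are illustrative.

```python
import numpy as np

def compose_pseudo_image(background, target_patch, rng):
    """Create one pseudo training image by pasting a cut-out of the search
    target onto a background frame at a random position, together with its
    bounding-box label (x, y, w, h)."""
    img = background.copy()
    H, W = img.shape[:2]
    h, w = target_patch.shape[:2]
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    img[y:y + h, x:x + w] = target_patch      # naive paste, no blending
    return img, (x, y, w, h)

def make_dataset(background, target_patch, n, seed=0):
    """Generate n positive pseudo images; negatives would be plain frames."""
    rng = np.random.default_rng(seed)
    return [compose_pseudo_image(background, target_patch, rng) for _ in range(n)]
```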


2011 ◽  
Vol 338 ◽  
pp. 731-736
Author(s):  
Ying Yu ◽  
Yan Fen Mao ◽  
Hans Wiedmann ◽  
Yu Wang

At CDHAW of Tongji University an autonomous mobile robot system is currently used to educate Bachelor students in the majors AES (Automotive Engineering & Service) and MT (Mechatronics), and for development and research on topics from both majors. One such research project is introduced in this paper. It aims to reduce the inaccuracy of the robot's built-in odometry-based navigation system by developing a flying referencing system. It is described how the well-known raster navigation is extended from a single-point raster to a grid of lines, which improves navigation accuracy considerably compared with using the single points of the conventional raster navigation method.
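The benefit of a grid of reference lines over single raster points can be sketched as follows: each time the robot detects a crossing of a known grid line, the odometry estimate along that axis is snapped to the line, bounding the drift in that axis to half a grid pitch. This is a simplified stand-in for the paper's flying referencing system; the pitch values and the detection mechanism are assumptions.

```python
def correct_pose_on_line(pose, pitch_x, pitch_y, crossing_axis):
    """pose = (x, y) odometry estimate. crossing_axis = 'x' means the robot
    just crossed a line of constant x (lines every pitch_x metres). Only
    the crossed coordinate is corrected; the other keeps its odometry
    value, so alternately crossing lines of both orientations bounds the
    error in both axes."""
    x, y = pose
    if crossing_axis == 'x':
        x = round(x / pitch_x) * pitch_x   # snap to the nearest x-line
    else:
        y = round(y / pitch_y) * pitch_y   # snap to the nearest y-line
    return (x, y)
```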


Author(s):  
Casi Setianingsih ◽  
Kusprasapta Mutijarsa ◽  
Muhammad Ary Murti

An autonomous robot is a robot able to work independently without direct human control. The robot operates based on its sensors, making its own decisions to complete missions in its working environment. In the real world, a robot's working environment is highly dynamic, constantly changing, and unstructured. Building a model of an unstructured environment is very difficult, and obtaining an exact mathematical model of such an environment is nearly impossible. To build autonomous mobile robots able to work in unstructured and dynamic environments, an adaptive method capable of learning is required. Based on this problem, this research designs an autonomous mobile robot with a behavior-based architecture that can learn and work independently in an unstructured environment, using the Reinforcement Learning method. The aim of applying this method is for the robot to learn and adapt to the unstructured environment. The robot is further developed to complete the mission of finding a target at a certain position based on information obtained from its sensors. Simulation results show that the Reinforcement Learning algorithm was successfully applied to the behavior-based control architecture of the autonomous mobile robot, with an accuracy of 85.71%.
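The abstract's idea of a robot learning to find a target by reinforcement alone can be illustrated with minimal tabular Q-learning on an empty grid workspace. This is a generic sketch, not the paper's architecture; grid size, rewards, and hyperparameters are all illustrative.

```python
import numpy as np

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # four movement behaviours

def train_q(grid_size=5, target=(4, 4), episodes=800, seed=0):
    """Tabular Q-learning: the robot learns, from reward alone, to reach a
    target cell starting from (0, 0). Reward: +1 at the target, small
    step cost elsewhere."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((grid_size, grid_size, 4))
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            a = int(rng.integers(4)) if rng.random() < eps else int(Q[s].argmax())
            ns = (min(max(s[0] + MOVES[a][0], 0), grid_size - 1),
                  min(max(s[1] + MOVES[a][1], 0), grid_size - 1))
            r = 1.0 if ns == target else -0.01
            Q[s][a] += alpha * (r + gamma * Q[ns].max() - Q[s][a])
            s = ns
            if s == target:
                break
    return Q

def greedy_path(Q, start=(0, 0), target=(4, 4), max_steps=20):
    """Follow the learned greedy policy from start toward the target."""
    s, path = start, [start]
    for _ in range(max_steps):
        a = int(Q[s].argmax())
        s = (min(max(s[0] + MOVES[a][0], 0), Q.shape[0] - 1),
             min(max(s[1] + MOVES[a][1], 0), Q.shape[1] - 1))
        path.append(s)
        if s == target:
            break
    return path
```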


Author(s):  
Evangelos Georgiou ◽  
Jian S. Dai ◽  
Michael Luck

In small mobile robot research, autonomous platforms are severely constrained in navigation environments by the limited accuracy of sensory data available to perform critical path-planning, obstacle-avoidance, and self-localization tasks. The motivation for this work is to equip small autonomous mobile robots with a local stereo vision system that provides an accurate reconstruction of the navigation environment for critical navigation tasks. This paper presents the KCLBOT, a small autonomous mobile robot with a stereo vision system, developed in King's College London's Centre for Robotic Research.


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Caihong Li ◽  
Yong Song ◽  
Fengying Wang ◽  
Zhenying Liang ◽  
Baoyan Zhu

This paper proposes a fusion iterations strategy based on the Standard map to generate a chaotic path planner for a mobile robot on surveillance missions. The distances between adjacent iteration points of the chaotic trajectories produced by the Standard map are too large for the robot to track, so a fusion iterations strategy combining large-region iterations with small-grid-region iterations is designed to resolve the problem. The small-region iterations perform the iterations of the Standard map within each of the divided small grids; dividing the whole surveillance workspace into small grids reduces the adjacent distances. The large-region iterations combine all the small-grid iterations into a whole, switch automatically among the small grids, and maintain the chaotic characteristics of the robot's motion to guarantee the surveillance mission. Compared with simply applying the Standard map over the whole workspace, the proposed strategy decreases the adjacent distances according to the size of the small grids and is convenient for the robot to track.
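The core mechanism, iterating the Standard (Chirikov) map inside one small grid cell so that consecutive waypoints stay trackably close, can be sketched as below. The stochasticity parameter K, seed, and cell geometry are illustrative; the large-region switching logic is omitted.

```python
import numpy as np

def standard_map(theta, p, K=1.2):
    """One iteration of the Standard (Chirikov) map on the torus [0, 2*pi)^2."""
    p_new = (p + K * np.sin(theta)) % (2 * np.pi)
    theta_new = (theta + p_new) % (2 * np.pi)
    return theta_new, p_new

def chaotic_waypoints(n, cell_origin, cell_size, seed=(0.3, 0.7), K=1.2):
    """Generate n surveillance waypoints inside one small grid cell by
    scaling the map's torus onto the cell. Because the cell is small,
    adjacent waypoints are at most a cell diagonal apart, which is the
    point of the small-grid iterations."""
    theta, p = seed
    pts = []
    for _ in range(n):
        theta, p = standard_map(theta, p, K)
        x = cell_origin[0] + cell_size * theta / (2 * np.pi)
        y = cell_origin[1] + cell_size * p / (2 * np.pi)
        pts.append((x, y))
    return pts
```

The large-region iterations would then hand the state from one cell's iteration to the next, preserving the chaotic (hence unpredictable) coverage over the whole workspace.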


2018 ◽  
Vol 15 (6) ◽  
pp. 172988141881263 ◽  
Author(s):  
Paul Quillen ◽  
Kamesh Subbarao

This article puts forth a framework using model-based techniques for path planning and guidance for an autonomous mobile robot in a constrained environment. The path plan is synthesized using a numerical navigation function algorithm that forms its potential contour levels based on the "minimum control effort." Then, an improved nonlinear model predictive control (NMPC) approach is employed to generate high-level guidance commands for the mobile robot to track a trajectory fitted along the planned path leading to the goal. A backstepping-like nonlinear guidance law is also implemented for comparison with the NMPC formulation. Finally, the performance of the resulting framework using both nonlinear guidance techniques is verified in simulation where the environment is constrained by the presence of static obstacles.


1991 ◽  
Vol 3 (5) ◽  
pp. 379-386
Author(s):  
Hesin Sai ◽  
Yoshikuni Okawa

As part of a guidance system for mobile robots operating on a wide, flat floor, such as an ordinary factory or a gymnasium, we have proposed a special-purpose sign. It consists of a cylinder with four slits and a fluorescent light placed on the axis of the cylinder. Two of the slits are parallel to each other, and the other two are angled. A robot obtains an image of the sign with a TV camera. After thresholding, we have four bright sets of pixels which correspond to the four slits of the cylinder. By measuring the relative distances between the four points, the distance and the angle to the direction of the sign can be computed using simple geometrical equations. Using a personal computer with image processing capability, we have investigated the accuracy of the proposed position identification method and compared the experimental results against a theoretical analysis of the measurement error. The data show good agreement between the analysis and the experiments. Finally, we have built a mobile robot, which has three microprocessors and a TV camera, and performed several control experiments for trajectory following.
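A hedged sketch of the kind of geometry involved: with a pinhole camera, the image-plane separation of the two parallel slits gives range by similar triangles, and the horizontal position of the slit pattern gives bearing. The paper's actual equations also exploit the two angled slits; the functions below are illustrative, not the paper's method.

```python
import math

def sign_distance(slit_separation_px, real_separation_m, focal_px):
    """Pinhole range estimate from the image separation of the two
    parallel slits: distance = f * real_separation / image_separation."""
    return focal_px * real_separation_m / slit_separation_px

def sign_bearing(center_px, image_width_px, focal_px):
    """Bearing to the sign from the horizontal image position of the
    slit pattern's centre (positive to the right of the optical axis)."""
    return math.atan2(center_px - image_width_px / 2, focal_px)
```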

