Development of Mobile Robot Elevator Utility System

1999 ◽  
Vol 11 (1) ◽  
pp. 78-85 ◽  
Author(s):  
Kazuhiro Mima ◽  
◽  
Masahiro Endou ◽  
Aiguo Ming ◽  
Chisato Kanamori ◽  
...  

This paper describes an elevator utility system that enables autonomous mobile robots to travel between floors in office buildings. Attachments that emulate the action of human fingers were developed so that they can be retrofitted to elevators without affecting the elevators' inner workings. These attachments, consisting of button-operation mechanisms, controllers, and infrared communication units, are remotely controlled by wireless commands from a robot. Because mobile robots must use elevators without interfering with the people who share them, courses of robot action are proposed, and a sensor system for detecting people or objects inside the elevator is presented. A prototype was developed and its usefulness verified experimentally. The concept is expected to be useful for service robots working in office buildings.
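As a rough illustration (not taken from the paper), the sketch below shows how a robot might sequence elevator use through the wireless button attachments and an in-car occupancy sensor; all class and method names (ElevatorAttachment, CarSensor, ride_elevator) are hypothetical placeholders.

```python
# Minimal sketch (hypothetical interfaces) of a robot sequencing elevator use
# through button-pushing attachments and an in-car occupancy sensor.

import time


class ElevatorAttachment:
    """Hypothetical wireless interface to the button-pushing attachments."""

    def press_call_button(self, direction: str) -> None:
        print(f"[IR/wireless] press hall button: {direction}")

    def press_floor_button(self, floor: int) -> None:
        print(f"[IR/wireless] press car button: floor {floor}")

    def door_is_open(self) -> bool:
        return True  # placeholder for the attachment's door sensor


class CarSensor:
    """Hypothetical sensor that detects people or objects inside the car."""

    def car_is_clear(self) -> bool:
        return True  # placeholder for the range/IR sensor reading


def ride_elevator(attachment, sensor, current_floor, target_floor):
    """Call the elevator, wait until the car is free of people, then ride."""
    direction = "up" if target_floor > current_floor else "down"
    attachment.press_call_button(direction)

    # Yield to human users: only enter when the doors are open and the car is empty.
    while not (attachment.door_is_open() and sensor.car_is_clear()):
        time.sleep(0.5)

    attachment.press_floor_button(target_floor)
    print(f"riding from floor {current_floor} to {target_floor}")


if __name__ == "__main__":
    ride_elevator(ElevatorAttachment(), CarSensor(), current_floor=1, target_floor=5)
```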

2010 ◽  
Vol 7 ◽  
pp. 109-117
Author(s):  
O.V. Darintsev ◽  
A.B. Migranov ◽  
B.S. Yudintsev

The article deals with the development of a high-speed sensor system for a mobile robot, used in conjunction with an intelligent trajectory-planning method under highly dynamic working-space conditions.


Energies ◽  
2018 ◽  
Vol 12 (1) ◽  
pp. 27 ◽  
Author(s):  
Linfei Hou ◽  
Liang Zhang ◽  
Jongwon Kim

To improve the energy efficiency of mobile robots, a novel energy modeling method is proposed in this paper. Through the energy model, the robot can calculate and predict its energy consumption, which guides energy-efficient strategies. The energy consumption of the mobile robot is first modeled by considering three major factors: the sensor system, the control system, and the motion system. The relationships among the three systems are expressed by formulas. The model is then applied and experimentally tested on a four-wheeled Mecanum mobile robot, and the power measurement methods are discussed. Because the energy consumption of the sensor system and the control system is at the milliwatt level, a Monsoon power monitor was used to measure the electrical power of these systems accurately. The experimental results showed that the proposed energy model can be used to predict the energy consumption of the robot's movement processes and can also efficiently support the analysis of the energy consumption characteristics of mobile robots.
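A minimal sketch of the three-part decomposition described above (sensor, control, and motion systems); the power values, the rolling-friction motion model, and the drivetrain efficiency below are illustrative assumptions, not the coefficients of the paper.

```python
# Illustrative energy model: E_total = E_sensor + E_control + E_motion.

def sensor_energy(p_sensor_w: float, t_s: float) -> float:
    """Sensor-system energy: constant power (milliwatt level) over time t_s [s]."""
    return p_sensor_w * t_s


def control_energy(p_control_w: float, t_s: float) -> float:
    """Control-system energy: constant power over time t_s [s]."""
    return p_control_w * t_s


def motion_energy(mass_kg: float, friction_coeff: float,
                  distance_m: float, eta: float = 0.7) -> float:
    """Motion-system energy for straight driving at constant speed.

    Rolling-friction work divided by drivetrain efficiency eta; an
    illustrative stand-in for a Mecanum-wheel motion model.
    """
    g = 9.81
    return friction_coeff * mass_kg * g * distance_m / eta


def total_energy(distance_m: float, speed_mps: float) -> float:
    t = distance_m / speed_mps
    return (sensor_energy(0.05, t)        # 50 mW sensor system (assumed)
            + control_energy(0.2, t)      # 200 mW control system (assumed)
            + motion_energy(20.0, 0.02, distance_m))


if __name__ == "__main__":
    print(f"predicted energy for 10 m at 0.5 m/s: {total_energy(10.0, 0.5):.1f} J")
```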


Author(s):  
Gintautas Narvydas ◽  
Vidas Raudonis ◽  
Rimvydas Simutis

In the control of autonomous mobile robots there are two levels of control, global and local, with corresponding global and local tasks to solve. This chapter concentrates on local tasks and shows that robots can learn to cope with some of them within minutes. The main idea of the chapter is that, when creating intelligent control systems for autonomous mobile robots, the beginning is most important: as much human knowledge and human expert-operator skill as possible must be transferred into the intelligent control system. Successful transfer ensures fast and good results. One of the most advanced techniques in robotics is on-line learning of an autonomous mobile robot from an expert's demonstrations, and this technique is briefly described in the chapter. Wall following is taken as an example of a local task. The main goal of our experiment is to teach the autonomous mobile robot, within 10 minutes, to follow the wall of a maze as fast and as precisely as possible. The task can also be reformulated as circumnavigating obstacles on the left or on the right. The main part of the suggested control system is a small feed-forward artificial neural network. In certain critical situations, "If-Then" rules take over control, but our goal is to minimize the possibility that these rules start controlling the robot. The aim of the experiment is to implement the proposed technique on a real robot. The technique reaches the desired control capabilities much faster than Evolutionary or Genetic Algorithms, or than creating the control system by hand using "If-Then" rules or Fuzzy Logic. To evaluate the quality of the intelligent control system, we calculate objective function values and the percentage of robot work loops in which the "If-Then" rules control the robot.
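As a rough sketch of the control scheme described above (with assumed network sizes, sensor count, and collision threshold), the code below uses a small feed-forward network that maps range readings to a steering command, updated one sample at a time from an expert's demonstrated command, while hand-written "If-Then" rules take over only in critical situations.

```python
# Sketch: small feed-forward NN wall follower learned from demonstrations,
# with an "If-Then" safety override.  Sizes and thresholds are assumptions.

import numpy as np


class WallFollowerNN:
    def __init__(self, n_inputs=3, n_hidden=5, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w1 = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
        self.w2 = rng.normal(scale=0.5, size=(1, n_hidden))

    def steer(self, ranges):
        """Map normalized range readings to a steering command in [-1, 1]."""
        h = np.tanh(self.w1 @ np.asarray(ranges))
        return float(np.tanh(self.w2 @ h)[0])

    def train_step(self, ranges, demo_steer, lr=0.05):
        """One gradient step toward the expert-operator's demonstrated command."""
        x = np.asarray(ranges)
        h = np.tanh(self.w1 @ x)
        y = np.tanh(self.w2 @ h)
        err = y - demo_steer
        # Backpropagation through the two tanh layers.
        dy = err * (1 - y ** 2)
        self.w2 -= lr * np.outer(dy, h)
        dh = (self.w2.T @ dy).ravel() * (1 - h ** 2)
        self.w1 -= lr * np.outer(dh, x)


def control(net, ranges, collision_threshold=0.15):
    """NN control with an If-Then override for critical situations."""
    if min(ranges) < collision_threshold:       # critical: obstacle too close
        return (1.0 if ranges[0] < ranges[-1] else -1.0), "rule"
    return net.steer(ranges), "nn"


if __name__ == "__main__":
    net = WallFollowerNN()
    # One demonstrated sample: wall close on the left -> steer slightly right.
    net.train_step([0.3, 0.8, 1.0], demo_steer=0.4)
    print(control(net, [0.3, 0.8, 1.0]))
```

The fraction of control loops that fall into the "rule" branch can be logged directly from the second return value, which matches the evaluation measure mentioned above.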


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Caihong Li ◽  
Yong Song ◽  
Fengying Wang ◽  
Zhenying Liang ◽  
Baoyan Zhu

This paper proposes a fusion iteration strategy based on the Standard map to generate a chaotic path planner for a mobile robot on surveillance missions. The distances between adjacent iteration points of the chaotic trajectories produced by the Standard map are too large for the robot to track, so a fusion iteration strategy that combines large-region iterations with small-grid-region iterations is designed to resolve the problem. The small-region iterations perform the Standard-map iterations within each of the divided small grids; dividing the whole surveillance workspace into small grids reduces the distances between adjacent points. The large-region iterations combine all the small-grid iterations into a whole, switch automatically among the small grids, and maintain the chaotic characteristics of the robot's motion to guarantee the surveillance mission. Compared with simply applying the Standard map to the whole workspace, the proposed strategy decreases the adjacent distances according to the size of the small grids and is convenient for the robot to track.
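A minimal sketch of this fusion idea, assuming an illustrative workspace size, grid division, and Standard-map parameter K: the map is iterated inside one small grid cell at a time so that adjacent waypoints stay close, and the planner switches cells to cover the whole workspace.

```python
# Sketch: chaotic coverage waypoints from the Standard map, fused over a grid.

import math


def standard_map_step(x, p, K=1.2):
    """One iteration of the Standard map on the torus [0, 2*pi) x [0, 2*pi)."""
    p_next = (p + K * math.sin(x)) % (2 * math.pi)
    x_next = (x + p_next) % (2 * math.pi)
    return x_next, p_next


def chaotic_waypoints(workspace=10.0, grid=4, steps_per_cell=20):
    """Chaotic coverage path: small-region iterations fused over a grid of cells."""
    cell = workspace / grid
    x, p = 0.5, 0.3                     # initial condition of the chaotic map
    waypoints = []
    for gy in range(grid):              # large-region iterations: visit each cell
        for gx in range(grid):
            for _ in range(steps_per_cell):   # small-region iterations in one cell
                x, p = standard_map_step(x, p)
                # Scale torus coordinates into the current grid cell so adjacent
                # waypoints stay within one cell and remain trackable by the robot.
                wx = gx * cell + (x / (2 * math.pi)) * cell
                wy = gy * cell + (p / (2 * math.pi)) * cell
                waypoints.append((wx, wy))
    return waypoints


if __name__ == "__main__":
    pts = chaotic_waypoints()
    print(len(pts), "waypoints; first three:", pts[:3])
```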


2016 ◽  
Vol 28 (4) ◽  
pp. 461-469 ◽  
Author(s):  
Tomoyoshi Eda ◽  
◽  
Tadahiro Hasegawa ◽  
Shingo Nakamura ◽  
Shin’ichi Yuta

[Figure: Autonomous mobile robots entered in the Tsukuba Challenge 2015] This paper describes a self-localization method for autonomous mobile robots entered in the Tsukuba Challenge 2015. One of the important issues for autonomous mobile robots is accurate self-localization. An occupancy grid map, created manually in advance, has typically been used to estimate the self-localization of autonomous mobile robots; however, it is difficult to create an accurate map of a complex course. We created an occupancy grid map by combining local grid maps built from a laser range finder (LRF) with wheel odometry. In addition, the self-localization of the mobile robot was calculated by integrating the self-localization estimated by matching against the map with wheel odometry information. The experimental results of the final run of the Tsukuba Challenge 2015 showed that the mobile robot traveled autonomously up to the 600 m point of the course, where the occupancy grid map ended.
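A minimal sketch, with assumed fusion weights rather than the authors' implementation, of how a pose obtained by matching LRF data against the occupancy grid map might be integrated with wheel odometry:

```python
# Sketch: blending a map-matched pose with a wheel-odometry prediction.

import math


def predict_with_odometry(pose, d_dist, d_theta):
    """Dead-reckoning update of pose = (x, y, theta) from wheel odometry."""
    x, y, theta = pose
    theta += d_theta
    return (x + d_dist * math.cos(theta), y + d_dist * math.sin(theta), theta)


def fuse(odom_pose, map_pose, match_score, gain=0.5):
    """Blend the map-matched pose into the odometry pose.

    The correction is weighted by a matching score in [0, 1], so a poor match
    (e.g. beyond the mapped part of the course) leaves the odometry estimate
    essentially untouched.
    """
    w = gain * match_score
    x = (1 - w) * odom_pose[0] + w * map_pose[0]
    y = (1 - w) * odom_pose[1] + w * map_pose[1]
    # Blend headings through their shortest angular difference.
    dth = math.atan2(math.sin(map_pose[2] - odom_pose[2]),
                     math.cos(map_pose[2] - odom_pose[2]))
    return (x, y, odom_pose[2] + w * dth)


if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    pose = predict_with_odometry(pose, d_dist=1.0, d_theta=0.05)
    pose = fuse(pose, map_pose=(0.95, 0.10, 0.04), match_score=0.8)
    print(pose)
```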


Author(s):  
KS Nagla ◽  
Moin Uddin ◽  
Dilbag Singh

Sensor-based perception of the environment is an emerging area of mobile robot research in which sensors play a pivotal role. For autonomous mobile robots, the fundamental requirement is the conversion of range information into a high-level internal representation. An internal representation in the form of an occupancy grid is commonly used in autonomous mobile robots because of its various advantages. Several sensors, such as vision sensors, laser range finders, and ultrasonic and infrared sensors, play roles in mapping; however, sensor information failure, sensor inaccuracies, noise, and slow response are the major causes of mapping errors. To improve the reliability of mobile robot mapping, multisensor data fusion is considered an optimal solution. This paper presents a novel sensor fusion framework architecture in which a dedicated filter (DF) is proposed to increase the robustness of the occupancy grid for indoor environments. The technique has been experimentally verified for different indoor test environments. The proposed configuration shows improvement in the occupancy grid with the implementation of dedicated filters.
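A minimal sketch of the general idea: a log-odds occupancy grid update preceded by a simple pre-filter stage that rejects implausible range readings. The thresholds and the filtering rule below are illustrative assumptions, not the paper's DF design.

```python
# Sketch: log-odds occupancy grid with a simple reading filter in front of it.

import numpy as np


class OccupancyGrid:
    def __init__(self, size=50):
        self.logodds = np.zeros((size, size))

    def update_cell(self, ij, occupied, l_occ=0.85, l_free=-0.4):
        """Standard log-odds update of one cell."""
        self.logodds[ij] += l_occ if occupied else l_free

    def probability(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))


def reading_filter(reading, last_reading, max_range=4.0, max_jump=1.0):
    """Reject out-of-range or abruptly jumping readings (noise, sensor failure)."""
    if reading <= 0.0 or reading >= max_range:
        return None
    if last_reading is not None and abs(reading - last_reading) > max_jump:
        return None
    return reading


if __name__ == "__main__":
    grid = OccupancyGrid()
    last = None
    for raw in [2.1, 2.2, 9.9, 2.15]:          # 9.9 m is an implausible outlier
        r = reading_filter(raw, last)
        if r is not None:
            grid.update_cell((25, 25), occupied=True)   # cell hit by the beam
            last = r
    print(grid.probability()[25, 25])
```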


Author(s):  
Lee Gim Hee ◽  
Marcelo H. Ang Jr.

The development of autonomous mobile robots is continuously gaining importance, particularly in the military for surveillance and in industry for inspection and material-handling tasks. Another emerging market with enormous potential is mobile robots for entertainment. A fundamental requirement for autonomous mobile robots in most of their applications is the ability to navigate from a point of origin to a given goal: the mobile robot must be able to generate a collision-free path that connects the two. Some of the key algorithms for mobile robot navigation are discussed in this article.
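As one common example of such an algorithm, the sketch below shows a grid-based A* planner that returns a collision-free path from an origin cell to a goal cell; it is a generic illustration, not necessarily one of the algorithms surveyed in the article.

```python
# Sketch: 4-connected A* on a 2D occupancy grid (0 = free, 1 = obstacle).

import heapq


def astar(grid, start, goal):
    def h(p):                                  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                       # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, 1e9)):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                                # no collision-free path exists


if __name__ == "__main__":
    world = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
    print(astar(world, (0, 0), (2, 0)))
```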


2015 ◽  
Vol 27 (4) ◽  
pp. 392-400 ◽  
Author(s):  
Keita Kurashiki ◽  
◽  
Mareus Aguilar ◽  
Sakon Soontornvanichkit

[Figure: Mobile robot with a stereo camera] Autonomous mobile robots have recently been an active research topic. In Japan, the Tsukuba Challenge has been held annually since 2007 in order to realize autonomous mobile robots that coexist safely with human beings in society. Through the technological incentives of this effort, laser range finder (LRF) based navigation has improved rapidly. A technical issue with these techniques is reducing the amount of prior information, because most of them require a precise 3D model of the environment, which is poor in both maintainability and scalability. On the other hand, in spite of intensive studies on vision-based navigation using cameras, no robot in the Challenge has achieved full camera navigation. In this paper, an image-based control law for following the road boundary is proposed. This method is part of a topological navigation approach intended to reduce prior information and enhance the scalability of the map. Because the controller is designed from the interaction model of the robot motion and the image features in the front image, the method is robust to camera calibration error. The proposed controller is tested through several simulations and indoor/outdoor experiments to verify its performance and robustness. Finally, our results in the Tsukuba Challenge 2014 using the proposed controller are presented.
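A minimal sketch, with assumed gains and image-feature definitions, of an image-based boundary-following law in this spirit: the steering command is computed directly from the boundary's pixel offset and orientation in the front image, without any 3D reconstruction.

```python
# Sketch: steering directly from image features of the road boundary.

def boundary_following_control(offset_px, angle_rad,
                               desired_offset_px=120.0,
                               k_offset=0.004, k_angle=0.8,
                               forward_speed=0.4):
    """Compute (v, omega) from the boundary line seen in the front image.

    offset_px : horizontal pixel distance of the boundary from the image center
    angle_rad : orientation of the boundary line in the image
    Because the gains act directly on image features, moderate camera
    calibration error mainly changes the effective gains, not the structure
    of the controller.
    """
    e_offset = offset_px - desired_offset_px
    omega = -(k_offset * e_offset + k_angle * angle_rad)
    return forward_speed, omega


if __name__ == "__main__":
    # Boundary drifted outward and tilted: the robot turns back toward it.
    print(boundary_following_control(offset_px=160.0, angle_rad=0.1))
```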


2018 ◽  
Vol 30 (4) ◽  
pp. 540-551 ◽  
Author(s):  
Shingo Nakamura ◽  
◽  
Tadahiro Hasegawa ◽  
Tsubasa Hiraoka ◽  
Yoshinori Ochiai ◽  
...  

The Tsukuba Challenge is a competition in which autonomous mobile robots run along a route set on public roads in a real environment. The task includes not only simple running but also finding multiple specific persons at the same time. This study proposes a method for realizing this person search. While many person-searching algorithms use a laser sensor and a camera in combination, our method uses only an omnidirectional camera. The search target is detected using a convolutional neural network (CNN) that performs classification of the search target. Training a CNN requires a large amount of data, for which pseudo images created by image composition are used. Our method is implemented on an autonomous mobile robot, and its performance has been verified in the Tsukuba Challenge 2017.
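A minimal sketch (not the authors' network or data pipeline) of the two ingredients described above: composing pseudo training images by pasting a target cut-out onto background photos, and a small CNN that classifies image patches as search target versus background. The PyTorch model and the composition routine below are illustrative assumptions.

```python
# Sketch: pseudo-image composition plus a small patch classifier.

import random

import torch
import torch.nn as nn
from PIL import Image


def compose_pseudo_image(background: Image.Image, target: Image.Image) -> Image.Image:
    """Paste the target cut-out at a random position/scale onto a background."""
    bg = background.copy()
    scale = random.uniform(0.3, 0.8)
    t = target.resize((int(target.width * scale), int(target.height * scale)))
    x = random.randint(0, max(0, bg.width - t.width))
    y = random.randint(0, max(0, bg.height - t.height))
    bg.paste(t, (x, y), t if t.mode == "RGBA" else None)
    return bg


class TargetClassifier(nn.Module):
    """Small CNN: 64x64 RGB patch -> target / background."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = TargetClassifier()
    patch = torch.rand(1, 3, 64, 64)          # a dummy 64x64 patch
    print(model(patch).softmax(dim=1))
```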

