Topological navigation of mobile robot using ID tag and WEB camera

Author(s):  
Weiguo Lin ◽  
Songmin Jia ◽  
Fei Yang ◽  
K. Takase

2009 ◽
Vol 147-149 ◽  
pp. 35-42 ◽  
Author(s):  
Dzmitry Kaliukhovich ◽  
Vladimir Golovko ◽  
Andreas Paczynski

In this paper, we present the mobile robot “MAX”, developed at the Systems Engineering Laboratory, Hochschule Ravensburg-Weingarten, which serves as a mock-up of an automated guided vehicle intended for transporting materials and connecting different parts of an industrial production line. The subject of the paper is autonomous robot motion along a specified track formed by single-colored insulation tape marked on the surface on which the robot moves. The paper focuses on track detection through real-time processing of the video stream from a web camera installed on the front of the robot, which is the key feature of this work. A brief overview of the construction of the mobile robot “MAX” is given, the line-following task and motion criteria are formulated, and several approaches to track detection and its representation with mean points are proposed. Three motion control algorithms derived from this representation are also presented and verified in experiments on the mobile robot “MAX” to demonstrate their practical suitability.
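The mean-point representation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the binary-image input (1 marks a pixel classified as tape-colored), and the normalised steering error are all assumptions added here for clarity.

```python
# Minimal sketch of a mean-point track representation. The camera frame is
# modeled as a binary image (nested lists), where 1 marks a tape pixel.

def mean_points(binary_image):
    """For each image row, return (row, mean column) of the track pixels."""
    points = []
    for y, row in enumerate(binary_image):
        cols = [x for x, v in enumerate(row) if v == 1]
        if cols:
            points.append((y, sum(cols) / len(cols)))
    return points

def steering_error(points, image_width):
    """Average horizontal offset of the track from the image centre,
    normalised to [-1, 1]; a natural input for a proportional controller."""
    if not points:
        return None  # track lost
    centre = (image_width - 1) / 2.0
    offset = sum(x for _, x in points) / len(points) - centre
    return offset / centre

# Example: a 4x5 frame in which the tape drifts to the right.
frame = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
]
print(steering_error(mean_points(frame), 5))  # 0.25 → steer right
```

A positive error steers the robot right, a negative one left; the three control algorithms in the paper would differ in how this error is mapped to wheel commands.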


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141667813 ◽  
Author(s):  
Clara Gomez ◽  
Alejandra Carolina Hernandez ◽  
Jonathan Crespo ◽  
Ramon Barber

The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events is presented, imitating human navigation through sensorimotor abilities and sensorial events. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The proposed system can be considered a perceptive navigation system, as the navigation process is based on the perception and recognition of natural and artificial landmarks, among other cues. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture eases the integration of new elements thanks to its modularity and the decoupling between modules. Finally, experiments have been carried out on several mobile robots, and their results show the feasibility of the proposed navigation system and the effectiveness of managing sensorial data integration as events.
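The idea of event-driven topological navigation can be sketched as a graph of places whose transitions are triggered by perceptual events (detected landmarks). This is an illustrative simplification, not the article's architecture; all class, place, and event names below are assumptions.

```python
# Sketch of event-driven topological navigation: the map is a graph of
# places, and each outgoing edge is labelled with the perceptual event
# (landmark detection) that triggers the transition along it.

from collections import deque

class TopologicalMap:
    def __init__(self):
        self.edges = {}  # place -> {event: next_place}

    def add_edge(self, place, event, next_place):
        self.edges.setdefault(place, {})[event] = next_place

    def plan(self, start, goal):
        """Breadth-first search for the shortest event sequence to the goal."""
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            place, events = queue.popleft()
            if place == goal:
                return events
            for event, nxt in self.edges.get(place, {}).items():
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, events + [event]))
        return None  # goal unreachable

m = TopologicalMap()
m.add_edge("hall", "door_detected", "office")
m.add_edge("hall", "corner_detected", "corridor")
m.add_edge("corridor", "sign_detected", "lab")
print(m.plan("hall", "lab"))  # ['corner_detected', 'sign_detected']
```

During execution, the robot moves until its perception modules report the next expected event, then switches to the following edge; the integration interface described in the article would be the component that fuses events from multiple sensors into this single stream.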


JOUTICA ◽  
2017 ◽  
Vol 2 (2) ◽  
Author(s):  
Samsul Arifin ◽  
Erwien Tjipta Wijaya

This research develops an autonomous mobile robot navigation system using a vision sensor in the form of a web camera. The robot's ability to find a path and avoid obstacles in an indoor environment is the key to successful navigation. One of the foundations of the robot navigation system is the processing of image information from the web camera, so a method is needed that converts this image information into data the computer can read more easily. The method used to solve this problem is Canny edge detection. Canny edge detection satisfies the main criteria for optimal edge detection: good localization, good detection, and a single clear response per edge. With these advantages, the Canny method produces a more representative image that closely approximates the real object. After the edge detection process is completed, the next step is to identify the paths and obstacles present; the identified paths and obstacles are then used as a model to determine the direction in which the robot will run. The whole process of computation and control is carried out on a Raspberry Pi, while image processing uses the OpenCV library.
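In the system described, the edge map would come from a single OpenCV call such as `cv2.Canny(frame, low, high)`. The pure-Python sketch below illustrates only the hysteresis (double-thresholding) stage that gives Canny its clear single response, applied to a 1-D gradient-magnitude profile for simplicity; the function name and thresholds are illustrative assumptions, not the paper's code.

```python
# Sketch of Canny's hysteresis stage: pixels with gradient magnitude >= high
# are strong edges; pixels between low and high are weak edges and are kept
# only if connected to a strong edge; everything else is suppressed.

def hysteresis_1d(magnitudes, low, high):
    """Return a 0/1 edge map for a 1-D gradient-magnitude profile."""
    strong = [m >= high for m in magnitudes]
    weak = [low <= m < high for m in magnitudes]
    keep = strong[:]
    changed = True
    while changed:  # propagate strength along chains of weak pixels
        changed = False
        for i, w in enumerate(weak):
            if w and not keep[i]:
                if (i > 0 and keep[i - 1]) or (i + 1 < len(keep) and keep[i + 1]):
                    keep[i] = True
                    changed = True
    return [1 if k else 0 for k in keep]

# A strong edge with weak neighbours, plus an isolated weak response (noise).
print(hysteresis_1d([10, 40, 90, 45, 10, 35, 5], low=30, high=80))
# [0, 1, 1, 1, 0, 0, 0] — the isolated weak pixel at index 5 is suppressed
```

This is why Canny rejects noise that a single fixed threshold would keep: the weak response at index 5 clears the low threshold but has no strong neighbour, so it is discarded.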

