A Distributed Vision-Based Navigation System for Khepera IV Mobile Robots

Author(s):  
Gonzalo Farias ◽  
Ernesto Fabregas ◽  
Enrique Torres ◽  
Gaëtan Bricas ◽  
Sebastián Dormido-Canto ◽  
...  

This work presents the development and implementation of a distributed navigation system based on computer vision. The autonomous system consists of a wheeled mobile robot with an integrated colour camera. The robot navigates through a laboratory scenario in which the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that processes them and calculates the corresponding speeds of the robot using a cascade of trained classifiers. These speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. The classifier cascade must be trained before experimentation with two sets of images: positive and negative examples. The number of images in these sets must be chosen carefully to limit the duration of the training stage and to avoid overtraining the system.
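The server-side loop described above (detect a signal, compute wheel speeds, send them back) can be sketched as follows. This is a minimal illustration only: the sign labels and speed values are hypothetical, not the paper's actual parameters.

```python
# Hypothetical mapping from a recognized traffic signal to wheel speeds
# (left, right) for a differential-drive robot such as the Khepera IV.
# Labels and values are illustrative, not taken from the paper.
SIGN_TO_SPEEDS = {
    "stop":       (0, 0),      # halt both wheels
    "turn_left":  (40, 80),    # right wheel faster -> curve left
    "turn_right": (80, 40),    # left wheel faster -> curve right
    "straight":   (80, 80),    # cruise
}

def speeds_for_detection(label, default=(80, 80)):
    """Return the (left, right) wheel speeds for a detected sign label;
    fall back to cruising straight when the label is unknown."""
    return SIGN_TO_SPEEDS.get(label, default)
```

In the distributed setup described in the abstract, the pair returned here would be transmitted back to the robot over the network rather than applied locally.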

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5409
Author(s):  
Gonzalo Farias ◽  
Ernesto Fabregas ◽  
Enrique Torres ◽  
Gaëtan Bricas ◽  
Sebastián Dormido-Canto ◽  
...  

This work presents the development and implementation of a distributed navigation system based on object recognition algorithms. The main goal is to introduce advanced algorithms for image processing and artificial intelligence techniques for teaching control of mobile robots. The autonomous system consists of a wheeled mobile robot with an integrated color camera. The robot navigates through a laboratory scenario in which the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that runs a computer vision algorithm to recognize the objects. The computer calculates the corresponding speeds of the robot according to the object detected. The speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. Three different algorithms have been tested both in simulation and in a practical mobile robot laboratory. The results show an average 84% success rate for object recognition in experiments with the real mobile robot platform.


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141667813 ◽  
Author(s):  
Clara Gomez ◽  
Alejandra Carolina Hernandez ◽  
Jonathan Crespo ◽  
Ramon Barber

The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events that imitates human navigation using sensorimotor abilities and sensorial events is presented. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The system proposed can be considered a perceptive navigation system, as the navigation process is based on the perception and recognition of natural and artificial landmarks, among others. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture eases the integration of new elements due to its modularity and the decoupling between modules. Finally, experiments have been carried out on several mobile robots, and their results show the feasibility of the proposed navigation system and the effectiveness of the sensorial data integration managed as events.
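One way to picture the integration interface that handles multiple concurrent events is as a publish/subscribe dispatcher that routes perception events to decoupled handler modules. The sketch below is an assumption about the architecture, not the authors' implementation; the class and event names are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Minimal event-integration interface: modules subscribe to event
    types and are notified when a perception event is published.
    Decoupling modules this way eases adding new elements later."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        """Register a callable to be invoked for events of this type."""
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        """Deliver the event to every subscriber; collect their results."""
        return [handler(payload) for handler in self._handlers[event_type]]

# Example: a navigation module reacting to landmark-detection events.
bus = EventBus()
bus.subscribe("landmark", lambda lm: f"replan towards {lm}")
```

New perception sources (e.g. an artificial-landmark detector) would simply publish to the bus without modifying existing modules, which mirrors the modularity claim in the abstract.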


Author(s):  
Lorenzo Fernández Rojo ◽  
Luis Paya ◽  
Francisco Amoros ◽  
Oscar Reinoso

Mobile robots have extended to many different environments, where they have to move autonomously to fulfill an assigned task. To this end, the robot must build a model of the environment and estimate its position using this model. These two problems are often faced simultaneously. This process is known as SLAM (simultaneous localization and mapping) and is very common, since a robot that begins moving in a previously unknown environment must generate a model from scratch while simultaneously estimating its position. This chapter is focused on the use of computer vision to solve this problem. The main objective is to develop and test an algorithm that solves the SLAM problem using two sources of information: (1) the global appearance of omnidirectional images captured by a camera mounted on the mobile robot and (2) the robot's internal odometry. A hybrid metric-topological approach is proposed to solve the SLAM problem.
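The global-appearance side of such an approach can be illustrated very roughly: each map node stores a holistic descriptor of the omnidirectional image seen there, and the robot localizes topologically by finding the node whose descriptor best matches the current image. This is a toy sketch under that assumption; the descriptors, node names, and distance metric are illustrative, not the chapter's actual method.

```python
import math

def appearance_distance(desc_a, desc_b):
    """Euclidean distance between two global-appearance descriptors
    (e.g. low-resolution intensity vectors of omnidirectional images)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)))

def localize(current_desc, map_nodes):
    """Return the map node whose stored descriptor best matches the
    current image: a crude stand-in for the topological part of a
    hybrid metric-topological SLAM system."""
    return min(map_nodes,
               key=lambda n: appearance_distance(current_desc, n["desc"]))

# Toy map with two nodes and 3-element appearance descriptors.
nodes = [
    {"id": "corridor", "desc": [0.1, 0.9, 0.4]},
    {"id": "lab",      "desc": [0.8, 0.2, 0.5]},
]
```

In a full system, the metric layer (odometry) would refine the pose estimate within the matched node, which is where the second information source from the abstract comes in.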


1999 ◽  
Vol 11 (1) ◽  
pp. 39-44 ◽  
Author(s):  
Motoji Yamamoto ◽  
Nobuhiro Ushimi ◽  
Akira Mohri

Sensor-based navigation using a target direction sensor for mobile robots among unknown obstacles in the workspace is discussed. The advantage of target direction information for online navigation is its robustness to measurement error, compared to robot location information. The convergence of navigation using target direction information is discussed. An actual sensor system that uses two CdS sensors to measure the target direction is proposed. Using target direction information, we present a new sensor-based navigation algorithm for environments with unknown obstacles. The navigation algorithm is based on target direction information, unlike sensor-based motion planning algorithms that rely on mobile robot location information. Using the sensor-based navigation system, we conducted a navigation experiment and simulations in environments with unknown obstacles.
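A two-CdS (photoresistor) target-direction sensor naturally suggests a differential steering law: when both cells see the target equally, drive straight; otherwise turn toward the brighter side. The function below is a minimal sketch of that idea; the gain and sign convention are assumptions for illustration, not the paper's actual controller.

```python
def steering_command(left_cds, right_cds, gain=1.0):
    """Turn-rate command from two CdS (photoresistor) readings facing
    the target: positive -> turn left, negative -> turn right, zero
    when the target is straight ahead. Gain and sign convention are
    illustrative only."""
    return gain * (left_cds - right_cds)
```

Because the command depends only on the *direction* to the target, not on an estimated robot location, small measurement errors perturb the heading slightly rather than accumulating, which is the robustness property the abstract highlights.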


2018 ◽  
Vol 6 (2) ◽  
pp. 47
Author(s):  
Muhammad Hafidz Fazli Md Fauadi ◽  
Suriati Akmal ◽  
Mahasan Mat Ali ◽  
Nurul Izah Anuar ◽  
Samad Ramlan ◽  
...  

1999 ◽  
Vol 11 (1) ◽  
pp. 45-53 ◽  
Author(s):  
Shinji Kotani ◽  
Ken’ichi Kaneko ◽  
Tatsuya Shinoda ◽  
Hideo Mori ◽  
...  

This paper describes a navigation system for an autonomous mobile robot outdoors. The robot uses vision to detect landmarks and DGPS information to determine its initial position and orientation. The vision system detects landmarks in the environment by referring to an environmental model. As the robot moves, it calculates its position by conventional dead reckoning and matches landmarks to the environmental model to reduce the error in the position calculation. The robot's initial position and orientation are calculated from the coordinates of the first and second locations acquired by DGPS. Subsequent orientations and positions are derived by map matching. We implemented the system on a mobile robot, Harunobu 6. Experiments in real environments verified the effectiveness of the proposed navigation system.
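The dead-reckoning step mentioned in several of the abstracts above follows the standard odometry equations for a differential-drive robot. The sketch below shows one such update from incremental wheel travel; it illustrates the position estimate that landmark matching later corrects, and is not specific to Harunobu 6.

```python
import math

def dead_reckon(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot.
    d_left/d_right are incremental wheel travel distances (same units
    as wheel_base); theta is the heading in radians. Standard odometry
    equations: drift accumulates, which is why vision-based landmark
    matching is used to correct the estimate."""
    d_center = (d_left + d_right) / 2.0          # distance of the midpoint
    d_theta = (d_right - d_left) / wheel_base    # change in heading
    # Integrate along the average heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Each update compounds encoder error, so the pose estimate degrades with distance travelled; matching detected landmarks against the environmental model bounds that drift.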

