A vision system for mobile robot navigation

Robotica, 1994, Vol 12 (1), pp. 77-89
Author(s): M. Elarbi Boudihir, M. Dufaut, R. Husson

A new vision system architecture has been developed to support the visual navigation of an autonomous mobile robot. This robot is primarily intended for urban park inspection, so it must be able to move in a complex, unstructured environment. The system consists of several modules, each carrying out a specific task involved in autonomous navigation. Task coordination is handled by a central module called the supervisor, which triggers each module at the time appropriate to the robot's current situation. Most of the processing time is spent in the scene exploration module, which uses the Hough transform to extract dominant straight-line features. This module operates in two modes: an initial phase, the processing applied to the first image acquired in order to initiate navigation, and a continuous following mode, which processes the subsequent images taken at the end of each blind distance. To rely less on visual data, a detailed map of the environment has been established, and an algorithm predicts the expected scene from the robot position provided by the localization system. The predicted scene is used to validate the detected objects in the knowledge base; this knowledge base uses the acquired and predicted data to construct a scene model, the main element of the vision system.
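The straight-feature extraction step can be sketched briefly. A minimal illustration using OpenCV's standard Hough transform as a stand-in for the paper's implementation (the thresholds and line count are illustrative assumptions):

```python
import cv2
import numpy as np

def dominant_lines(image_path, num_lines=5):
    """Extract the strongest straight-line features from a scene image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)                     # edge map feeds the accumulator
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)  # 1 px, 1 degree resolution
    if lines is None:
        return []
    # each entry is a (rho, theta) pair; the most-voted lines come first
    return [tuple(l[0]) for l in lines[:num_lines]]
```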

2017, Vol 2 (4), pp. 207-217
Author(s): Chaima BENSACI, Youcef ZENNIR, Denis POMORSKI

In this paper, we present our navigation control approach for a mobile robot (the TurtleBot 2), based on a Lyapunov stability function; the robot has two differentially driven wheels. The kinematic model of the robot is presented, followed by a description of the control approach. A 3D simulation in the Gazebo software is developed, interacting with the kinematic model and the control approach implemented in MATLAB/Simulink. The purpose of this study is to carry out autonomous navigation: we first planned different trajectories and then had the robot follow them. Our navigation strategy relies on odometry information, using the robot's position and orientation errors; velocity commands are sent so that the robot follows the chosen path. Different simulations were performed in 2D and 3D, and the results obtained are presented, followed by the envisaged future work.
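The abstract does not spell out the control law itself. A common Lyapunov-based design for this kind of differential-drive (unicycle) robot is the Kanayama tracking controller sketched below; the gains kx, ky, kth and the use of NumPy are illustrative assumptions, not details from the paper:

```python
import numpy as np

def unicycle_step(pose, v, w, dt):
    """Differential-drive (unicycle) kinematics; pose = (x, y, theta)."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Kanayama tracking law; the Lyapunov function
    V = (ex^2 + ey^2) / 2 + (1 - cos(eth)) / ky decreases along trajectories."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # tracking error expressed in the robot's own frame
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = thr - th
    v = v_ref * np.cos(eth) + kx * ex                  # linear velocity command
    w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))  # angular velocity command
    return v, w
```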


1991, Vol 3 (5), pp. 373-378
Author(s): Kiyoshi Komoriya, Kazuo Tani

External sensors that can capture environmental information are important for a mobile robot to recognize its surroundings and its location. Among external sensors, range sensors are fundamental because they directly detect the free space in which the mobile robot can move without colliding with surrounding objects. A laser range sensor provides good spatial resolution, and it is expected to detect characteristic parts of the environment that serve as landmarks for recognizing the robot's position. This paper presents the construction of a laser range sensor system that can be installed in a small mobile robot. The system consists of several components, including a laser diode, a CCD camera, and mark detection hardware. Based on the triangulation method, the system detects the distance to the object surface on which the beam spot is directed. To detect a landmark such as a wall edge, the sensor system is mounted on a rotary table. By scanning horizontally, the sensor can detect wall edges with a position accuracy of approximately 5 mm and an orientation accuracy of approximately 1 degree within a range of 3 m. The system has been installed in an indoor mobile robot and is used for autonomous navigation control along corridors.
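The triangulation principle behind the range measurement fits in a few lines. A minimal sketch; the baseline and focal-length values are illustrative assumptions, not the paper's calibration:

```python
def triangulation_range(pixel_offset, baseline_m=0.10, focal_px=800.0):
    """Laser triangulation: the imaged beam spot shifts as the surface
    gets closer; for baseline b and focal length f (in pixels),
    range = b * f / pixel_offset."""
    if pixel_offset <= 0:
        raise ValueError("beam spot not detected (or surface at infinity)")
    return baseline_m * focal_px / pixel_offset

# e.g. a 40-pixel spot displacement gives 0.10 * 800 / 40 = 2.0 m
print(triangulation_range(40))
```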


2014, Vol 26 (2), pp. 214-224
Author(s): Taro Suzuki, Mitsunori Kitamura, Yoshiharu Amano, Nobuaki Kubo, ...

This paper describes the development of a mobile robot system and an outdoor navigation method based on the global navigation satellite system (GNSS) for an autonomous mobile robot navigation challenge, the Tsukuba Challenge, held in Tsukuba, Japan, in 2011 and 2012. The Tsukuba Challenge promotes practical technologies for autonomous mobile robots working in ordinary pedestrian environments. Many teams taking part in the Tsukuba Challenge used laser scanners to determine robot positions; GNSS was not used for localization because of its multipath errors and availability problems. We propose a multipath mitigation technique that uses an omnidirectional infrared (IR) camera to exclude "invisible" satellites, i.e., those entirely obstructed by a building, whose direct waves therefore are not received. We applied GPS/dead-reckoning (DR) integration based on observation data from the visible satellites determined by the IR camera. Positioning was evaluated during the Tsukuba Challenge in 2011 and 2012. Our robot ran the 1.4 km course autonomously, and the evaluation results confirmed the effectiveness of the proposed technique and the feasibility of its highly accurate positioning.
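The satellite-exclusion step can be illustrated as a sky-mask test. A sketch under assumed conventions (an equidistant fisheye projection and a boolean sky mask segmented from the IR frame; all names and calibration values are hypothetical):

```python
import numpy as np

def sat_pixel(az_deg, el_deg, center=(320, 320), r_horizon=300.0):
    """Project a satellite's azimuth/elevation onto an upward-looking
    fisheye image (equidistant model: radius grows with zenith angle)."""
    r = r_horizon * (90.0 - el_deg) / 90.0       # zenith maps to the center
    az = np.radians(az_deg)
    return (int(center[0] + r * np.sin(az)),     # x grows toward east
            int(center[1] - r * np.cos(az)))     # y shrinks toward north

def visible_satellites(sats, sky_mask):
    """Keep only satellites whose projected pixel lies on open sky.
    sats: iterable of (prn, azimuth_deg, elevation_deg);
    sky_mask: boolean image (True = sky) from the IR camera."""
    keep = []
    for prn, az, el in sats:
        x, y = sat_pixel(az, el)
        if (0 <= y < sky_mask.shape[0] and 0 <= x < sky_mask.shape[1]
                and sky_mask[y, x]):
            keep.append(prn)
    return keep
```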


2021
Author(s): Jing Li, Jialin Yin, Lin Deng

In the development of modern agriculture, the intelligent use of mechanical equipment is one of the main hallmarks of agricultural modernization. Navigation technology is the key technology that allows agricultural machinery to operate autonomously in its working environment, and it is a hotspot in research on intelligent agricultural machinery. To meet the accuracy requirements of autonomous navigation for intelligent agricultural robots, this paper proposes a visual navigation algorithm based on deep-learning image understanding. The method first processes the images collected by the vision system using a cascaded deep convolutional network combined with hybrid dilated convolutions. It then extracts the navigation route from the processed images with an improved Hough transform algorithm, and the posture of the agricultural robot is adjusted accordingly to realize autonomous navigation. Finally, the proposed method is verified in both interference-free and noisy experimental scenes. Experimental results show that the method can navigate autonomously in complex and noisy environments and has good practicability and applicability.
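The last step, turning the extracted route line into a posture adjustment, can be sketched as follows. This assumes the common OpenCV (rho, theta) line convention with the origin at the top-left; the error conversion is an illustration, not the paper's algorithm:

```python
import numpy as np

def line_to_errors(rho, theta, img_w=640, img_h=480):
    """Convert a detected route line (rho, theta), with
    rho = x*cos(theta) + y*sin(theta), into two steering errors:
    the signed angle from vertical (straight ahead) and the lateral
    pixel offset from the image center at the bottom row."""
    if abs(np.cos(theta)) < 1e-6:
        raise ValueError("near-horizontal line cannot be a route direction")
    ang_err = theta if theta <= np.pi / 2 else theta - np.pi
    x_bottom = (rho - img_h * np.sin(theta)) / np.cos(theta)
    return ang_err, x_bottom - img_w / 2
```

A simple proportional command such as w = -k1 * ang_err - k2 * lateral_err would then close the steering loop.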


2014, Vol 1016, pp. 700-704
Author(s): Vladimir Popov

Investigation of symbolic representations of environments plays an important role in solving various problems of robot visual navigation. In this paper, we study methods of symbolic trajectory description for mobile robot navigation. For this purpose, we use the fresco approach. We consider the problem of selecting salient frescoes; in particular, we consider various modifications of the Levenshtein distance method, and we also use different circular-string methods.
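Since a fresco can be encoded as a string of scene symbols, the comparison machinery is compact. A minimal sketch of the two ingredients named above, the Levenshtein distance and a circular-string variant (the example symbols are invented):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between symbol strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cyclic_distance(a, b):
    """Circular-string comparison: minimum edit distance over all
    rotations of b, useful when a panoramic fresco has no fixed
    starting direction."""
    return min(levenshtein(a, b[k:] + b[:k]) for k in range(len(b)))

print(levenshtein("ABCAB", "ABAB"))       # 1: one symbol deleted
print(cyclic_distance("ABCDE", "CDEAB"))  # 0: pure rotation
```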


2013, Vol 3 (1), p. 4
Author(s): Muhammad Safwan, Muhammad Yasir Zaheen, M. Anwar Ahmed, Muhammad Shujaat Kamal, Raj Kumar

The Bio-Mimetic Vision System (BMVS) for autonomous mobile robot navigation encompasses three major fields, namely robotics, navigation, and obstacle avoidance. Bio-mimetic vision is based on stereo vision: the Summation of Absolute Differences (SAD) is applied to the images from the two cameras, and the resulting disparity map is used to navigate and avoid obstacles. Camera calibration and SAD are implemented in MATLAB. An AT89C52 microcontroller, together with MATLAB, is used to efficiently control the DC motors mounted on the robot frame. Experimental results show that the developed system effectively distinguishes objects at different distances and avoids them when the path is blocked.
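The SAD block-matching step works as follows: for each window in the left image, slide along the same row of the right image and keep the shift with the smallest absolute-difference sum. A naive, illustrative Python sketch (the window size and disparity range are assumptions; the paper's MATLAB version is not shown):

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, block=9):
    """Naive SAD block matching on rectified grayscale images: for each
    left-image window, slide along the same row of the right image and
    keep the shift with the smallest sum of absolute differences."""
    left = left.astype(np.float32)    # avoid uint8 wraparound in subtraction
    right = right.astype(np.float32)
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)  # larger disparity = nearer object
    return disp
```

The quadruple loop is only meant to make the matching criterion explicit; a real-time version would vectorize the search or use integral images.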


1999, Vol 17 (7), pp. 1009-1016
Author(s): Takushi Sogo, Katsumi Kimoto, Hiroshi Ishiguro, Toru Ishida
