2P1-K07 Outdoor Scene Recognition by 3D Range Sensor for Mobile Robots (Vision System for Mobile Robot)

2012 ◽  
Vol 2012 (0) ◽  
pp. _2P1-K07_1-_2P1-K07_4
Author(s):  
Koki YACHINAKA ◽  
Jun MIURA ◽  
Junji SATAKE


2005 ◽  
Vol 17 (2) ◽  
pp. 116-120 ◽  
Author(s):  
Hirohiko Kawata ◽  
Toshihiro Mori ◽  
Shin’ichi Yuta

We developed a 2-D laser range sensor suitable for mobile robot platforms of various sizes. The sensor is compact, lightweight, precise, and low-power, and offers the wide scan angle and high resolution essential for environment recognition in mobile robots. The distance between the sensor and an object is calculated by applying amplitude modulation to the emitted light and detecting the phase difference between the transmitted and received signals. In this paper we explain the sensor specifications, the principle of distance measurement, and experimental results.
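The phase-shift ranging principle described above can be written down in a few lines. The sketch below is only a minimal illustration under assumed values (modulation frequency and phases are made up for the example), not the sensor's actual signal processing.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def range_from_phase(phase_tx: float, phase_rx: float, f_mod: float) -> float:
        """Range from the phase difference between transmitted and received
        amplitude-modulated light (single-frequency phase-shift ranging).
        The light travels to the target and back, so the round-trip distance is
        (delta_phi / 2*pi) * (C / f_mod); the one-way range is half of that.
        Ranges beyond half the modulation wavelength alias (ambiguity interval).
        """
        delta_phi = (phase_rx - phase_tx) % (2.0 * np.pi)  # wrap to [0, 2*pi)
        return delta_phi * C / (4.0 * np.pi * f_mod)

    # Assumed example: 46.55 MHz modulation, 90-degree phase lag -> about 0.8 m
    print(range_from_phase(0.0, np.pi / 2, 46.55e6))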


SIMULATION ◽  
2019 ◽  
Vol 96 (2) ◽  
pp. 169-183
Author(s):  
Saumya R Sahoo ◽  
Shital S Chiddarwar

Omnidirectional robots offer better maneuverability and a greater degree of freedom than conventional wheeled mobile robots. However, the design of their control system remains a challenge. In this study, a real-time simulation system is used to design and develop a hardware-in-the-loop (HIL) simulation platform for an omnidirectional mobile robot using bond graphs and a flatness-based controller. The control input from the simulation model is transferred to the robot hardware through an Arduino microcontroller input board. For feedback to the simulation model, a Kinect-based vision system is used. The developed controller, the Kinect-based vision system, and the HIL configuration are validated in the HIL simulation-based environment. The results confirm that the proposed HIL system can be an efficient tool for verifying the performance of the hardware and simulation designs of flatness-based control systems for omnidirectional mobile robots.
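As a rough illustration of the flatness-based control idea only (not the paper's bond-graph model or HIL wiring), the sketch below tracks a reference trajectory with a purely kinematic omnidirectional robot; in the actual setup the integration step would be replaced by commands sent through the Arduino board and pose feedback from the Kinect-based vision system. Gains and the reference trajectory are assumptions.

    import numpy as np

    def flatness_controller(q, q_ref, qdot_ref, k=2.0):
        # For an omnidirectional robot the pose q = (x, y, theta) is a flat output,
        # so a simple tracking law is reference feedforward plus proportional feedback.
        return qdot_ref + k * (q_ref - q)

    dt, q = 0.01, np.zeros(3)
    for i in range(500):
        t = i * dt
        q_ref = np.array([np.cos(t), np.sin(t), t])      # assumed circular reference
        qdot_ref = np.array([-np.sin(t), np.cos(t), 1.0])
        u = flatness_controller(q, q_ref, qdot_ref)      # chassis velocity command
        q = q + dt * u                                   # kinematic stand-in for the hardware
    print("final tracking error:", np.linalg.norm(q_ref - q))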


1999 ◽  
Vol 11 (1) ◽  
pp. 45-53 ◽  
Author(s):  
Shinji Kotani ◽  
Ken’ichi Kaneko ◽  
Tatsuya Shinoda ◽  
Hideo Mori ◽  
...  

This paper describes a navigation system for an autonomous mobile robot operating outdoors. The robot uses vision to detect landmarks and DGPS information to determine its initial position and orientation. The vision system detects landmarks in the environment by referring to an environmental model. As the robot moves, it calculates its position by conventional dead reckoning and matches detected landmarks to the environmental model to reduce the error in the position estimate. The robot's initial position and orientation are calculated from the coordinates of the first and second locations acquired by DGPS. Subsequent orientations and positions are derived by map matching. We implemented the system on the mobile robot Harunobu 6. Experiments in real environments verified the effectiveness of the proposed navigation system.
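A minimal sketch of the initialisation and dead-reckoning steps described above, assuming planar east/north coordinates; function names and numbers are illustrative, not the authors' implementation.

    import math

    def initial_heading(p0, p1):
        """Heading from the first to the second DGPS fix: the bearing between two
        positions (radians counter-clockwise from east, local planar coordinates)."""
        return math.atan2(p1[1] - p0[1], p1[0] - p0[0])

    def dead_reckon(x, y, theta, ds, dtheta):
        """One odometry update: advance ds along the current heading, then turn."""
        return x + ds * math.cos(theta), y + ds * math.sin(theta), theta + dtheta

    theta0 = initial_heading((0.0, 0.0), (1.0, 1.0))   # 45 degrees for this example
    pose = (0.0, 0.0, theta0)
    pose = dead_reckon(*pose, ds=0.5, dtheta=0.0)
    print(pose)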


2020 ◽  
Vol 33 (02) ◽  
pp. 651-671
Author(s):  
Alexey I. Martyshkin

This study is devoted to the challenges of motion planning for mobile robots with smart machine vision systems. Motion planning in environments with obstacles is a central problem in creating robots suitable for operation in real-world conditions. Existing solutions are predominantly proprietary and highly specialized, which makes it difficult to judge how successfully they solve the problem of effective motion planning. Narrowly focused solutions have been under development for a long time, but no major breakthrough has been observed yet; only a steady improvement in the characteristics of such systems can be noted. The purpose of this study is to develop and investigate a motion planning algorithm for a mobile robot with a smart machine vision system. The study reviews domestic and foreign mobile robots that solve the motion planning problem in a known environment with unknown obstacles, and considers local, global, and individual navigation methods. In the course of the work, a mobile robot prototype was built that is capable of recognizing obstacles of regular geometric shapes and of planning and correcting its path. Environment objects are identified and classified as obstacles by digital image processing methods and algorithms. The distance to an obstacle and its relative angle are calculated by photogrammetry; image quality is improved by linear contrast enhancement and optimal linear filtering using the Wiener-Hopf equation. Virtual tools for testing mobile robot motion algorithms were reviewed, which led to the selection of the Webots software package for prototype testing. In testing, the mobile robot successfully identified obstacles, planned a path in accordance with the obstacle avoidance algorithm, and continued moving to its destination.
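As a hedged illustration of the photogrammetric range and angle computation mentioned above (the paper's exact camera model and calibration are not given), a pinhole-camera estimate from a detected obstacle's image size and horizontal offset might look like this; all parameter values are assumptions.

    import math

    def obstacle_range_and_bearing(bbox_height_px, bbox_center_x_px,
                                   real_height_m, focal_px, cx_px):
        """Pinhole-camera photogrammetry: range from the apparent size of an
        obstacle of known real height, bearing from its horizontal offset to the
        principal point. Camera parameters here are purely illustrative."""
        distance = real_height_m * focal_px / bbox_height_px
        bearing = math.atan2(bbox_center_x_px - cx_px, focal_px)
        return distance, bearing

    # Example: a 120-px-tall obstacle known to be 0.3 m tall, 700-px focal length
    print(obstacle_range_and_bearing(120, 400, 0.3, 700.0, 320.0))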


Author(s):  
Evangelos Georgiou ◽  
Jian S. Dai ◽  
Michael Luck

In small mobile robot research, autonomous platforms are severely constrained by the limited accuracy of the sensory data available for critical path planning, obstacle avoidance, and self-localization tasks. The motivation for this work is to equip small autonomous mobile robots with a local stereo vision system that provides an accurate reconstruction of the navigation environment for these critical navigation tasks. This paper presents the KCLBOT, a small autonomous mobile robot with a stereo vision system, developed in King’s College London’s Centre for Robotic Research.
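For context, the standard rectified-stereo depth relation Z = f*B/d underlies such a reconstruction; the sketch below uses assumed calibration values, not the KCLBOT's.

    def stereo_depth(disparity_px, focal_px, baseline_m):
        """Rectified-stereo relation: depth Z = focal length * baseline / disparity.
        Parameter values are illustrative assumptions, not the robot's calibration."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    print(stereo_depth(disparity_px=16.0, focal_px=640.0, baseline_m=0.06))  # 2.4 m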


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Bin Tan

With the continuous emergence and innovation of computer technology, mobile robots have become a prominent topic in artificial intelligence and an active research area for a growing number of scholars. The core requirement for a mobile robot is real-time perception of the surrounding environment and self-positioning, together with navigation based on this information; this is the key to autonomous movement and is of strategic research significance. For a soccer robot, the target recognition ability of the vision system is the basis of path planning, motion control, and collaborative task completion, and the main recognition task falls to the omnidirectional vision system. How to improve the target recognition accuracy and the light adaptability of the robot's omnidirectional vision system is therefore the key issue of this paper. The omnidirectional mobile robot platform was built and its programs debugged, and its omnidirectional motion, its positioning and map construction in corridor and indoor environments, its global navigation in the indoor environment, and its local obstacle avoidance were tested. Making fuller use of the robot's local visual information to obtain more usable information, so that the robot's "eyes" are greatly improved by image recognition technology and the robot can obtain more accurate environmental information by itself, has long been a shared goal of scholars at home and abroad. The experiments show that the standard error between the experimental group's shooting and dribbling test scores before and after training is 0.004, below the 0.05 level, supporting the use of soccer robot-assisted training. The experiments also test the positioning and navigation functions of the omnidirectional mobile robot and verify the feasibility of the positioning and navigation algorithms and the multisensor fusion algorithm.
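As an assumed fragment of the kind of target recognition such a vision system performs, a colour-threshold ball detector in HSV space using OpenCV could look as follows; the paper's actual recognition and light-adaptation pipeline is not specified, and the colour ranges and synthetic image are illustrative only.

    import cv2
    import numpy as np

    # Synthetic test frame standing in for an (unwrapped) omnidirectional image
    img = np.zeros((240, 320, 3), np.uint8)
    cv2.circle(img, (200, 120), 25, (0, 100, 255), -1)     # orange "ball" (BGR)

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))  # assumed orange range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        (x, y), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        print(f"ball at ({x:.0f}, {y:.0f}), radius {r:.1f} px")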


2010 ◽  
Vol 7 ◽  
pp. 109-117
Author(s):  
O.V. Darintsev ◽  
A.B. Migranov ◽  
B.S. Yudintsev

The article deals with the development of a high-speed sensor system for a mobile robot, used in conjunction with an intelligent trajectory planning method in a highly dynamic workspace.

