Determining the Position of a Mobile Robot Using an Omnidirectional Vision System

2001 ◽  
Vol 34 (9) ◽  
pp. 339-344
Author(s):  
Jun-ichi Takiguchi ◽  
Akito Takeya ◽  
Ken'ichi Nishiguchi ◽  
Hiroshi Yano ◽  
Makoto Iyoda ◽  
...  
2021 ◽  
Vol 11 (8) ◽  
pp. 3360
Author(s):  
Huei-Yung Lin ◽  
Chien-Hsing He

This paper presents a novel self-localization technique for mobile robots based on image feature matching from omnidirectional vision. The proposed method first constructs a virtual space with synthetic omnidirectional imaging to simulate a mobile robot equipped with an omnidirectional vision system in the real world. In the virtual space, a number of vertical and horizontal lines are generated according to the structure of the environment. They are imaged by the virtual omnidirectional camera using the catadioptric projection model. The omnidirectional images derived from the virtual and real environments are then used to match the synthetic lines and real scene edges. Finally, the pose and trajectory of the mobile robot in the real world are estimated by the efficient perspective-n-point (EPnP) algorithm based on the line feature matching. In our experiments, the effectiveness of the proposed self-localization technique was validated by the navigation of a mobile robot in a real-world environment.
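The final pose estimation step maps naturally onto OpenCV's EPnP solver. The sketch below is a minimal illustration, not the authors' implementation: it assumes 3D points sampled along the matched environment lines and their 2D projections in a rectified view of the omnidirectional image, with placeholder intrinsics `K` and point coordinates.

```python
# Hedged sketch: pose from 3D-2D line-point correspondences via EPnP.
# All numeric values below are illustrative placeholders, not the paper's data.
import numpy as np
import cv2

# 3D points (meters) sampled along known vertical/horizontal environment lines.
object_points = np.array([
    [0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
    [2.0, 0.0, 0.0], [2.0, 0.0, 1.0],
    [0.0, 3.0, 0.0], [0.0, 3.0, 1.0],
], dtype=np.float64)

# Corresponding 2D edge detections (pixels) in the rectified image.
image_points = np.array([
    [320.0, 240.0], [318.0, 150.0],
    [450.0, 242.0], [449.0, 160.0],
    [200.0, 238.0], [203.0, 148.0],
], dtype=np.float64)

# Pinhole intrinsics of the rectified view (placeholder calibration).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # distortion assumed removed during rectification

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)    # rotation matrix from the Rodrigues vector
    robot_position = -R.T @ tvec  # camera (robot) position in the world frame
    print("estimated position:", robot_position.ravel())
```

Repeating this estimate frame by frame along a run yields the trajectory the abstract refers to.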


Author(s):  
Yoichiro Maeda ◽  
Wataru Shimizuhira

We propose a multiple omnidirectional vision system (MOVIS) that uses three omnidirectional cameras, together with a calculation method for measuring object positions and performing self-localization in an autonomous mobile robot. For self-localization, we improved measurement accuracy by correcting the absolute location based on the landmark measurement error at the origin of the absolute coordinate system. We also propose omnidirectional behavior control for collision avoidance and object chasing using fuzzy reasoning in an autonomous mobile robot equipped with MOVIS, and we report experimental results confirming the effectiveness of our proposal using a RoboCup soccer robot in a dynamic environment.
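As one concrete illustration of position measurement with multiple omnidirectional cameras, the sketch below triangulates an object on the floor plane from the bearing angles reported by two cameras with a known baseline. The camera layout and angles are assumptions for illustration, not the MOVIS calibration.

```python
# Hedged sketch: 2D triangulation from two omnidirectional bearing measurements.
import math

def triangulate(cam1, theta1, cam2, theta2):
    """Intersect two 2D bearing rays; cam* are (x, y) in meters, theta* in radians."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 via the 2x2 cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are (nearly) parallel")
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Two omnidirectional cameras 0.3 m apart, each reporting a bearing to the ball.
print(triangulate((0.0, 0.0), math.radians(40), (0.3, 0.0), math.radians(120)))
```

With three cameras, as in MOVIS, each pair yields such an intersection, and averaging the estimates reduces the measurement error.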


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Bin Tan

With the continuous emergence and innovation of computer technology, mobile robots have become a popular topic in artificial intelligence and an important research area for a growing number of scholars. The core requirement for a mobile robot is real-time perception of the surrounding environment and self-positioning, with navigation based on this information; this capability is the key to autonomous movement and has strategic research significance. In particular, the target recognition ability of a soccer robot's vision system is the basis of path planning, motion control, and collaborative task completion, and the main recognition task in the vision system falls to the omnidirectional vision system. How to improve the target recognition accuracy and light adaptability of the robot's omnidirectional vision system is therefore the key issue of this paper. We completed the system construction and program debugging of an omnidirectional mobile robot platform and tested its omnidirectional movement, its positioning and map construction capabilities in corridor and indoor environments, its global navigation function in an indoor environment, and its local obstacle avoidance function. Making fuller use of the robot's local visual information to obtain more usable information, so that the robot's "eyes" are greatly improved by image recognition technology and the robot can acquire more accurate environmental information on its own, has long been a common goal of scholars at home and abroad. The research shows that the standard error level between the experimental group's shooting and dribbling test scores before and after training is 0.004, which is less than 0.05, supporting the use of robot-assisted soccer training. On the one hand, we tested the positioning and navigation functions of the omnidirectional mobile robot; on the other hand, we verified the feasibility of the positioning and navigation algorithms and the multisensor fusion algorithm.
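As a hedged illustration of the kind of light-adaptive target recognition the abstract describes, the sketch below segments a ball color in HSV space while scaling the brightness (V) threshold to the measured frame brightness, so the detector tracks lighting changes. The hue band and scaling factors are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: HSV color segmentation with a brightness-adaptive threshold.
import numpy as np
import cv2

def detect_ball(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Adapt the value (V) lower bound to the current frame brightness.
    mean_v = hsv[:, :, 2].mean()
    v_lo = max(30, int(0.4 * mean_v))
    lower = np.array([5, 100, v_lo], dtype=np.uint8)  # orange-ish hue band (assumed)
    upper = np.array([20, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle noise before extracting the candidate region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return (x, y, radius)  # image position and apparent size of the ball
```

In an omnidirectional image, the returned image position maps to a bearing angle around the mirror axis, which is the quantity the localization and navigation modules consume.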


Author(s):  
Gamma Aditya Rahardi ◽  
Khairul Anam ◽  
Ali Rizal Chaidir ◽  
Devita Ayu Larasati
