Robust mobile robot velocity estimation using redundant number of optical mice

Author(s):  
Sungbok Kim ◽  
Sanghyup Lee

2014 ◽ 
Vol 538 ◽  
pp. 375-378 ◽  
Author(s):  
Xi Yuan Chen ◽  
Jing Peng Gao ◽  
Yuan Xu ◽  
Qing Hua Li

This paper proposes a new algorithm for optical flow-based monocular vision (MV)/inertial navigation system (INS) integrated navigation. In this method, a downward-looking camera captures image sequences, from which the velocity of the mobile robot is estimated using an optical flow algorithm, while the INS provides the yaw variation. To evaluate the performance of the proposed method, a real indoor test was conducted. The results show that the proposed method performs well for velocity estimation, and it can be applied to the autonomous navigation of mobile robots when the Global Positioning System (GPS) and code wheel are unavailable.
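
As a rough illustration of the idea, the sketch below converts the mean dense optical flow of a downward-looking camera into a body-frame velocity and rotates it into the navigation frame with the INS yaw. The pinhole model, the calibration constants (F_PX, Z_M, FPS), and the use of OpenCV's Farneback flow are assumptions made for the example, not details taken from the paper.

# Sketch: planar robot velocity from downward-looking optical flow plus INS yaw.
# Assumed: pinhole camera at known height Z_M, focal length F_PX (pixels), frame rate FPS.
import cv2
import numpy as np

F_PX = 600.0    # focal length in pixels (assumed calibration value)
Z_M = 0.20      # camera height above the ground in meters (assumed)
FPS = 30.0      # camera frame rate (assumed)

def body_velocity(prev_gray, gray):
    """Mean dense optical flow (pixels/frame) converted to body-frame velocity (m/s)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    du, dv = np.mean(flow[..., 0]), np.mean(flow[..., 1])   # pixels per frame
    scale = (Z_M / F_PX) * FPS                               # pixels/frame -> m/s
    return du * scale, dv * scale                            # axis signs depend on camera mounting

def navigation_velocity(vx_b, vy_b, yaw_rad):
    """Rotate the body-frame velocity into the navigation frame using the INS yaw."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return c * vx_b - s * vy_b, s * vx_b + c * vy_b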


2014 ◽  
Vol 592-594 ◽  
pp. 2215-2219
Author(s):  
D. Elayaraja ◽  
R. Ramesh ◽  
S. Ramabalan

It is proposed to determine the velocity of an embedded mobile robot in a real-world test environment. The test environments considered in this work are man-made road surfaces such as cement, sand, bituminous tar, grass, and loose gravel. First, fuzzy-logic-based velocity estimation of the mobile robot is carried out in Matlab for the different surfaces. Then, real-time tests on the different surfaces were performed, and the simulated values were compared with the test values. The comparison showed that the simulated values were close to the real-time test values.
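
As a hedged sketch of how such a fuzzy estimator could look, the example below maps a single surface-roughness input to a velocity estimate with triangular membership functions, a three-rule Mamdani-style rule base, and centroid defuzzification. The membership functions, rules, and the roughness input itself are illustrative assumptions, not the paper's actual fuzzy system.

# Sketch of a Mamdani-style fuzzy estimator: surface roughness in -> velocity out.
# Membership functions and rules are illustrative assumptions only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def estimate_velocity(roughness):
    """roughness in [0, 1]: 0 ~ smooth cement, 1 ~ loose gravel (assumed scale)."""
    # Fuzzify the input.
    smooth = tri(roughness, -0.5, 0.0, 0.5)
    medium = tri(roughness, 0.0, 0.5, 1.0)
    rough  = tri(roughness, 0.5, 1.0, 1.5)

    # Output universe: velocity in m/s, three fuzzy sets.
    v = np.linspace(0.0, 1.0, 201)
    slow, mid, fast = tri(v, 0.0, 0.2, 0.4), tri(v, 0.3, 0.5, 0.7), tri(v, 0.6, 0.8, 1.0)

    # Rules: smooth -> fast, medium -> mid, rough -> slow (clip, then aggregate).
    agg = np.maximum.reduce([np.minimum(smooth, fast),
                             np.minimum(medium, mid),
                             np.minimum(rough, slow)])
    # Centroid defuzzification.
    return float(np.sum(v * agg) / (np.sum(agg) + 1e-9))

print(estimate_velocity(0.2))   # smooth, cement-like surface -> higher estimated velocity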


2008 ◽  
Vol 05 (04) ◽  
pp. 321-330 ◽  
Author(s):  
Sungbok Kim ◽  
Sanghyup Lee

This paper presents the robust velocity estimation of a mobile robot using a polygonal array of optical mice that are installed at the bottom of the mobile robot. First, the velocity kinematics from a mobile robot to an array of optical mice is derived, from which the least squares estimation of a mobile robot velocity is obtained. Second, the least squares mobile robot velocity estimation is shown to be robust against measurement noises and partial malfunctions of optical mice. Third, in the presence of installation error, a practical method for optical mouse position calibration is devised. Finally, some experimental results are given to demonstrate the validity and performance of the proposed mobile robot velocity estimation.
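
A minimal sketch of the least-squares step described above: each mouse at body-frame position (xi, yi) measures the local planar velocity (vx - ω·yi, vy + ω·xi), so stacking all readings yields an overdetermined linear system in (vx, vy, ω). The triangular mouse layout and noise-free readings in the example are assumptions.

# Sketch: least-squares robot velocity (vx, vy, omega) from N optical mice.
# Stacking all mouse readings gives A @ [vx, vy, omega] = b; least squares averages
# out measurement noise, and rows from a malfunctioning mouse can simply be dropped.
import numpy as np

def estimate_velocity(positions, readings):
    """positions: (N, 2) mouse coordinates; readings: (N, 2) measured (vx, vy) per mouse."""
    rows, rhs = [], []
    for (xi, yi), (mx, my) in zip(positions, readings):
        rows.append([1.0, 0.0, -yi]); rhs.append(mx)
        rows.append([0.0, 1.0,  xi]); rhs.append(my)
    A, b = np.asarray(rows), np.asarray(rhs)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # [vx, vy, omega]

# Example: three mice on a triangle of radius 0.1 m (assumed layout), exact readings.
angles = np.deg2rad([90, 210, 330])
pos = 0.1 * np.column_stack([np.cos(angles), np.sin(angles)])
true = np.array([0.3, 0.1, 0.5])                      # vx (m/s), vy (m/s), omega (rad/s)
meas = np.column_stack([true[0] - true[2] * pos[:, 1],
                        true[1] + true[2] * pos[:, 0]])
print(estimate_velocity(pos, meas))                   # ~[0.3, 0.1, 0.5]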


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1363
Author(s):  
Hailuo Song ◽  
Ao Li ◽  
Tong Wang ◽  
Minghui Wang

It is an essential capability of indoor mobile robots to avoid various kinds of obstacles. Recently, multimodal deep reinforcement learning (DRL) methods have demonstrated great capability for learning control policies in robotics by using different sensors. However, due to the complexity of indoor environments and the heterogeneity of different sensor modalities, it remains an open challenge to obtain reliable and robust multimodal information for obstacle avoidance. In this work, we propose a novel multimodal DRL method with an auxiliary task (MDRLAT) for obstacle avoidance of an indoor mobile robot. In MDRLAT, a powerful bilinear fusion module is proposed to fully capture the complementary information from two-dimensional (2D) laser range findings and depth images, and the generated multimodal representation is subsequently fed into a dueling double deep Q-network to output control commands for the mobile robot. In addition, an auxiliary task of velocity estimation is introduced to further improve representation learning in DRL. Experimental results show that MDRLAT achieves remarkable performance in terms of average accumulated reward, convergence speed, and success rate. Moreover, experiments in both virtual and real-world testing environments further demonstrate the outstanding generalization capability of our method.
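
A minimal PyTorch sketch of the architectural idea only: laser and depth features fused by a bilinear layer, a dueling Q-head, and an auxiliary velocity-regression head. The encoders, layer sizes, input shapes, and the use of nn.Bilinear are assumptions, not MDRLAT's exact network.

# Sketch: bilinear fusion of 2D laser and depth-image features, dueling Q-head,
# and an auxiliary velocity head. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FusionDuelingNet(nn.Module):
    def __init__(self, laser_dim=360, n_actions=5, feat=128):
        super().__init__()
        self.laser_enc = nn.Sequential(nn.Linear(laser_dim, feat), nn.ReLU())
        self.depth_enc = nn.Sequential(                      # assumes 1x64x80 depth input
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat), nn.ReLU())
        self.fusion = nn.Bilinear(feat, feat, feat)           # bilinear multimodal fusion
        self.value = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(), nn.Linear(64, 1))
        self.adv   = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(), nn.Linear(64, n_actions))
        self.vel   = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(), nn.Linear(64, 2))  # auxiliary (v, w)

    def forward(self, laser, depth):
        z = torch.relu(self.fusion(self.laser_enc(laser), self.depth_enc(depth)))
        val, adv = self.value(z), self.adv(z)
        q = val + adv - adv.mean(dim=1, keepdim=True)         # dueling aggregation
        return q, self.vel(z)                                  # Q-values and velocity estimate

# Training would combine a double-DQN TD loss on q with an MSE loss on the auxiliary
# velocity output against odometry, e.g. loss = td_loss + lambda_vel * mse_vel.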

