Experimental evaluation of new navigator of mobile robot using fuzzy Q-learning

Author(s):  
Fadhila Lachekhab ◽  
Mohamed Tadjine ◽  
Mohamed Kesraoui
2012 ◽  
Vol 51 (9) ◽  
pp. 40-46 ◽  
Author(s):  
Pradipta K. Das ◽  
S. C. Mandhata ◽  
H. S. Behera ◽  
S. N. Patro

2016 ◽  
Vol 16 (4) ◽  
pp. 113-125
Author(s):  
Jianxian Cai ◽  
Xiaogang Ruan ◽  
Pengxuan Li

Abstract An autonomous path-planning strategy based on the Skinner operant-conditioning principle and reinforcement learning is developed in this paper. The core of the strategy is the use of a tendency cell and a cognitive learning cell, which simulate bionic orientation and asymptotic learning ability. The cognitive learning cell is designed on the basis of a Boltzmann machine and an improved Q-learning algorithm; it performs operant action learning to approximate the operative part of the robot system. The tendency cell adjusts network weights, using information entropy to evaluate the effect of each operant action. Simulation experiments on a mobile robot showed that the proposed strategy enables the robot to perform autonomous navigation and path planning: the robot learns to select actions autonomously according to the bionic orientation mechanism, with a fast convergence rate and high adaptability.
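The abstract above pairs Q-learning with Boltzmann-machine-style action selection. As a rough illustration of that combination (not the paper's actual tendency/cognitive-cell architecture), the sketch below runs tabular Q-learning with a Boltzmann (softmax) exploration policy on a made-up one-dimensional corridor world; the states, rewards, and parameter values are assumptions chosen only to make the example self-contained.

```python
import math
import random

random.seed(0)  # reproducible run for this illustration

# Hypothetical 1-D corridor world: states 0..4, actions -1/+1, goal at state 4.
GOAL, N_STATES = 4, 5
ACTIONS = (-1, +1)

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else -0.1  # small step cost, goal bonus
    return nxt, reward, nxt == GOAL

def boltzmann_action(q, state, temperature):
    # Softmax (Boltzmann) exploration: higher-valued actions are chosen
    # more often, with randomness controlled by the temperature.
    prefs = [math.exp(q[(state, a)] / temperature) for a in ACTIONS]
    total = sum(prefs)
    r, acc = random.random() * total, 0.0
    for a, p in zip(ACTIONS, prefs):
        acc += p
        if r <= acc:
            return a
    return ACTIONS[-1]

def train(episodes=500, alpha=0.5, gamma=0.9, temperature=1.0):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            a = boltzmann_action(q, state, temperature)
            nxt, reward, done = step(state, a)
            # Standard Q-learning update toward the greedy backup value.
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
            state = nxt
        temperature = max(0.1, temperature * 0.99)  # cool exploration over time
    return q

q = train()
# Greedy policy from every non-goal state; with this reward structure it
# is expected to point toward the goal (+1).
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Cooling the temperature plays the same role as decaying an epsilon-greedy rate: early episodes explore broadly, later episodes exploit the learned values.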


Robotics ◽  
2010 ◽  
Author(s):  
H. Wicaksono ◽  
K. Anam ◽  
P. Hastono ◽  
I.A. Sulistijono ◽  
S. Kuswadi

2018 ◽  
Vol 11 (1) ◽  
pp. 146-157 ◽  
Author(s):  
Akash Dutt Dubey ◽  
Ravi Bhushan Mishra

In this article, we apply cognition to a mobile robot using a Q-learning-based Situation-Operator Model (SOM). The SOM takes the initial situation of the mobile robot and applies a set of operators to move the robot to its destination. The initial situation is defined by a set of characteristics inferred from the sensor inputs. The SOM comprises a planning and learning module, which uses certain heuristics for learning through the mobile robot, and a knowledge base, which stores the experiences of the mobile robot. Control and learning of the robot are performed using Q-learning. A camera sensor and an ultrasonic sensor serve as the sensory inputs for the mobile robot; these inputs define the initial situation, which the learning module then uses to apply a valid operator. The results obtained by the proposed method were compared with those obtained by a reinforcement-based artificial neural network for path planning.
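The SOM description above maps naturally onto a tabular Q-learning loop in which discretized sensor readings form the situation and the operators form the action set. The sketch below is a hypothetical rendering of that idea, not the authors' implementation: the situation encoding, operator names, thresholds, and preconditions are all invented for illustration.

```python
import random

# Hypothetical discretization of the two sensor inputs mentioned in the
# article: an ultrasonic range reading and a camera-based landmark cue.
# A "situation" is the tuple of discretized characteristics; "operators"
# are the moves the planner may apply to it.
OPERATORS = ("forward", "turn_left", "turn_right")

def situation(ultrasonic_cm, landmark_side):
    near = "blocked" if ultrasonic_cm < 30 else "clear"  # assumed 30 cm threshold
    return (near, landmark_side)  # e.g. ("clear", "left")

def valid_operators(sit):
    # An operator is valid only if its precondition holds in the current
    # situation (here: never drive forward into an obstacle).
    return [op for op in OPERATORS if not (op == "forward" and sit[0] == "blocked")]

class SOMLearner:
    """Q-learning over (situation, operator) pairs; the learned Q-table
    plays the role of the knowledge base of stored experiences."""

    def __init__(self, alpha=0.3, gamma=0.9, epsilon=0.1):
        self.q = {}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, sit):
        # Epsilon-greedy selection restricted to the valid operators.
        ops = valid_operators(sit)
        if random.random() < self.epsilon:
            return random.choice(ops)
        return max(ops, key=lambda op: self.q.get((sit, op), 0.0))

    def update(self, sit, op, reward, next_sit):
        # Q-learning backup using only operators valid in the next situation.
        best_next = max((self.q.get((next_sit, o), 0.0)
                         for o in valid_operators(next_sit)), default=0.0)
        old = self.q.get((sit, op), 0.0)
        self.q[(sit, op)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

Restricting both selection and the backup to `valid_operators` is one simple way to express the SOM idea that only operators whose preconditions hold in the current situation may be applied.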

