Using Bayesian Network for Robot Behavior with Sensor Fault

1970 ◽  
Vol 108 (2) ◽  
pp. 91-96
Author(s):  
A. Rezaee ◽  
A. Raie ◽  
A. Nadi ◽  
S. Shiry

The paper discusses the application of a Bayesian network to learning the behavior of a mobile robot in the presence of sensor faults. Both theoretical and practical aspects are considered when evaluating the results. The robot's model is treated as a Bayesian model in which each value of the conditional probability distribution (CPD) is learned. The framework is shown to work in a real environment with noisy sensors. Ill. 12, bibl. 7, tabl. 2 (in English; abstracts in English and Lithuanian). http://dx.doi.org/10.5755/j01.eee.108.2.152
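The CPD-based update behind such a framework can be illustrated with a minimal sketch in pure Python. The state names, sensor model, and fault rate below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of Bayesian state estimation with a possibly faulty sensor.
# All numbers and names are illustrative assumptions.

def posterior(prior, likelihood, reading):
    """P(state | reading) via Bayes' rule over a discrete CPD."""
    unnorm = {s: prior[s] * likelihood[s][reading] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# States: is the robot near an obstacle or in free space?
prior = {"obstacle": 0.5, "free": 0.5}

# CPD P(reading | state): a faulty sensor sometimes reports "clear"
# even when an obstacle is present (10% fault rate assumed here).
likelihood = {
    "obstacle": {"hit": 0.9, "clear": 0.1},
    "free":     {"hit": 0.2, "clear": 0.8},
}

belief = posterior(prior, likelihood, "hit")
print(belief["obstacle"])  # ≈ 0.818
```

In a learned network, the likelihood table would be estimated from logged sensor data rather than fixed by hand.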

2011 ◽  
Vol 25 (16) ◽  
pp. 2039-2064 ◽  
Author(s):  
Alireza Rezaee ◽  
Abolghasem A. Raie ◽  
Abolfazl Nadi ◽  
Saeed Shiry Ghidary

This is the third chapter of the first section. It is a compendium of the concepts and theorems of probability theory that arise in the problems of Bayesian estimation of a robot's location and of the map of its environment. It presents uncertainty as an intrinsic feature of any mobile robot operating in a real environment. It then discusses how uncertainty has been treated throughout the history of science and how probabilistic approaches have proved so successful in many engineering fields, including robotics. The fundamental concepts of probability theory are covered along with some advanced topics needed in later chapters, following a learning curve that is as smooth and comprehensive as possible.
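The role these probability concepts play in localization can be sketched with a one-dimensional discrete Bayes filter. The toy map, motion model, and sensor accuracy below are illustrative assumptions, not the chapter's examples:

```python
# One-dimensional discrete Bayes filter: a toy version of the
# probabilistic localization problem the chapter prepares for.
# Map, motion noise, and sensor accuracy are assumed for illustration.

MAP = ["door", "wall", "door", "wall", "wall"]  # what the sensor should see

def normalize(b):
    z = sum(b)
    return [p / z for p in b]

def predict(belief, move=1, p_correct=0.8):
    """Motion update: shift belief by `move` cells, with some slippage."""
    n = len(belief)
    out = [0.0] * n
    for i, p in enumerate(belief):
        out[(i + move) % n] += p * p_correct   # moved as commanded
        out[i] += p * (1 - p_correct)          # wheels slipped, stayed put
    return out

def update(belief, measurement, p_hit=0.9):
    """Measurement update: upweight cells that match the reading."""
    weighted = [p * (p_hit if MAP[i] == measurement else 1 - p_hit)
                for i, p in enumerate(belief)]
    return normalize(weighted)

belief = [1 / len(MAP)] * len(MAP)   # uniform prior: position unknown
belief = update(belief, "door")      # sensor sees a door
belief = predict(belief)             # robot moves one cell right
belief = update(belief, "wall")      # sensor now sees a wall
```

After the second measurement the belief concentrates on the cells consistent with the door-then-wall sequence, which is exactly the recursive application of Bayes' rule and marginalization that the chapter formalizes.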


Author(s):  
Hikaru Sasaki ◽  
Tadashi Horiuchi ◽  
Satoru Kato ◽  
...

Deep Q-network (DQN) is one of the best-known methods of deep reinforcement learning. DQN approximates the action-value function using a Convolutional Neural Network (CNN) and updates it using Q-learning. In this study, we applied DQN to robot behavior learning in a simulation environment. We constructed the simulation environment for a two-wheeled mobile robot using the robot simulation software Webots. The mobile robot acquired good behavior, such as avoiding walls and moving along a center line, by learning from high-dimensional visual information supplied as input data. We propose a method that reuses the best target network obtained so far whenever learning performance suddenly drops. Moreover, we incorporate the Profit Sharing method into DQN in order to accelerate learning. Through the simulation experiment, we confirmed that our method is effective.
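The two mechanisms mentioned above, a periodically synchronized target network and reuse of the best target so far, can be sketched with a tabular Q-function standing in for the CNN. The tiny corridor environment, rewards, and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import random

# Tabular stand-in for a DQN-style learner: Q[state][action], plus a
# frozen target copy and "best target so far" reuse on performance drops.
N_STATES, ACTIONS = 5, [0, 1]            # actions: 0 = left, 1 = right
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.2

def step(s, a):
    """Corridor environment: reaching the last cell ends the episode."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else -0.01), done

Q = [[0.0, 0.0] for _ in range(N_STATES)]
target = [row[:] for row in Q]           # frozen target copy of Q
best_target, best_return = [row[:] for row in Q], float("-inf")

random.seed(0)
for episode in range(200):
    s, ret, done = 0, 0.0, False
    while not done:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)   # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])  # exploit
        s2, r, done = step(s, a)
        ret += r
        # Q-learning update, bootstrapping from the frozen target copy.
        td_target = r + (0.0 if done else GAMMA * max(target[s2]))
        Q[s][a] += ALPHA * (td_target - Q[s][a])
        s = s2
    if ret >= best_return:               # remember the best target so far
        best_return, best_target = ret, [row[:] for row in target]
    if episode % 10 == 0:                # periodic target sync, as in DQN
        target = [row[:] for row in Q]
    elif ret < best_return:              # performance fell: reuse best target
        target = [row[:] for row in best_target]
```

In the actual method the Q-table is a CNN trained on visual input, but the target-network bookkeeping follows the same pattern.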


2010 ◽  
Vol 22 (3) ◽  
pp. 301-307 ◽  
Author(s):  
Takafumi Matsumaru ◽  
Yasutada Horiuchi ◽  
Kosuke Akai ◽  
Yuichi Ito

To expand the use of the mobile robot Step-On Interface (SOI), originally targeting maintenance, training, and recovery of human physical and cognitive functions, we introduce a “Truly-Tender-Tailed” (T3, pronounced tee-cube) tag-playing robot as a “Friendly Amusing Mobile” (FAM) function. Displaying a previously prepared bitmap (BMP) image, and speeding up its display, makes it easy to design button placement and other screen parameters using a painting software package. The BMP-image scope matrix simplifies step detection and recognition, and the motion trajectory design editor facilitates robot behavior design.


2012 ◽  
Vol 16 (4) ◽  
pp. 514-518 ◽  
Author(s):  
Tae Hyon Kim ◽  
Kiyohiro Goto ◽  
Hiroki Igarashi ◽  
Kazuyuki Kon ◽  
Noritaka Sato ◽  
...  
