A pulse neural network reinforcement learning algorithm for partially observable Markov decision processes

2005 ◽ Vol 36 (3) ◽ pp. 42-52
Author(s): Koichiro Takita ◽ Masafumi Hagiwara

2021 ◽ Vol 3 (3) ◽ pp. 554-581
Author(s): Xuanchen Xiang ◽ Simon Foo

The first part of this two-part series surveys recent advances in Deep Reinforcement Learning (DRL) for solving partially observable Markov decision process (POMDP) problems. Reinforcement Learning (RL) emulates the natural human learning process: the agent learns by interacting with a stochastic environment. Because the agent requires only limited access to information about the environment, RL can be applied efficiently in many fields that require self-learning. Although efficient algorithms are in wide use, an organized survey remains essential: it enables sound comparisons and helps practitioners choose the best structures or algorithms when applying DRL to a given application. This overview introduces Markov Decision Process (MDP) problems and Reinforcement Learning, then reviews DRL applications for solving POMDP problems in games, robotics, and natural language processing. A follow-up paper will cover applications in transportation, communications and networking, and industry.
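To make the agent-environment interaction loop described above concrete, here is a minimal sketch of tabular Q-learning on a toy stochastic chain environment. The environment, its slip probability, and the hyperparameters (alpha, gamma, epsilon) are illustrative assumptions, not taken from the surveyed papers.

```python
import random

# Toy stochastic chain: states 0..4, actions 0 (left) and 1 (right).
# All dynamics and hyperparameters below are illustrative assumptions.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    """One environment transition; the 20% slip makes it stochastic."""
    move = 1 if action == 1 else -1
    if random.random() < 0.2:          # occasional slip in the other direction
        move = -move
    next_state = min(max(state + move, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: the agent learns purely from interaction.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("Greedy policy:", [max(range(N_ACTIONS), key=lambda x: Q[s][x]) for s in range(N_STATES)])
```

DRL methods covered by the survey replace the table with a neural-network approximator, and POMDP variants condition the policy on an observation history rather than the true state.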


Author(s): Angelo Encapera ◽ Abhijit Gosavi

Artificial intelligence techniques can play a significant role in solving problems in the domain of Total Productive Maintenance (TPM). This paper presents a new reinforcement learning algorithm, iSMART, for solving the semi-Markov decision processes underlying TPM control problems. Unlike its precursor R-SMART, which required a decaying exploration rate, iSMART uses a constant exploration rate. Numerical experiments show encouraging behavior from the new algorithm.
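The abstract's key contrast is the exploration schedule: iSMART keeps the exploration rate constant, whereas R-SMART decayed it. Below is a hedged sketch of a generic average-reward Q-learning loop for a semi-Markov decision process with a constant epsilon, in that spirit; the two-state maintenance model, its rewards and sojourn times, and the update rule are illustrative assumptions and not the published iSMART algorithm.

```python
import random

# Hypothetical two-state maintenance SMDP (numbers are NOT from the paper):
# state 0 = machine healthy, state 1 = machine degraded.
# action 0 = keep producing, action 1 = perform maintenance.
def step(state, action):
    """Return (next_state, reward, sojourn_time); all dynamics are assumptions."""
    if action == 1:                      # maintenance: costly, slow, restores health
        return 0, -5.0, random.uniform(2.0, 4.0)
    if state == 0:                       # healthy production, may degrade
        nxt = 1 if random.random() < 0.3 else 0
        return nxt, 10.0, random.uniform(1.0, 2.0)
    # degraded production: lower reward, slow self-recovery
    nxt = 0 if random.random() < 0.1 else 1
    return nxt, 2.0, random.uniform(1.0, 3.0)

Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, epsilon = 0.01, 0.15              # epsilon stays CONSTANT, no decay schedule
rho, total_r, total_t = 0.0, 0.0, 0.0    # running reward-rate estimate

s = 0
for _ in range(200_000):
    greedy = max(range(2), key=lambda x: Q[s][x])
    a = random.randrange(2) if random.random() < epsilon else greedy
    s2, r, tau = step(s, a)
    # relative-value style update for average-reward SMDPs
    Q[s][a] += alpha * (r - rho * tau + max(Q[s2]) - Q[s][a])
    if a == greedy:                      # refresh the rate estimate on greedy moves only
        total_r += r
        total_t += tau
        rho = total_r / total_t
    s = s2

print(f"Estimated reward rate: {rho:.2f}")
print("Greedy policy:", [max(range(2), key=lambda x: Q[s][x]) for s in range(2)])
```

In the semi-Markov setting, the sojourn time tau enters the update through the rate term rho * tau, which is why the reward-rate estimate rho is tracked alongside the Q-values.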

