A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique

Entropy ◽  
2017 ◽  
Vol 19 (7) ◽  
pp. 365 ◽  
Author(s):  
Xiong Luo ◽  
Jing Deng ◽  
Weiping Wang ◽  
Jenq-Haur Wang ◽  
Wenbing Zhao
Author(s):  
Zizhuo Meng ◽  
Jie Xu ◽  
Zhidong Li ◽  
Yang Wang ◽  
Fang Chen ◽  
...  

1999 ◽  
Vol 11 (8) ◽  
pp. 2017-2060 ◽  
Author(s):  
Csaba Szepesvári ◽  
Michael L. Littman

Reinforcement learning is the problem of generating optimal behavior in a sequential decision-making environment given the opportunity to interact with it. Many algorithms for solving reinforcement-learning problems work by computing improved estimates of the optimal value function. We extend prior analyses of reinforcement-learning algorithms and present a powerful new theorem that provides a unified analysis of such value-function-based reinforcement-learning algorithms. The usefulness of the theorem lies in how it allows the convergence of a complex asynchronous reinforcement-learning algorithm to be proved by verifying that a simpler synchronous algorithm converges. We illustrate the application of the theorem by analyzing the convergence of Q-learning, model-based reinforcement learning, Q-learning with multistate updates, Q-learning for Markov games, and risk-sensitive reinforcement learning.
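As a concrete illustration of the class of algorithms the theorem addresses, the sketch below shows a tabular, asynchronous Q-learning loop in Python. The environment interface (reset/step returning next state, reward, and a done flag) and all hyperparameter values are assumptions made for illustration, not part of the paper.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: asynchronous value-function updates of the
    kind covered by the convergence analysis described above."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()          # hypothetical Gym-style interface
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # asynchronous update of a single (state, action) entry
            td_target = r + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (td_target - Q[s, a])
            s = s_next
    return Q
```

The theorem's practical value is that convergence of such an asynchronous, sample-by-sample loop can be established by analyzing a simpler synchronous counterpart that updates all state-action pairs at once.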


2018 ◽  
Vol 112 ◽  
pp. 111-117 ◽  
Author(s):  
Qingchao Wang ◽  
Guangyuan Fu ◽  
Linlin Li ◽  
Hongqiao Wang ◽  
Yongqiang Li

2005 ◽  
Vol 24 ◽  
pp. 81-108 ◽  
Author(s):  
P. Geibel ◽  
F. Wysotzki

In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are states that are undesirable or dangerous to enter. We define the risk with respect to a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model-free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution to the constrained problem that performs well with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model to obtain an optimal solution. The strength of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed.
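The sketch below illustrates, in Python, the weighting idea described in the abstract: one tabular estimate for the original value criterion, a second for the risk (the probability of reaching an error state), a combined score used for action selection, and a simple rule that adapts the weight toward the user-specified risk threshold. The environment interface, the start-state check, and the adaptation rule are assumptions used for illustration; they are not the authors' implementation.

```python
import numpy as np

def risk_weighted_q_learning(env, n_states, n_actions, risk_threshold,
                             episodes=1000, alpha=0.1, gamma=0.99,
                             epsilon=0.1, xi=1.0, xi_step=0.05):
    """Heuristic sketch: weight a value criterion against a risk criterion
    (probability of entering an error state) and adapt the weight until the
    estimated risk falls below the user-specified threshold."""
    Q_val = np.zeros((n_states, n_actions))   # original return criterion
    Q_risk = np.zeros((n_states, n_actions))  # estimated error-state probability
    for _ in range(episodes):
        s = env.reset()                        # hypothetical environment interface
        done = False
        while not done:
            scores = Q_val[s] - xi * Q_risk[s]   # weighted combination of criteria
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(scores))
            # step() assumed to report whether an error state was entered
            s_next, r, is_error, done = env.step(a)
            greedy = int(np.argmax(Q_val[s_next] - xi * Q_risk[s_next]))
            # value criterion: discounted return
            target_v = r + gamma * Q_val[s_next, greedy] * (not done)
            Q_val[s, a] += alpha * (target_v - Q_val[s, a])
            # risk criterion: undiscounted probability of reaching an error state
            target_r = float(is_error) + Q_risk[s_next, greedy] * (not done)
            Q_risk[s, a] += alpha * (target_r - Q_risk[s, a])
            s = s_next
        # adapt the weight: emphasize risk while the start-state risk estimate
        # still exceeds the threshold (start_state is a hypothetical attribute)
        if Q_risk[env.start_state].max() > risk_threshold:
            xi += xi_step
        else:
            xi = max(0.0, xi - xi_step)
    return Q_val, Q_risk, xi
```

The key design point mirrored here is that the risk is learned as its own cumulative criterion, independent of the original value function, so the weight xi can trade the two off without redefining the reward.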


2018 ◽  
Vol 23 (11) ◽  
pp. 3697-3706
Author(s):  
Qingchao Wang ◽  
Guangyuan Fu ◽  
Hongqiao Wang ◽  
Linlin Li ◽  
Shuai Huang
