State Space Formulas for a Suboptimal Rational Leech Problem I: Maximum Entropy Solution

2014, Vol. 79 (4), pp. 533-553
Author(s): A. E. Frazho, S. ter Horst, M. A. Kaashoek

2011, Vol. 56 (9), pp. 1999-2012
Author(s): Francesca P. Carli, Augusto Ferrante, Michele Pavon, Giorgio Picci

2003, Vol. 125 (6), pp. 1197-1205
Author(s): Sun Kyoung Kim, Woo Il Lee

A solution scheme based on the maximum entropy method (MEM) is established for two-dimensional inverse heat conduction problems. MEM seeks the solution that maximizes an entropy functional subject to the given temperature measurements. The proposed method converts the inverse problem into a nonlinear constrained optimization problem whose constraint is statistical consistency between the measured and the estimated temperatures. Successive quadratic programming (SQP) is used to compute the maximum entropy solution numerically. The characteristic features of the proposed method are investigated through sample numerical results, which show a considerable enhancement in resolution for stringent cases in comparison with a conventional method.
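As a rough illustration of the formulation described in the abstract, the sketch below maximizes an entropy functional under a statistical-consistency constraint using SciPy's SLSQP solver (a sequential quadratic programming method, in the spirit of the SQP step mentioned above). The linearized sensitivity matrix A, the noise level sigma, and all variable names are illustrative assumptions, not the authors' code.

```python
# Minimal MEM sketch: maximize entropy of a positive flux vector q subject
# to chi-square consistency with noisy temperature measurements.
# Assumption: the forward conduction model is linearized as T_est = A @ q.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_flux, n_meas = 20, 30
A = rng.random((n_meas, n_flux))            # hypothetical sensitivity matrix
q_true = np.exp(-((np.arange(n_flux) - 10) / 3.0) ** 2)
sigma = 0.01                                # assumed measurement noise level
T_meas = A @ q_true + sigma * rng.normal(size=n_meas)  # synthetic data

def neg_entropy(q):
    p = q / q.sum()                          # normalize flux to a distribution
    return np.sum(p * np.log(p + 1e-12))     # minimizing this maximizes entropy

def chi2_consistency(q):
    r = (A @ q - T_meas) / sigma
    return n_meas - np.dot(r, r)             # >= 0 enforces chi^2 <= n_meas

res = minimize(
    neg_entropy,
    x0=np.full(n_flux, q_true.mean()),
    method="SLSQP",                          # sequential quadratic programming
    bounds=[(1e-9, None)] * n_flux,          # positivity of the flux estimate
    constraints=[{"type": "ineq", "fun": chi2_consistency}],
)
q_mem = res.x                                # maximum entropy estimate
```

The inequality constraint plays the role of the statistical-consistency condition in the abstract: among all flux fields that explain the measurements to within the noise level, the flattest (highest-entropy) one is selected.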


Author(s): S. F. Gull, A. K. Livesey, D. S. Sivia

2019, Vol. 2019, pp. 1-8
Author(s): Xi-liang Chen, Lei Cao, Zhi-xiong Xu, Jun Lai, Chen-xi Li

Inverse reinforcement learning (IRL) assumes that demonstrations come from an agent acting optimally in an environment. Most previous IRL methods require computing optimal policies for many candidate reward functions, a requirement that is difficult to satisfy in tasks with large or continuous state spaces, let alone continuous action spaces. We propose a continuous maximum entropy deep inverse reinforcement learning algorithm for continuous state and action spaces, which achieves a deep understanding of the environment model by reconstructing the reward function from the demonstrations, together with a demonstration-based hot-start mechanism that makes training faster and more stable. We compare this new approach with well-known baselines, including Maximum Entropy IRL, DDPG, and hot-start DDPG. Empirical results on the classical control environment MountainCarContinuous-v0 from OpenAI Gym show that our approach learns policies faster and better.
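The reward-reconstruction step in an algorithm of this kind can be sketched as below, assuming a small PyTorch reward network and the sample-based maximum entropy IRL gradient (raise reward on expert state-action pairs, lower it on pairs visited by the current policy). Every class and function name here is a hypothetical illustration, not the paper's implementation.

```python
# Sketch of one max-ent IRL reward update for continuous states and actions.
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Learned reward R(s, a) over continuous states and actions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def irl_update(reward_net, opt, expert_sa, policy_sa):
    """One maximum entropy IRL gradient step on the reward network.

    expert_sa, policy_sa: (states, actions) tensor pairs drawn from the
    demonstrations and from rollouts of the current policy (e.g. DDPG).
    """
    r_expert = reward_net(*expert_sa).mean()
    r_policy = reward_net(*policy_sa).mean()
    loss = -(r_expert - r_policy)   # push expert reward up, policy reward down
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Under this reading, the "hot start" in the abstract would correspond to seeding the policy's replay buffer (or pretraining the actor) with the same demonstrations before training the policy against the learned reward, so that early rollouts already resemble expert behavior.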

