Path Integral Control on Lie Groups

Author(s): George I. Boutselis, Evangelos A. Theodorou
IEEE Access, 2019, Vol 7, pp. 47353-47365
Author(s): Chen Liang, Weihong Wang, Zhenghua Liu, Chao Lai, Benchun Zhou

Author(s): Vicenç Gómez, Hilbert J. Kappen, Jan Peters, Gerhard Neumann

2022
Author(s): Matthew D. Houghton, Alexander B. Oshin, Michael J. Acheson, Evangelos A. Theodorou, Irene M. Gregory

2017, Vol 40 (2), pp. 344-357
Author(s): Grady Williams, Andrew Aldrich, Evangelos A. Theodorou

Entropy, 2020, Vol 22 (10), pp. 1120
Author(s): Tom Lefebvre, Guillaume Crevecoeur

In this article, we present a generalized view on Path Integral Control (PIC) methods. PIC refers to a particular class of policy search methods that are closely tied to the setting of Linearly Solvable Optimal Control (LSOC), a restricted subclass of nonlinear Stochastic Optimal Control (SOC) problems. This class is unique in that it can be solved explicitly, yielding a formal optimal state trajectory distribution. In this contribution, we first review PIC theory and discuss related algorithms tailored to policy search in general. We identify a generic design strategy that relies on the existence of an optimal state trajectory distribution and finds a parametric policy by minimizing the cross-entropy between the optimal state trajectory distribution and the state trajectory distribution induced by a parametric stochastic policy. Inspired by this observation, we then formulate a SOC problem that shares traits with the LSOC setting yet covers a less restrictive class of problem formulations. We refer to this SOC problem as Entropy Regularized Trajectory Optimization. It is closely related to the Entropy Regularized Stochastic Optimal Control setting, which has lately received considerable attention from the Reinforcement Learning (RL) community. We analyze the theoretical convergence behavior of the induced state trajectory distribution sequence and draw connections with stochastic search methods tailored to classic optimization problems. Finally, we derive explicit updates and compare the resulting Entropy Regularized PIC with earlier work in the context of both PIC and RL for derivative-free trajectory optimization.
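The generic design strategy described in the abstract — sample trajectories under a stochastic policy, weight them by exponentiated cost, and update the policy parameters toward the weighted average — can be sketched as a single derivative-free update step. This is a minimal illustrative sketch, not the authors' algorithm: `rollout_fn` (a user-supplied function mapping a parameter vector to a scalar trajectory cost), `sigma` (exploration noise), and `lam` (the temperature scaling the exponentiated costs) are all assumed names for this example.

```python
import numpy as np

def pic_update(theta, rollout_fn, num_samples=64, sigma=0.5, lam=1.0, rng=None):
    """One sampling-based PIC-style parameter update.

    Draws Gaussian perturbations of the current parameters, scores each
    perturbed parameter vector with rollout_fn (total trajectory cost),
    converts costs to normalized exponential weights, and returns the
    weighted mean of the perturbed parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample K Gaussian perturbations around the current parameters.
    eps = rng.normal(0.0, sigma, size=(num_samples,) + theta.shape)
    costs = np.array([rollout_fn(theta + e) for e in eps])
    # Exponentiated-cost weights; subtracting the minimum cost keeps
    # the exponentials numerically stable without changing the weights.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    # Weighted average of perturbations: low-cost samples pull harder.
    return theta + np.tensordot(w, eps, axes=1)
```

Iterating this update drives the sampling distribution toward low-cost parameter regions, mirroring the cross-entropy minimization between the optimal and the policy-induced trajectory distributions that the abstract describes.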


2021
Author(s): Jintasit Pravitra, Evangelos Theodorou, Eric N. Johnson

Author(s): Grady Williams, Paul Drews, Brian Goldfain, James M. Rehg, Evangelos A. Theodorou

2015, Vol 80, pp. 9-15
Author(s): Zhifei Zhang, Alain Sarlette, Zhihao Ling

Author(s): Ermano Arruda, Michael J. Mathew, Marek Kopicki, Michael Mistry, Morteza Azad, ...
