Rapid ascent trajectory optimization for guided rockets via sequential convex programming

Author(s): Kai Zhang, Shuxing Yang, Fenfen Xiong

In this paper, a sequential convex programming algorithm is proposed to solve complex ascent trajectory optimization problems for guided rockets. Because of the nonlinear dynamics and constraints, in particular the nonlinear thrust terms and aerodynamic drag, these problems are difficult to solve rapidly. First, the complex thrust terms in the dynamic equations are approximately transformed into linear (convex) functions of the angle of attack. Second, the nonlinear drag coefficient is transformed into a linear (convex) function of the design variables by introducing two new control variables; a relaxation technique is used to relax the coupling constraints between these control variables to avoid nonconvexity, and the accuracy of the relaxation is proved using optimal control theory. Then, the nonconvex objective functions and dynamical equations are convexified by first-order Taylor expansions. Finally, a sequential convex programming iterative algorithm is proposed to solve the ascent trajectory planning problem accurately and rapidly. A terminal-velocity-maximization ascent problem is simulated and compared against the General Pseudospectral Optimal Control Software (GPOPS) method, demonstrating the effectiveness and speed of the proposed approach.
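To make the successive convexification loop concrete, the sketch below (in Python with CVXPY) linearizes a nonconvex drag term around the previous iterate via a first-order Taylor expansion and re-solves the resulting convex subproblem until the trajectory stops changing, with a trust region keeping the linearization valid. The 1-D point-mass model, constants, fixed trust-region radius, and convergence tolerance are illustrative assumptions, not the paper's formulation.

    # Minimal SCP sketch for a simplified 1-D ascent problem (assumed model).
    import cvxpy as cp
    import numpy as np

    N, dt = 50, 0.5          # discretization nodes and time step (assumed)
    c_d, u_max = 0.02, 3.0   # drag coefficient and thrust bound (assumed)
    rho = 5.0                # trust-region radius on the velocity iterate

    v_bar = np.zeros(N + 1)  # reference trajectory to linearize around

    for it in range(10):
        v = cp.Variable(N + 1)   # velocity profile
        u = cp.Variable(N)       # thrust acceleration command

        cons = [v[0] == 0, cp.abs(u) <= u_max,
                cp.abs(v - v_bar) <= rho]  # trust region
        for k in range(N):
            # Nonconvex drag v^2 replaced by its first-order Taylor
            # expansion around the previous iterate (convexification).
            drag_lin = v_bar[k]**2 + 2 * v_bar[k] * (v[k] - v_bar[k])
            cons.append(v[k + 1] == v[k] + dt * (u[k] - c_d * drag_lin))

        prob = cp.Problem(cp.Maximize(v[N]), cons)  # maximize terminal velocity
        prob.solve()

        if np.max(np.abs(v.value - v_bar)) < 1e-4:  # iterates agree: converged
            break
        v_bar = v.value  # re-linearize around the new solution

Each subproblem here is a small linear program, which is what makes the per-iteration solve fast; the paper's full formulation additionally convexifies the thrust terms and relaxes the coupling between control variables, which this toy omits.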

2018, Vol 29 (3), pp. 318-327
Author(s): Guilherme Matiussi Ramalho, Sidney Roberto Carvalho, Erlon Cristian Finardi, Ubirajara Franco Moreno

Entropy, 2020, Vol 22 (10), pp. 1120
Author(s): Tom Lefebvre, Guillaume Crevecoeur

In this article, we present a generalized view on Path Integral Control (PIC) methods. PIC refers to a particular class of policy search methods that are closely tied to the setting of Linearly Solvable Optimal Control (LSOC), a restricted subclass of nonlinear Stochastic Optimal Control (SOC) problems. This class is unique in the sense that it can be solved explicitly, yielding a formal optimal state trajectory distribution. In this contribution, we first review PIC theory and discuss related algorithms tailored to policy search in general. We identify a generic design strategy that relies on the existence of an optimal state trajectory distribution and finds a parametric policy by minimizing the cross-entropy between the optimal state trajectory distribution and the distribution induced by a parametric stochastic policy. Inspired by this observation, we then aim to formulate a SOC problem that shares traits with the LSOC setting yet covers a less restrictive class of problem formulations. We refer to this SOC problem as Entropy Regularized Trajectory Optimization. The problem is closely related to the Entropy Regularized Stochastic Optimal Control setting, which has lately received considerable attention from the Reinforcement Learning (RL) community. We analyze the theoretical convergence behavior of the state trajectory distribution sequence and draw connections with stochastic search methods tailored to classic optimization problems. Finally, we derive explicit updates and compare the implied Entropy Regularized PIC with earlier work in the context of both PIC and RL for derivative-free trajectory optimization.
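To illustrate the generic design strategy identified above — sampling trajectories, weighting them by exponentiated cost, and updating a parametric policy toward the cross-entropy minimizer — here is a minimal derivative-free sketch in Python in the spirit of PI²-style PIC updates. The toy double-integrator rollout, quadratic cost, and all hyperparameters are illustrative assumptions, not the article's formulation.

    # Path-integral-style policy search: exponentiated-cost weighted averaging.
    import numpy as np

    rng = np.random.default_rng(0)
    H, K = 20, 64            # horizon and number of sampled trajectories
    lam, sigma = 1.0, 0.3    # temperature and exploration noise (assumed)

    def rollout_cost(u):
        """Toy 1-D double integrator (assumed): reach x = 1 with cheap control."""
        x = v = cost = 0.0
        for a in u:
            v += 0.1 * a
            x += 0.1 * v
            cost += 0.01 * a**2
        return cost + 10.0 * (x - 1.0)**2

    theta = np.zeros(H)      # mean control sequence: the policy parameters
    for it in range(100):
        eps = sigma * rng.standard_normal((K, H))   # perturbed rollouts
        costs = np.array([rollout_cost(theta + e) for e in eps])
        # Exponentiated-cost weights; subtracting the minimum cost keeps
        # the exponentials numerically stable without changing the weights.
        w = np.exp(-(costs - costs.min()) / lam)
        w /= w.sum()
        theta += w @ eps     # weighted-mean update toward low-cost samples

The weighted-mean step corresponds to minimizing the cross-entropy between a Gaussian policy over control sequences and the exponentiated-cost trajectory weighting, which is the common thread the article draws between PIC and stochastic search methods.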

