Maximum Principle for Forward-Backward Control System Driven by Itô-Lévy Processes under Initial-Terminal Constraints

2017 ◽  
Vol 2017 ◽  
pp. 1-13
Author(s):  
Meijuan Liu ◽  
Xiangrong Wang ◽  
Hong Huang

This paper investigates a stochastic optimal control problem in which the control system is driven by an Itô-Lévy process. Using the classical variational technique, we prove a necessary condition for the existence of an optimal control under the assumption that the control domain is convex. The forward-backward stochastic differential equations (FBSDEs) are fully coupled, and the control variable is allowed to enter both the diffusion and the jump coefficients. Moreover, the initial and terminal states are constrained. Finally, as an application to finance, we present an example of a recursive consumption utility optimization problem to illustrate the practicability of our result.

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Hong Huang ◽  
Xiangrong Wang ◽  
Ying Li

This paper analyzes a class of optimal control problems described by forward-backward stochastic differential equations with Lévy processes (FBSDEL). We derive a necessary condition for the existence of an optimal control by means of the spike variation technique; the control domain is not necessarily convex. We also obtain the maximum principle for this control system when the initial and terminal states are constrained. Finally, a financial example is discussed to illustrate the application of our result.


Author(s):  
K. L. Teo ◽  
K. H. Wong ◽  
Z. S. Wu

A class of convex optimal control problems involving linear hereditary systems with linear control constraints and nonlinear terminal constraints is considered. A result on the existence of an optimal control is proved and a necessary condition for optimality is given. An iterative algorithm is presented for solving the optimal control problem under consideration. The convergence property of the algorithm is also investigated. To test the algorithm, an example is solved.


Mathematics ◽  
2019 ◽  
Vol 7 (12) ◽  
pp. 1192 ◽  
Author(s):  
Fauzi Mohamed Yusof ◽  
Farah Aini Abdullah ◽  
Ahmad Izani Md. Ismail

In this paper, optimal control theory is applied to a system of ordinary differential equations representing a hantavirus infection in rodent and alien populations. The effect of the optimal control in eliminating the rodent population that causes the hantavirus infection is investigated. Pontryagin's maximum principle is used to obtain the necessary conditions for the controls to be optimal, and the Runge–Kutta method is then used to solve the resulting optimality system. The findings suggest that the infection may be eradicated by implementing the controls for a certain period of time. This research concludes that the optimal control model is an effective method for reducing the number of infectious individuals in a community and its environment.
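The Pontryagin-plus-Runge–Kutta procedure described above is commonly implemented as a forward-backward sweep: integrate the state forward, the adjoint backward, then update the control from the stationarity condition. The sketch below runs this loop on a scalar toy problem (minimize ∫(x² + u²)dt with dx/dt = −x + u), not on the hantavirus model, whose equations are not reproduced in the abstract; the problem data and step counts are illustrative assumptions.

```python
import numpy as np

# Forward-backward sweep for the toy problem:
#   min ∫_0^1 (x^2 + u^2) dt,  dx/dt = -x + u,  x(0) = 1.
# Pontryagin gives u* = -lam/2 and the adjoint equation
#   lam' = -2x + lam,  lam(1) = 0.
def sweep(n=1000, iters=50, relax=0.5):
    h = 1.0 / n
    u = np.zeros(n + 1)
    for _ in range(iters):
        # forward state pass (explicit Euler on a fine grid)
        x = np.empty(n + 1)
        x[0] = 1.0
        for k in range(n):
            x[k + 1] = x[k] + h * (-x[k] + u[k])
        # backward adjoint pass
        lam = np.empty(n + 1)
        lam[-1] = 0.0
        for k in range(n, 0, -1):
            lam[k - 1] = lam[k] - h * (-2 * x[k] + lam[k])
        # relaxed control update from the stationarity condition
        u = (1 - relax) * u + relax * (-lam / 2)
    cost = h * np.sum(x**2 + u**2)
    return x, u, cost
```

The relaxation factor damps the fixed-point iteration; without it the sweep can oscillate on stiffer problems.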


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Guiling Li ◽  
Weihai Zhang

This paper studies the indefinite stochastic linear quadratic (LQ) optimal control problem with an inequality constraint on the terminal state. Firstly, we prove a generalized Karush-Kuhn-Tucker (KKT) theorem under hybrid constraints. Secondly, a new type of generalized Riccati equation is obtained, based on which a necessary condition (also sufficient under stronger assumptions) for the existence of an optimal linear state feedback control is given by means of the KKT theorem. Finally, we design a dynamic programming algorithm to solve the constrained indefinite stochastic LQ problem.
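For orientation, the dynamic-programming backbone of LQ problems is the backward Riccati recursion. The sketch below is the standard deterministic, definite case only, an assumed baseline; the paper's contribution replaces it with a generalized Riccati equation handling indefinite weights and stochastic dynamics, which this sketch does not attempt.

```python
import numpy as np

# Standard discrete-time LQ via backward dynamic programming:
#   min sum_k (x_k' Q x_k + u_k' R u_k) + x_N' QN x_N,
#   x_{k+1} = A x_k + B u_k.
def lq_riccati(A, B, Q, R, QN, N):
    P = QN
    gains = []
    for _ in range(N):
        # optimal feedback u_k = -K x_k at this stage
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update for the cost-to-go matrix
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()  # gains[k] now applies at step k
    return gains, P
```

For the scalar case A = B = Q = R = 1, the recursion converges to the fixed point P = (1 + √5)/2, which is a convenient sanity check.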


Filomat ◽  
2016 ◽  
Vol 30 (3) ◽  
pp. 711-720
Author(s):  
Charkaz Aghayeva

This paper concerns the stochastic optimal control problem of switching systems with delay. The evolution of the system is governed by a collection of stochastic delay differential equations with initial conditions that depend on the previous state. The restriction on the system is defined by a functional constraint containing state and time parameters. First, a maximum principle for the stochastic control problem of a delay switching system without constraints is established. Then, using Ekeland's variational principle, the necessary condition of optimality for the constrained control system is obtained.


2012 ◽  
Vol 2012 ◽  
pp. 1-29 ◽  
Author(s):  
Shaolin Ji ◽  
Qingmeng Wei ◽  
Xiumin Zhang

We study the optimal control problem of a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition for the stochastic optimal control, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.


2000 ◽  
Vol 123 (3) ◽  
pp. 518-527 ◽  
Author(s):  
Yongcai Xu ◽  
Masami Iwase ◽  
Katsuhisa Furuta

Swing-up of a rotating type pendulum from the pendant to the inverted state is known to be one of the most difficult control problems, since the system is nonlinear, underactuated, and has uncontrollable states. This paper studies time optimal swing-up control of the pendulum using bounded input. Time optimal control of a nonlinear system can be formulated by Pontryagin's Maximum Principle, which is, however, hard to compute in practice. In this paper, a new computational approach is presented to attain a numerical solution of the time optimal swing-up problem. The time optimal control problem is posed as minimization of the time to reach the terminal state under a bounded input amplitude, but algorithms that solve this problem directly are known to be complicated. Therefore, it is shown how the time optimal swing-up control can be formulated as an auxiliary problem in which the minimal input amplitude is searched so that the terminal state satisfies a specification at a given time. Through the proposed approach, time optimal control can be solved by nonlinear optimization. The approach is evaluated by numerical simulations of a simplified pendulum model, checked against the necessary condition of the Maximum Principle, and experimentally verified using the rotating type pendulum.
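The auxiliary formulation above, fix the terminal time and search for the minimal input amplitude that still meets the terminal specification, can be illustrated on a plant with a known answer. The sketch below uses a rest-to-rest double integrator as a stand-in (an assumption; the paper's pendulum dynamics require numerical simulation inside the feasibility test), where the minimal amplitude is A* = 4d/T² in closed form.

```python
import math

# Feasibility test: with |u| <= A, the time-optimal rest-to-rest
# transfer of a double integrator over distance d is bang-bang and
# takes 2*sqrt(d/A); the terminal spec is met iff that time fits in T.
def reachable(A, T, d=1.0):
    return 2.0 * math.sqrt(d / A) <= T

# Bisection on the amplitude: the smallest A for which the terminal
# state is reachable at the given time T.
def min_amplitude(T, d=1.0, lo=1e-6, hi=1e6, tol=1e-9):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if reachable(mid, T, d):
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return hi
```

For the pendulum, `reachable` would instead simulate the closed-loop swing-up under the candidate bound, but the monotone search structure is the same.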


2021 ◽  
Vol 6 (3) ◽  
pp. 213
Author(s):  
Jian Song ◽  
Meng Wang

We consider the stochastic optimal control problem for a dynamical system governed by a stochastic differential equation driven by a local martingale with a spatial parameter. Assuming convexity of the control domain, we obtain a stochastic maximum principle as the necessary condition for an optimal control, and we also prove its sufficiency under proper conditions. The stochastic linear quadratic problem in this setting is also discussed.


2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
Yuefen Chen ◽  
Minghai Yang

Uncertainty theory is a branch of mathematics for modeling human uncertainty based on the normality, duality, subadditivity, and product axioms. This paper studies a discrete-time LQ optimal control problem with a terminal state constraint, in which the weighting matrices in the cost function are indefinite and the system states are disturbed by uncertain noises. We first transform the uncertain LQ problem into an equivalent deterministic LQ problem. The main result is a necessary condition for the constrained indefinite LQ optimal control problem, obtained by means of the Lagrange multiplier method. Moreover, to guarantee the well-posedness of the indefinite LQ problem and the existence of an optimal control, a sufficient condition is presented. Finally, a numerical example is given.
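The Lagrange multiplier treatment of a terminal constraint can be seen in miniature on a deterministic scalar special case (an assumption for illustration; the paper's indefinite weights and uncertain noises are omitted): minimize the control energy Σu² subject to hitting a prescribed terminal state. The terminal state is linear in the controls, so stationarity of the Lagrangian yields the least-norm solution in closed form.

```python
import numpy as np

# Scalar system x_{k+1} = a x_k + b u_k.  Minimize sum u_k^2 subject
# to the terminal constraint x_N = target.  Since
#   x_N = a^N x0 + sum_k a^(N-1-k) b u_k = a^N x0 + c @ u,
# the Lagrangian L = u @ u + lam (r - c @ u) with r = target - a^N x0
# is stationary at u = lam c / 2, and enforcing c @ u = r gives:
def min_energy_control(a, b, x0, target, N):
    c = np.array([a ** (N - 1 - k) * b for k in range(N)])
    r = target - a ** N * x0
    return c * r / (c @ c)

# Simulate the system under a control sequence, returning x_N.
def rollout(a, b, x0, u):
    x = x0
    for uk in u:
        x = a * x + b * uk
    return x
```

The same stationarity-plus-constraint pattern underlies the paper's necessary condition, with the scalar multiplier replaced by one attached to the terminal state constraint of the transformed deterministic problem.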


2020 ◽  
Vol 55 ◽  
pp. 33-41
Author(s):  
A.R. Danilin ◽  
A.A. Shaburov

The paper deals with an optimal control problem with a Bolza-type cost functional over a finite time interval for a linear time-invariant control system, in the class of piecewise continuous controls with smooth control constraints. In particular, we study the problem of controlling the motion of a system of small mass points under the action of a bounded force. The terminal part of the convex integral cost functional depends additively on the slow and fast variables, and the integral term is a strictly convex function of the control variable. If the system is completely controllable, then the Pontryagin maximum principle is a necessary and sufficient condition for optimality. The main difference between this study and previous works is that the equation contains a zero matrix of fast variables, so the results of A.B. Vasilieva on the asymptotics of the fundamental matrix of a control system do not apply. However, the linear time-invariant system satisfies the condition of complete controllability. The article shows that optimal control problems with a convex integral cost functional are more regular than time-optimal problems.

