Stochastic filtering and optimal control of pure jump Markov processes with noise-free partial observation

2020 · Vol 26 · pp. 25 · Author(s): Alessandro Calvia

We consider an infinite-horizon optimal control problem for a pure jump Markov process X, taking values in a complete and separable metric space I, with noise-free partial observation. The observation process is defined as Yt = h(Xt), t ≥ 0, where h is a given map defined on I. The observation is noise-free in the sense that the only source of randomness is the process X itself. The aim is to minimize a discounted cost functional. In the first part of the paper we write down an explicit filtering equation and characterize the filtering process as a Piecewise Deterministic Process. In the second part, after transforming the original control problem with partial observation into one with complete observation (the separated problem) via the filtering equation, we prove the equivalence of the original and separated problems through an explicit formula linking their respective value functions. The value function of the separated problem is also characterized as the unique fixed point of a suitably defined contraction mapping.
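The fixed-point characterization above rests on the Banach contraction principle for a discounted Bellman operator. As a toy illustration only (the paper works on a general metric space I; the finite-state controlled Markov chain, and all names and data below, are illustrative assumptions, not the paper's setting), the same principle can be sketched as:

```python
import numpy as np

def bellman_operator(V, P, cost, beta):
    """One application of the Bellman operator T.

    P[a] is the transition matrix under action a, cost[a] the stage-cost
    vector. For a discount factor beta in (0, 1), T is a beta-contraction
    in the sup norm, hence has a unique fixed point: the value function."""
    return np.min([cost[a] + beta * P[a] @ V for a in range(len(P))], axis=0)

def solve_value_function(P, cost, beta, tol=1e-10):
    """Iterate T to its unique fixed point (Banach fixed-point theorem)."""
    V = np.zeros(P[0].shape[0])
    while True:
        V_new = bellman_operator(V, P, cost, beta)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Because T contracts distances by the factor beta, the iteration converges geometrically from any starting point, which is exactly why the fixed point characterizing the value function is unique.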

2018 · Vol 24 (2) · pp. 873–899 · Author(s): Mingshang Hu, Falei Wang

The present paper considers a stochastic optimal control problem in which the cost function is defined through a backward stochastic differential equation with infinite horizon driven by G-Brownian motion. We then study the regularity of the value function and establish the dynamic programming principle. Moreover, we prove that the value function is the unique viscosity solution of the related Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation.
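As a schematic illustration only (the symbols are generic placeholders, not reproduced from the paper), an infinite-horizon BSDE driven by G-Brownian motion of the type described is typically posed on every finite window [t, T]:

```latex
% Schematic form: B is a G-Brownian motion, K a decreasing G-martingale
% accounting for the sublinearity of the G-expectation.
\[
Y_t = Y_T + \int_t^T f(s, X_s, Y_s, Z_s)\,ds
      - \int_t^T Z_s\,dB_s - (K_T - K_t),
\qquad 0 \le t \le T < \infty .
\]
% The cost of a control u is then J(x; u) = Y_0^{x,u}, and optimizing
% over controls yields the value function studied via the HJBI equation.
```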


2018 · Vol 24 (1) · pp. 311–354 · Author(s): Elena Bandini

We consider an infinite-horizon discounted optimal control problem for piecewise deterministic Markov processes, where a piecewise open-loop control acts continuously on the jump dynamics and on the deterministic flow. For this class of control problems, the value function can in general be characterized as the unique viscosity solution to the corresponding Hamilton-Jacobi-Bellman equation. We prove that the value function can be represented by means of a backward stochastic differential equation (BSDE) on infinite horizon, driven by a random measure and with a sign constraint on its martingale part, for which we give existence and uniqueness results. This probabilistic representation is known as the nonlinear Feynman-Kac formula. Finally, we show that the constrained BSDE is related to an auxiliary dominated control problem, whose value function coincides with the value function of the original non-dominated control problem.
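As a schematic illustration only (the symbols are generic placeholders, not reproduced from the paper), a constrained BSDE on infinite horizon driven by a random measure of the type described can be written, on every finite window [t, T], as:

```latex
% Schematic form: q(ds, da) is a compensated random measure on a mark
% space A, and K is a nondecreasing process enforcing the constraint.
\[
Y_t = Y_T + \int_t^T f(X_s, Y_s)\,ds
      - \int_t^T\!\!\int_A Z_s(a)\, q(ds, da) + K_T - K_t,
\qquad 0 \le t \le T < \infty ,
\]
% together with a sign constraint on the martingale integrand
% (e.g. Z_s(a) \ge 0). The minimal solution Y then furnishes the
% nonlinear Feynman-Kac representation of the value function.
```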


2017 · Vol 2017 · pp. 1–8 · Author(s): Chao Liu, Shengjing Tang, Jie Guo

The intrinsic infinite-horizon optimal control problem of mechanical systems on a Lie group is investigated. The geometric optimal control problem is built on an intrinsic, coordinate-free model equipped with the Levi-Civita connection. In order to obtain an analytical solution of the optimal control problem from the geometric viewpoint, a simplified nominal system on the Lie group with an extra feedback loop is presented. With the geodesic distance and the Riemannian metric on the Lie group integrated into the cost function, a dynamic programming approach is employed and an analytical solution of the optimal control problem on the Lie group is obtained via the Hamilton-Jacobi-Bellman equation. For the special case of SO(3), the intrinsic optimal control method is applied to a quadrotor rotation control problem, and simulation results are provided to show the control performance.
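A key ingredient of such a cost function is the geodesic distance on SO(3), which for rotation matrices equals the rotation angle of the relative attitude error. As a minimal sketch (the helper `rot_z` and all numerical values are illustrative assumptions, not taken from the paper), it can be computed as:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (illustrative helper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geodesic_distance(R1, R2):
    """Riemannian (geodesic) distance between two rotations on SO(3):
    d(R1, R2) = arccos((tr(R1^T R2) - 1) / 2), the angle of the
    relative rotation R1^T R2. Clipping guards against round-off
    pushing the cosine slightly outside [-1, 1]."""
    cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))
```

This distance is left-invariant, so penalizing it in the cost measures attitude error intrinsically, independent of any coordinate chart on SO(3).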
