On the synthesis problem for optimal control of a nonlinear stochastic observation process

1995 ◽  
Vol 38 (11) ◽  
pp. 792-799
Author(s):  
S. V. Sokolov

Author(s):  
V. N. Afanas’ev ◽  
V. B. Kolmanovskii ◽  
V. R. Nosov

Author(s):  
S.V. Konstantinov ◽  
A.I. Diveev

A new approach to solving the problem of synthesizing an optimal control system, based on an approximation of a set of extremals, is considered. At the first stage, the optimal control problem is solved numerically for various initial states from a given domain; evolutionary algorithms are used for this numerical solution. At the second stage, the problem of approximating the found set of extremals by the method of symbolic regression is solved. The approach considered in this work makes it possible to eliminate the main drawback of the known approach to solving the control synthesis problem by the symbolic regression method, namely that the genetic algorithm used in solving the synthesis problem provides no information about the proximity of the found solution to the optimal one. Here, the control function is built on the basis of a set of extremals; therefore, any particular solution should be close to the optimal trajectory. A computational experiment is presented for the applied problem of synthesizing an optimal control system for a four-wheeled robot in the presence of phase constraints. It is demonstrated experimentally that the synthesized control function yields, for any initial state from the given domain, trajectories close to optimal with respect to the quality functional. During the experiment, initial states were considered both from the approximating set of optimal trajectories and other states from the same domain. The approximation of the set of extremals was carried out by the network operator method.
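The following is a minimal sketch of the two-stage scheme the abstract describes, assuming a toy double-integrator plant, a simple evolutionary search over piecewise-constant controls for stage one, and a least-squares polynomial surrogate standing in for the paper's symbolic-regression / network-operator step in stage two. All plant dynamics, cost weights, and parameter values here are illustrative assumptions, not the authors' implementation.

```python
# Sketch: (1) evolve open-loop extremals from several initial states,
# (2) fit a feedback law u = g(x) to the resulting (state, control) pairs.
import numpy as np

DT, N_STEPS = 0.05, 40          # integration step and horizon length (assumed)
U_MAX = 2.0                      # control bound (assumed)

def simulate(x0, u_seq):
    """Roll out the double integrator x1' = x2, x2' = u; return trajectory and cost."""
    x = np.array(x0, dtype=float)
    traj, cost = [x.copy()], 0.0
    for u in u_seq:
        u = np.clip(u, -U_MAX, U_MAX)
        x = x + DT * np.array([x[1], u])
        traj.append(x.copy())
        cost += DT * (x @ x + 0.1 * u * u)   # quadratic running cost (assumed)
    return np.array(traj), cost

def evolve_control(x0, pop=40, gens=200, sigma=0.5, rng=np.random.default_rng(0)):
    """Stage 1: evolutionary search for an open-loop extremal from one initial state."""
    best_u = np.zeros(N_STEPS)
    best_cost = simulate(x0, best_u)[1]
    for _ in range(gens):
        for _ in range(pop):
            cand = best_u + sigma * rng.standard_normal(N_STEPS)
            cost = simulate(x0, cand)[1]
            if cost < best_cost:
                best_u, best_cost = cand, cost
        sigma *= 0.99                        # slowly shrink the mutation step
    return best_u

# Stage 1: build the set of extremals for several initial states from the domain.
initial_states = [(-1.0, 0.0), (1.0, 0.5), (0.5, -1.0), (-0.5, 1.0)]
states, controls = [], []
for x0 in initial_states:
    u_seq = evolve_control(x0)
    traj, _ = simulate(x0, u_seq)
    states.append(traj[:-1])
    controls.append(u_seq)
X = np.vstack(states)                        # stacked states along all extremals
U = np.concatenate(controls)                 # corresponding optimal controls

# Stage 2: approximate u(x) over the extremal set; a quadratic-feature
# least-squares fit plays the role of the symbolic-regression step here.
features = np.column_stack([X[:, 0], X[:, 1], X[:, 0]**2, X[:, 1]**2,
                            X[:, 0] * X[:, 1], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(features, U, rcond=None)

def feedback(x):
    """Synthesized control function u = g(x), valid near the extremal set."""
    f = np.array([x[0], x[1], x[0]**2, x[1]**2, x[0] * x[1], 1.0])
    return float(np.clip(f @ coef, -U_MAX, U_MAX))

# Closed-loop check from an initial state that was not used in the fit.
x = np.array([0.8, -0.3])
closed_loop_cost = 0.0
for _ in range(N_STEPS):
    u = feedback(x)
    x = x + DT * np.array([x[1], u])
    closed_loop_cost += DT * (x @ x + 0.1 * u * u)
print(f"closed-loop cost from unseen initial state: {closed_loop_cost:.3f}")
```

The closed-loop check mirrors the experiment described in the abstract: the fitted control function is evaluated from an initial state outside the approximating set, and the resulting cost can be compared against the open-loop extremal computed directly for that state.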


2000 ◽  
Vol 39 (4) ◽  
pp. 1008-1042 ◽  
Author(s):  
R. Gabasov ◽  
F. M. Kirillova ◽  
N. V. Balashevich

2020 ◽  
Vol 26 ◽  
pp. 25
Author(s):  
Alessandro Calvia

We consider an infinite horizon optimal control problem for a pure jump Markov process X, taking values in a complete and separable metric space I, with noise-free partial observation. The observation process is defined as Yt = h(Xt), t ≥ 0, where h is a given map defined on I. The observation is noise-free in the sense that the only source of randomness is the process X itself. The aim is to minimize a discounted cost functional. In the first part of the paper we write down an explicit filtering equation and characterize the filtering process as a Piecewise Deterministic Process. In the second part, after transforming the original control problem with partial observation into one with complete observation (the separated problem) using filtering equations, we prove the equivalence of the original and separated problems through an explicit formula linking their respective value functions. The value function of the separated problem is also characterized as the unique fixed point of a suitably defined contraction mapping.
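As an illustration of the fixed-point characterization mentioned at the end of the abstract, the sketch below uses a finite-state, finite-action discounted control problem as a stand-in for the (infinite-dimensional) separated problem: the Bellman-type operator is a sup-norm contraction with modulus equal to the discount factor, so iterating it converges to the unique value function. The states, costs, and transition kernels are randomly generated assumptions, not data from the paper.

```python
# Sketch: value iteration as fixed-point iteration of a contraction mapping.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, beta = 5, 3, 0.9          # beta is the discount factor (assumed)

# Transition kernels P[a][i, j] and running costs c[i, a], chosen at random.
P = np.array([rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)])
c = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def bellman(v):
    """Apply T v(i) = min_a [ c(i, a) + beta * sum_j P_a(i, j) v(j) ]."""
    q = c + beta * np.stack([P[a] @ v for a in range(n_actions)], axis=1)
    return q.min(axis=1)

# Fixed-point iteration: ||T v - T w||_inf <= beta ||v - w||_inf guarantees
# convergence to the unique fixed point by the Banach fixed-point theorem.
v = np.zeros(n_states)
for k in range(1000):
    v_next = bellman(v)
    if np.max(np.abs(v_next - v)) < 1e-10:
        break
    v = v_next
print(f"converged after {k} iterations; value function: {np.round(v, 4)}")
```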

