Numerical constructions of optimal feedback in models of chemotherapy of a malignant tumor

2019 ◽  
Vol 17 (01) ◽  
pp. 1940004 ◽  
Author(s):  
Natalia G. Novoselova

In this paper, a problem of chemotherapy of a malignant tumor is considered. The dynamics are piecewise monotone and the therapy function has two maxima. The aim of therapy is to minimize the number of tumor cells at a given final time. The main result of this work is the construction of optimal feedbacks in the chemotherapy problem. The construction is based on the value function of the corresponding optimal control (therapy) problem. The value function is represented as a minimax generalized solution of the Hamilton–Jacobi–Bellman equation. It is proved that the optimal feedback is a discontinuous function and that its line of discontinuity satisfies the Rankine–Hugoniot conditions. The work also presents illustrative numerical examples of the construction of optimal feedbacks and Rankine–Hugoniot lines.
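
The construction of feedback from a value function can be illustrated with a generic one-dimensional sketch. The code below solves a toy HJB problem by backward dynamic programming on a grid; the growth law `g`, the therapy response `h`, and all parameter values are hypothetical stand-ins, not the model analyzed in the paper.

```python
import math

# Toy semi-Lagrangian scheme for a 1D Hamilton-Jacobi-Bellman equation:
# minimize the tumour size x(T) for dynamics dx/dt = g(x) - u*h(x) with
# control u in [0, u_max].  The growth law g and the therapy function h
# are hypothetical stand-ins, not the model from the paper.

def g(x):                       # logistic-type growth (assumed form)
    return x * (1.0 - x)

def h(x):                       # non-monotone therapy effect (assumed form)
    return math.sin(math.pi * x) ** 2 + 0.5 * math.sin(3.0 * math.pi * x) ** 2

def solve_hjb(nx=101, nt=200, T=2.0, u_max=1.0, nu=11):
    xs = [i / (nx - 1) for i in range(nx)]
    us = [u_max * k / (nu - 1) for k in range(nu)]
    dt = T / nt
    V = list(xs)                # terminal condition V(T, x) = x

    def interp(W, x):           # linear interpolation, clamped to [0, 1]
        x = min(max(x, 0.0), 1.0)
        i = min(int(x * (nx - 1)), nx - 2)
        w = x * (nx - 1) - i
        return (1.0 - w) * W[i] + w * W[i + 1]

    for _ in range(nt):         # march backward in time
        V = [min(interp(V, x + dt * (g(x) - u * h(x))) for u in us)
             for x in xs]
    return xs, V

xs, V = solve_hjb()
```

The feedback at each grid point is the minimizing control; along a line where that feedback jumps, the gradient of the value function is discontinuous, which is where Rankine–Hugoniot-type conditions enter.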

1984 ◽  
Vol 16 (1) ◽  
pp. 16-16
Author(s):  
Domokos Vermes

We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controllable, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p. 14)).


2017 ◽  
Vol 49 (2) ◽  
pp. 515-548 ◽  
Author(s):  
Hansjörg Albrecher ◽  
Pablo Azcue ◽  
Nora Muler

Abstract We consider a two-dimensional optimal dividend problem in the context of two insurance companies with compound Poisson surplus processes, who collaborate by paying each other's deficit when possible. We study the stochastic control problem of maximizing the weighted sum of expected discounted dividend payments (among all admissible dividend strategies) until ruin of both companies, by extending results of univariate optimal control theory. In the case that the dividends paid by the two companies are equally weighted, the value function of this problem compares favorably with the one of merging the two companies completely. We identify the optimal value function as the smallest viscosity supersolution of the respective Hamilton–Jacobi–Bellman equation and provide an iterative approach to approximate it numerically. Curve strategies are identified as the natural analogue of barrier strategies in this two-dimensional context. A numerical example is given for which such a curve strategy is indeed optimal among all admissible dividend strategies, and for which this collaboration mechanism also outperforms the suitably weighted optimal dividend strategies of the two stand-alone companies.
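
In the univariate special case that this two-dimensional problem extends, the value of a fixed barrier strategy can be estimated by straightforward Monte Carlo. The sketch below does this for a single Cramér–Lundberg surplus with exponential claims; all parameter values are illustrative, not taken from the paper.

```python
import math
import random

# Monte Carlo estimate of expected discounted dividends under a barrier
# strategy for a single compound Poisson (Cramér-Lundberg) surplus:
# premium rate c, Poisson claim rate lam, Exp(mu) claim sizes, barrier b,
# discount rate delta.  All numbers are illustrative.

def discounted_dividends(x0, b, c=1.5, lam=1.0, mu=1.0, delta=0.05,
                         horizon=200.0, rng=random):
    x, t, pv = min(x0, b), 0.0, 0.0
    if x0 > b:                       # lump payment down to the barrier
        pv += x0 - b
    while t < horizon:
        w = rng.expovariate(lam)     # time until the next claim
        hit = max((b - x) / c, 0.0)  # time for the drift to reach the barrier
        if hit < w:
            # at the barrier the premium drift is paid out continuously;
            # discounted value of that stream from t+hit to t+w:
            pv += (c / delta) * (math.exp(-delta * (t + hit))
                                 - math.exp(-delta * (t + w)))
            x = b
        else:
            x += c * w               # surplus drifts up, stays below b
        t += w
        x -= rng.expovariate(mu)     # claim of Exp(mu) size arrives
        if x < 0:                    # ruin: dividend payments stop
            break
    return pv

random.seed(0)
est = sum(discounted_dividends(2.0, 3.0) for _ in range(2000)) / 2000
```

Maximizing such an estimate over the barrier level b recovers the classical optimal barrier; the curve strategies of the paper play the analogous role in two dimensions.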


2013 ◽  
Vol 50 (4) ◽  
pp. 1025-1043 ◽  
Author(s):  
Nicole Bäuerle ◽  
Zejing Li

We consider a multi-asset financial market with stochastic volatility modeled by a Wishart process. This is an extension of the one-dimensional Heston model. Within this framework we study the problem of maximizing the expected utility of terminal wealth for power and logarithmic utility. We apply the usual stochastic control approach and obtain, explicitly, the optimal portfolio strategy and the value function in some parameter settings. In particular, we do this when the drift of the assets is a linear function of the volatility matrix. In this case the affine structure of the model can be exploited. In some cases we obtain a Feynman-Kac representation of the candidate value function. Though the approach we use is quite standard, the hard part is to identify when the solution of the Hamilton-Jacobi-Bellman equation is finite. This involves a couple of matrix analytic arguments. In a numerical study we discuss the influence of the investors' risk aversion on the hedging demand.
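
For logarithmic utility the optimal strategy is myopic: at each instant the vector of wealth fractions solves Sigma_t * pi = mu_t - r*1 at the current (stochastic) covariance Sigma_t. A minimal two-asset computation of that linear solve, with illustrative numbers not calibrated to the Wishart model of the paper:

```python
# Myopic log-utility portfolio: solve Sigma * pi = mu - r*1 for the
# wealth fractions pi.  The 2x2 solve is written out by hand; all
# numerical values are illustrative assumptions.

def solve2x2(S, y):
    a, b = S[0]
    c, d = S[1]
    det = a * d - b * c
    return [(d * y[0] - b * y[1]) / det,
            (a * y[1] - c * y[0]) / det]

Sigma = [[0.04, 0.01],
         [0.01, 0.09]]          # instantaneous covariance (assumed)
mu = [0.08, 0.10]               # asset drifts (assumed)
r = 0.02                        # risk-free rate (assumed)

excess = [m - r for m in mu]
pi = solve2x2(Sigma, excess)    # fractions of wealth in the two assets
```

In the Wishart setting the same formula is re-evaluated as Sigma_t moves, which is what makes the log-utility case tractable relative to power utility.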


2019 ◽  
Vol 11 (3) ◽  
pp. 168781401983320
Author(s):  
Yan Li ◽  
Yuanchun Li

A novel framework for rapid exponential stability and optimal feedback control is investigated and analyzed for a class of nonlinear systems through a variant of continuous Lyapunov functions and the Hamilton–Jacobi–Bellman equation. Rapid exponential stability means that the trajectories of nonlinear systems converge to equilibrium states in accelerated time. Sufficient conditions for rapid exponential stability are developed using continuous Lyapunov functions for nonlinear systems. Furthermore, based on a variant of continuous Lyapunov functions, rapid exponential stability is guaranteed for controlled nonlinear systems satisfying certain canonical conditions and the Hamilton–Jacobi–Bellman equation. It can be seen that the solution of the Hamilton–Jacobi–Bellman equation is a continuous Lyapunov function, and therefore rapid exponential stability and optimality are guaranteed for nonlinear systems. Lastly, the main result of this article is demonstrated via simulations of a nonlinear model of a spacecraft with one axis of symmetry, which are used to check rapid exponential stability. Moreover, for disturbances of the initial state, a rapidly exponentially stable controller can reject large-scale disturbances in controlled nonlinear systems. In addition, the proposed optimal feedback controller is applied to the tracking trajectories of a 2-degree-of-freedom manipulator, and the numerical results illustrate high efficiency and robustness in real time. The simulation results demonstrate the use of the rapid exponential stability and optimal feedback approach for real-time nonlinear systems.
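
The classical setting in which the HJB solution doubles as a Lyapunov function is the linear-quadratic regulator. The scalar sketch below, with illustrative coefficients, solves the algebraic Riccati equation that the HJB equation reduces to and checks that the resulting feedback stabilizes the closed loop; it is a textbook baseline, not the paper's accelerated-time construction.

```python
import math

# Scalar LQR: dynamics xdot = a*x + b*u, cost = integral of q*x**2 + r*u**2.
# The HJB equation reduces to the algebraic Riccati equation
#   2*a*p - (b*p)**2 / r + q = 0,
# V(x) = p*x**2 is the value function (a Lyapunov function), and
# u = -(b/r)*p*x is the optimal feedback.  Coefficients are illustrative.

a, b, q, r = 1.0, 1.0, 1.0, 1.0
p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)  # positive root
k = b * p / r                   # feedback gain, u = -k*x
closed = a - b * k              # closed-loop pole (negative => stable)
```

Here V(x) = p*x**2 decreases along closed-loop trajectories at rate 2*closed, which is the sense in which the HJB solution certifies both optimality and stability.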


Author(s):  
O. Alvarez

A quasilinear elliptic equation in ℝN of Hamilton-Jacobi-Bellman type is studied. An optimal criterion for uniqueness which involves only a lower bound on the functions is given. The unique solution in this class is identified as the value function of the associated stochastic control problem.


2019 ◽  
Vol 22 (02) ◽  
pp. 1850059 ◽  
Author(s):  
WESTON BARGER ◽  
MATTHEW LORIG

We assume a continuous-time price impact model similar to that of Almgren–Chriss but with the added assumption that the price impact parameters are stochastic processes modeled as correlated scalar Markov diffusions. In this setting, we develop trading strategies for a trader who desires to liquidate his inventory but faces price impact as a result of his trading. For a fixed trading horizon, we perform coefficient expansion on the Hamilton–Jacobi–Bellman (HJB) equation associated with the trader’s value function. The coefficient expansion yields a sequence of partial differential equations that we solve to give closed-form approximations to the value function and optimal liquidation strategy. We examine some special cases of the optimal liquidation problem and give financial interpretations of the approximate liquidation strategies in these cases. Finally, we provide numerical examples to demonstrate the effectiveness of the approximations.
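
In the constant-coefficient case the Almgren–Chriss liquidation problem has a well-known closed form, which is the natural zeroth-order term for a coefficient expansion of the kind described. All parameter values below are illustrative:

```python
import math

# Closed-form Almgren-Chriss inventory schedule with constant impact
# parameters: for risk aversion lam, volatility sigma and temporary
# impact eta, the optimal inventory is
#   x(t) = X * sinh(kappa * (T - t)) / sinh(kappa * T),
#   kappa = sqrt(lam * sigma**2 / eta).
# Parameter values are illustrative, not calibrated to anything.

def inventory(t, X=1000.0, T=1.0, lam=1e-3, sigma=0.3, eta=1e-4):
    kappa = math.sqrt(lam * sigma ** 2 / eta)
    return X * math.sinh(kappa * (T - t)) / math.sinh(kappa * T)

# inventory at t = 0, 0.1, ..., 1.0: starts at X, decays to 0 at T
path = [inventory(k / 10) for k in range(11)]
```

Larger kappa (higher risk aversion or volatility, lower impact cost) front-loads the selling; kappa -> 0 recovers the linear TWAP schedule.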


2018 ◽  
Vol 6 (1) ◽  
pp. 85-96
Author(s):  
Delei Sheng ◽  
Linfang Xing

Abstract An insurance package is a combination that bundles at least two different categories of insurance with different underwriting yield rates. In this paper, the optimal insurance-package and investment problem is investigated by maximizing the insurer's exponential utility of terminal wealth to find the optimal combination share and investment strategy. Using the methods of stochastic analysis and stochastic optimal control, the Hamilton-Jacobi-Bellman (HJB) equations are established, and the optimal strategy and the value function are obtained in closed form. A comparison with classical results shows that the insurance package can enhance the utility of terminal wealth while reducing the insurer's claim risk.
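
As a point of comparison, in the classical single-asset baseline with exponential (CARA) utility and zero interest rate, the HJB equation yields an optimal investment that is a constant monetary amount, independent of wealth. The numbers below are illustrative, not from the paper:

```python
# Classical CARA baseline: with exponential utility -exp(-eta * W),
# one risky asset with drift mu and volatility sigma, and risk-free
# rate r = 0, the HJB equation gives the constant optimal investment
#   pi* = (mu - r) / (eta * sigma**2)
# in monetary units, independent of current wealth.  Illustrative values:

mu, r, sigma, eta = 0.08, 0.0, 0.2, 2.0
pi_star = (mu - r) / (eta * sigma ** 2)
```

The wealth-independence of pi* is what makes closed-form value functions reachable in exponential-utility insurance models like the one above.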


2022 ◽  
Vol 2022 (1) ◽  
Author(s):  
Jun Moon

Abstract We consider the optimal control problem for stochastic differential equations (SDEs) with random coefficients under the recursive-type objective functional captured by the backward SDE (BSDE). Due to the random coefficients, the associated Hamilton–Jacobi–Bellman (HJB) equation is a class of second-order stochastic PDEs (SPDEs) driven by Brownian motion, which we call the stochastic HJB (SHJB) equation. In addition, as we adopt the recursive-type objective functional, the drift term of the SHJB equation depends on the second component of its solution. These two generalizations cause several technical intricacies that do not appear in the existing literature. We prove the dynamic programming principle (DPP) for the value function, for which, unlike the existing literature, we have to use the backward semigroup associated with the recursive-type objective functional. By the DPP, we are able to show the continuity of the value function. Using the Itô–Kunita formula, we prove the verification theorem, which constitutes a sufficient condition for optimality and characterizes the value function, provided that the smooth (classical) solution of the SHJB equation exists. In general, the smooth solution of the SHJB equation may not exist. Hence, we study the existence and uniqueness of the solution to the SHJB equation under two different weak solution concepts. First, we show, under appropriate assumptions, the existence and uniqueness of the weak solution via the Sobolev space technique, which requires converting the SHJB equation to a class of backward stochastic evolution equations. The second result is obtained under the notion of viscosity solutions, which is an extension of the classical one to the case of SPDEs. Using the DPP and the estimates of BSDEs, we prove that the value function is the viscosity solution to the SHJB equation.
For applications, we consider the linear-quadratic problem, the utility maximization problem, and the European option pricing problem. Specifically, different from the existing literature, each problem is formulated by the generalized recursive-type objective functional and is subject to random coefficients. By applying the theoretical results of this paper, we obtain the explicit optimal solution for each problem in terms of the solution of the corresponding SHJB equation.


2003 ◽  
Vol 05 (02) ◽  
pp. 167-189 ◽  
Author(s):  
Ştefan Mirică

We give complete proofs of the verification theorems announced recently by the author for the "pairs of relatively optimal feedback strategies" of an autonomous differential game. These concepts are intended to describe the possibly optimal solutions of a differential game, while the corresponding value functions are used as "instruments" for proving the relative optimality and also as "auxiliary characteristics" of the differential game. The six verification theorems in the paper are proved under different regularity assumptions, accompanied by suitable differential inequalities verified by the generalized derivatives, mainly of contingent type, of the value function.

