Optimal control problem for an equation of filtration with memory

Author(s):  
Mykola Krasnoshchok

Fractional diffusion models are generalizations of diffusion models with integer-order derivatives. There has been great interest in the study of these models because of their appearance in modeling various applications in the physical sciences, medicine and biology. We consider a filtration model with a nonclassical Darcy constitutive equation, which states that the fluid flux is proportional not only to the pressure gradient but also to its Riemann-Liouville fractional derivative. This model was proposed by M. Caputo and allows the permeability to vary with time depending on the history of the pressure gradient. Such phenomena, which we represent mathematically with memory formalisms, have often been observed qualitatively in oil extraction, in geothermal areas and in the laboratory. Similar problems arise in the study of the flow of a generalized second grade fluid. Existence results for initial and boundary value problems for partial fractional differential equations have been obtained by E. Bazhlekova, K. Diethelm, J. Janno, A.N. Kochubei, G.P. Lopushans'ka, R. Zacher and others. Fractional optimal control problems have attracted, among others, R. Dorville, G.M. Mophou, V.S. Valmorin, Y. Zhou and L. Peng, and many techniques have been developed for solving such problems. We consider the problem of minimizing the standard cost functional $J(u)$, defined in terms of the generalized solution of the initial-boundary value problem for the time-fractional differential equation under consideration. We consider control via the right-hand side $u$ and observation on the whole domain in the $L_2$ norm, with a Tikhonov regularization term. First we introduce function spaces and establish some auxiliary properties of fractional integrals and fractional derivatives. Second, we prove an existence and uniqueness result for the state problem; recall that we deal with an equation of filtration with memory. Our objectives are: a) to prove that there exists a minimizer $u$ of the cost functional $J$; b) to obtain necessary and sufficient conditions for $u$ to be an extremum; c) to obtain a constructive algorithm, amenable to computation, for approximating the optimal control. The unique solvability of the state and adjoint problems is established with the help of the Galerkin method and corresponding a priori estimates. We then prove that the cost functional is coercive, convex and weakly lower semicontinuous. We show the existence of an optimal solution by proving the existence of a weakly convergent minimizing sequence satisfying the state equation; uniqueness follows directly from the strong convexity of the cost functional. This gives item a). Item b) is obtained from the first-order optimality condition. We also justify the conjugate gradient method for computing the optimal control function. Here we use results of R. Winther, which allow us to apply the conjugate gradient method in our setting and to prove its superlinear convergence.
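As a schematic illustration of this setting (the notation below, including the coefficients $k_0,k_1$, the desired profile $p_d$ and the regularization weight $\gamma$, is ours and not taken from the paper), the state equation and cost functional are of the typical form
$$ \partial_t p-\operatorname{div}\bigl((k_0+k_1 D_t^{\alpha})\nabla p\bigr)=u \quad\text{in } Q_T=\Omega\times(0,T),\qquad \alpha\in(0,1), $$
$$ J(u)=\tfrac12\,\|p(u)-p_d\|^2_{L_2(Q_T)}+\tfrac{\gamma}{2}\,\|u\|^2_{L_2(Q_T)}\ \longrightarrow\ \min_u, $$
where $D_t^{\alpha}$ denotes the Riemann-Liouville fractional derivative in time; the $\gamma$-term is the Tikhonov regularizer that makes $J$ strongly convex in $u$.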

1974 ◽  
Vol 6 (04) ◽  
pp. 622-635 ◽  
Author(s):  
R. Morton ◽  
K. H. Wickwire

A control scheme for the immunisation of susceptibles in the Kermack-McKendrick epidemic model for a closed population is proposed. The bounded control appears linearly in both the dynamics and the integral cost functional, and any optimal policies are of the “bang-bang” type. The approach uses Dynamic Programming and Pontryagin's Maximum Principle and allows one, for certain values of the cost and removal rates, to apply necessary and sufficient conditions for optimality and show that a one-switch candidate is the optimal control. In the remaining cases we are still able to show that an optimal control, if it exists, has at most one switch.
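For orientation, a schematic version of such a controlled Kermack-McKendrick model (our own notation, not the paper's) is
$$ \dot S=-\beta SI-u,\qquad \dot I=\beta SI-\gamma I,\qquad 0\le u\le\bar u, $$
$$ J(u)=\int_0^T\bigl(I(t)+c\,u(t)\bigr)\,dt, $$
where $u$ is the immunisation rate. Since $u$ enters both the dynamics and the integrand linearly, the Hamiltonian is linear in $u$, so the Maximum Principle yields controls that take only the extreme values $u=0$ and $u=\bar u$, i.e. bang-bang policies.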


2019 ◽  
Vol 25 ◽  
pp. 17 ◽  
Author(s):  
Qingmeng Wei ◽  
Jiongmin Yong ◽  
Zhiyong Yu

An optimal control problem is considered for linear stochastic differential equations with quadratic cost functional. The coefficients of the state equation and the weights in the cost functional are bounded operators on the spaces of square integrable random variables. The main motivation of our study is linear quadratic (LQ, for short) optimal control problems for mean-field stochastic differential equations. Open-loop solvability of the problem is characterized as the solvability of a system of linear coupled forward-backward stochastic differential equations (FBSDE, for short) with operator coefficients, together with a convexity condition for the cost functional. Under proper conditions, the well-posedness of such an FBSDE, which leads to the existence of an open-loop optimal control, is established. Finally, as applications of our main results, a general mean-field LQ control problem and a concrete mean-variance portfolio selection problem in the open-loop case are solved.
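Schematically, in our own notation, the LQ problem with operator coefficients reads
$$ dX(s)=\bigl(\mathcal A X(s)+\mathcal B u(s)\bigr)\,ds+\bigl(\mathcal C X(s)+\mathcal D u(s)\bigr)\,dW(s),\qquad X(t)=\xi, $$
$$ J(t,\xi;u)=\mathbb E\Bigl[\int_t^T\bigl(\langle\mathcal Q X(s),X(s)\rangle+\langle\mathcal R u(s),u(s)\rangle\bigr)\,ds+\langle\mathcal G X(T),X(T)\rangle\Bigr], $$
where $\mathcal A,\dots,\mathcal G$ are bounded operators on spaces of square-integrable random variables; choosing, for instance, $\mathcal A X=AX+\bar A\,\mathbb E[X]$ shows how mean-field LQ problems fit into this operator framework.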


2020 ◽  
Vol 13 (03) ◽  
pp. 2050008
Author(s):  
Hossein Kheiri ◽  
Mohsen Jafari

In this paper, we propose a fractional-order, two-patch model of a tuberculosis (TB) epidemic in which susceptible, slow latent, fast latent and infectious individuals can travel freely between the patches, whereas infected individuals under treatment cannot, for medical reasons. We obtain the basic reproduction number $R_0$ for the model and extend the classical LaSalle invariance principle to fractional differential equations. We show that if $R_0<1$, the disease-free equilibrium (DFE) is locally and globally asymptotically stable. If $R_0>1$, we obtain sufficient conditions under which the endemic equilibrium is unique and globally asymptotically stable. We extend the model by including time-dependent controls (effective treatment controls in both patches and controls of screening on travel of infectious individuals between patches), and formulate a fractional optimal control problem to reduce the spread of the disease. The numerical results show that the use of all controls has the most impact on disease control and decreases the size of all infected compartments, but increases the size of the susceptible compartment in both patches. We also investigate the impact of the fractional derivative order on the values of the controls. The results show that the maximum levels of the effective treatment controls in both patches increase when the order is reduced from 1, while the maximum level of the travel screening control of infectious individuals from patch 2 to patch 1 increases as the order approaches 1.
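In schematic form (our simplified notation, suppressing the patch and compartment structure), the fractional optimal control problem is of the type
$$ \min_{u}\ J(u)=\int_0^T\bigl(A_1 I(t)+A_2\,u(t)^2\bigr)\,dt\quad\text{subject to}\quad {}^{C}\!D^{\theta}_t\,x(t)=f\bigl(x(t),u(t)\bigr),\quad x(0)=x_0, $$
where ${}^{C}\!D^{\theta}_t$ is a Caputo derivative of order $\theta\in(0,1]$, $x$ collects the compartments of both patches, $I$ the infected ones, and $u$ the bounded treatment and screening controls; the weights $A_1,A_2$ are illustrative.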


2019 ◽  
Vol 25 ◽  
pp. 64 ◽  
Author(s):  
Hongwei Mei ◽  
Jiongmin Yong

An optimal control problem is considered for a stochastic differential equation containing a state-dependent regime switching, with a recursive cost functional. Due to the non-exponential discounting in the cost functional, the problem is time-inconsistent in general. Therefore, instead of finding a global optimal control (which is not possible), we look for a time-consistent (approximately) locally optimal equilibrium strategy. Such a strategy can be represented through the solution to a system of partial differential equations, called an equilibrium Hamilton–Jacobi–Bellman (HJB) equation, which is constructed via a sequence of multi-person differential games. A verification theorem is proved and, under proper conditions, the well-posedness of the equilibrium HJB equation is established as well.
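To indicate where the time-inconsistency comes from: a simple (non-recursive) special case of such a cost functional can be written, in our own notation, as
$$ J(t,x;u)=\mathbb E\Bigl[\int_t^T\lambda(t,s)\,g\bigl(s,X(s),u(s)\bigr)\,ds+\lambda(t,T)\,h\bigl(X(T)\bigr)\Bigr], $$
with a discount factor $\lambda(t,s)$ that is not of the exponential form $e^{-\delta(s-t)}$. Because the weight attached to a future time $s$ changes as the initial time $t$ moves forward, a control that is optimal at $t$ is generally no longer optimal when re-evaluated later, which is precisely the time-inconsistency that the equilibrium strategy is designed to handle.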


1999 ◽  
Vol 09 (01) ◽  
pp. 45-68 ◽  
Author(s):  
MIN LIANG

We consider the problem of optimal control of a wave equation. A bilinear control is used to bring the state solutions close to a desired profile under a quadratic cost of control. We establish the existence of solutions of the underlying initial boundary-value problem and of an optimal control that minimizes the cost functional. We derive an optimality system by formally differentiating the cost functional with respect to the control and evaluating the result at an optimal control. We establish existence and uniqueness of the solution of the optimality system and thus determine the unique optimal control in terms of the solution of the optimality system.
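Schematically, in our own notation (the precise way the bilinear control enters the equation is specified in the paper), the problem is of the form
$$ y_{tt}-\Delta y+h(x,t)\,y=f\ \ \text{in } \Omega\times(0,T),\qquad J(h)=\tfrac12\,\|y(h)-z_d\|^2_{L_2}+\tfrac{\beta}{2}\,\|h\|^2_{L_2}\ \longrightarrow\ \min_h, $$
so the control $h$ multiplies the state (hence “bilinear”), $z_d$ is the desired profile and the $\beta$-term is the quadratic cost of control.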


1988 ◽  
Vol 110 (4) ◽  
pp. 433-436
Author(s):  
Tsu-Tian Lee ◽  
Shiow-Harn Lee

This paper presents the solution of the linear discrete optimal control problem of assigning the closed-loop poles to a circular region while simultaneously accommodating deterministic disturbances. It is shown that, by suitable manipulations, the problem can be reduced to a standard discrete quadratic regulator problem that simultaneously: 1) places all closed-loop poles inside a circle centered at (β, 0) with radius α, where |β| + α ≤ 1, 2) minimizes the cost functional, and 3) accommodates external input disturbances.
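As a minimal illustration of such a reduction (our own sketch using SciPy, not necessarily the manipulations used in the paper), the circle constraint can be handled by shifting and scaling the system matrices and then solving a standard discrete-time Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_in_circle(A, B, Q, R, beta, alpha):
    # Sketch (not necessarily the paper's exact construction): compute a state
    # feedback u = -K x whose closed-loop poles lie inside the circle of radius
    # alpha centered at (beta, 0), by solving a standard discrete LQR problem
    # for the shifted-and-scaled pair (A - beta*I)/alpha, B/alpha.
    n = A.shape[0]
    At = (A - beta * np.eye(n)) / alpha
    Bt = B / alpha
    P = solve_discrete_are(At, Bt, Q, R)                     # Riccati solution
    K = np.linalg.solve(R + Bt.T @ P @ Bt, Bt.T @ P @ At)    # LQR gain
    # (A - B K - beta*I)/alpha = At - Bt K, so Schur stability of At - Bt K
    # places every pole of A - B K inside the prescribed circle.
    return K

# Toy check: the closed-loop poles should lie within 0.5 of the point 0.2.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = dlqr_in_circle(A, B, np.eye(2), np.array([[1.0]]), beta=0.2, alpha=0.5)
print(np.abs(np.linalg.eigvals(A - B @ K) - 0.2))            # each entry < 0.5
```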


Author(s):  
Jiongmin Yong ◽  
Hanxiao Wang

An optimal control problem is considered for a stochastic differential equation with the cost functional determined by a backward stochastic Volterra integral equation (BSVIE, for short). This kind of cost functional covers general discounting (both exponential and non-exponential) with a recursive feature. It is known that such a problem is time-inconsistent in general. Therefore, instead of finding a global optimal control, we look for a time-consistent, locally near-optimal equilibrium strategy. Using the idea of multi-person differential games, a family of approximate equilibrium strategies is constructed, associated with partitions of the time interval. By sending the mesh size of the time-interval partition to zero, an equilibrium Hamilton--Jacobi--Bellman (HJB, for short) equation is derived, through which the equilibrium value function and an equilibrium strategy are obtained. Under certain conditions, a verification theorem is proved and the well-posedness of the equilibrium HJB equation is established. As a sort of Feynman-Kac formula for the equilibrium HJB equation, a new class of BSVIEs (containing the diagonal values $Z(r,r)$ of $Z(\cdot\,,\cdot)$) is naturally introduced and the well-posedness of such equations is briefly discussed.
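For reference, a BSVIE of the kind mentioned above takes, in our schematic notation, the form
$$ Y(t)=\psi(t)+\int_t^T g\bigl(t,s,Y(s),Z(t,s)\bigr)\,ds-\int_t^T Z(t,s)\,dW(s),\qquad t\in[0,T], $$
with the recursive cost functional given by (a component of) $Y$; the dependence of the free term $\psi$ and the generator $g$ on the first time variable $t$ is what encodes general, possibly non-exponential discounting and leads to time-inconsistency. The new class of equations referred to at the end additionally allows the generator to depend on the diagonal values $Z(s,s)$.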


2021 ◽  
pp. 2009-2021
Author(s):  
Lamyaa H Ali ◽  
Jamil A. Al-Hawasy

The paper is concerned with stating and proving a solvability theorem for a unique state vector solution (SVS) of a triple nonlinear hyperbolic boundary value problem (TNLHBVP), utilizing the Galerkin method (GAM) with the Aubin theorem (AUTH), when the boundary control vector (BCV) is given. A solvability theorem for a boundary optimal control vector (BOCV) with equality and inequality state vector constraints (EINESVC) is proved. We study the solvability theorem for a unique solution of the adjoint triple boundary value problem (ATHBVP) associated with the TNLHBVP. The directional derivative (DRD) of the Hamiltonian (DRDH) is deduced. Finally, the necessary conditions (NCOs) and the sufficient conditions (SCOs), together denoted NSCOs, for the optimality (OP) of the state-constrained problem (SCP) are stated and proved.

