A class of linear quadratic dynamic optimization problems with state dependent constraints

2019 ◽  
Vol 91 (2) ◽  
pp. 325-355 ◽  
Author(s):  
Rajani Singh ◽  
Agnieszka Wiszniewska-Matyszkiel

Abstract In this paper, we analyse a wide class of discrete-time, one-dimensional dynamic optimization problems with strictly concave current payoffs, a linear state-dependent constraint on the control parameter, and non-negativity constraints on the state variable and the control. This model is well suited to economic problems such as the extraction of a renewable resource (e.g. a fishery or forest harvesting). The class of sub-problems considered encompasses a linear quadratic optimal control problem as well as models with a maximal carrying capacity of the environment (saturation). The problem is also interesting from a theoretical point of view: although it seems simple in its linear quadratic form, computing the optimal control is nontrivial because of the constraints, and the solutions have a complicated form. We consider both the infinite-time-horizon problem and its finite-horizon truncations.
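
As a rough illustration of the finite-horizon truncations mentioned in the abstract, the following Python sketch solves a toy version of such a problem by backward induction on a state grid. The discount factor, the dynamics x' = a*x - c, the concave payoff, and the state-dependent constraint 0 <= c <= x are all illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch (all symbols and values are illustrative assumptions,
# not the paper's exact model): a finite-horizon truncation of a
# one-dimensional extraction problem with the control constrained by
# the current state, solved by backward induction on a grid.
import numpy as np

beta = 0.9                          # assumed discount factor
T = 20                              # truncation length (finite horizon)
a = 1.2                             # assumed growth rate: x' = a*x - c
xs = np.linspace(0.0, 10.0, 201)    # grid for the resource stock x >= 0

def payoff(c):
    # strictly concave current payoff in the extraction c (assumed form)
    return c - 0.5 * c**2

V = np.zeros_like(xs)               # terminal value V_T = 0
for _ in range(T):
    V_new = np.empty_like(xs)
    for i, x in enumerate(xs):
        cs = np.linspace(0.0, x, 101)        # constraint 0 <= c <= x
        x_next = np.clip(a * x - cs, xs[0], xs[-1])
        vals = payoff(cs) + beta * np.interp(x_next, xs, V)
        V_new[i] = vals.max()                # Bellman update at state x
    V = V_new

print("approximate truncated value at x0 = 5:", np.interp(5.0, xs, V))
```

The grid-and-interpolation scheme is only a numerical stand-in; the paper derives the optimal control analytically, which is precisely where the constraints make the solution's form complicated.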

2020 ◽  
Vol 26 ◽  
pp. 41
Author(s):  
Tianxiao Wang

This article is concerned with linear quadratic optimal control problems for mean-field stochastic differential equations (MF-SDEs) with deterministic coefficients. To treat the time inconsistency of the optimal control problems, linear closed-loop equilibrium strategies are introduced and characterized via a variational approach. Our methodology avoids the delicate convergence procedures of Yong [Trans. Amer. Math. Soc. 369 (2017) 5467–5523]. When the MF-SDE reduces to an SDE, our Riccati system coincides with its analogue in Yong [Trans. Amer. Math. Soc. 369 (2017) 5467–5523]; in general, however, the two systems differ because of the conditional mean-field terms in the MF-SDE. Finally, detailed comparisons with pre-committed optimal strategies and open-loop equilibrium strategies are given.
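
For orientation, a generic mean-field LQ problem of the kind the abstract describes can be written schematically as below. The coefficients and cost weights are placeholders, and the expectations shown are unconditional, whereas the paper works with conditional mean-field terms; this is only a sketch of the problem class, not the paper's formulation.

```latex
% Schematic mean-field LQ problem (generic, unconditional form):
\begin{aligned}
dX(t) &= \bigl(A X(t) + \bar{A}\,\mathbb{E}[X(t)] + B u(t) + \bar{B}\,\mathbb{E}[u(t)]\bigr)\,dt\\
      &\quad + \bigl(C X(t) + \bar{C}\,\mathbb{E}[X(t)] + D u(t) + \bar{D}\,\mathbb{E}[u(t)]\bigr)\,dW(t),\\
J(u)  &= \tfrac12\,\mathbb{E}\int_0^T \Bigl(\langle Q X(t), X(t)\rangle
         + \langle \bar{Q}\,\mathbb{E}[X(t)], \mathbb{E}[X(t)]\rangle\\
      &\qquad\qquad + \langle R u(t), u(t)\rangle
         + \langle \bar{R}\,\mathbb{E}[u(t)], \mathbb{E}[u(t)]\rangle\Bigr)\,dt.
\end{aligned}
```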


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

Abstract In this paper, we deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we suppose that the knowledge an agent has of the current system is represented by a probability distribution $\pi$ on the space of matrices. Furthermore, we assume that this probability measure is suitably updated to take into account the increased experience the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we show that the optimal control obtained by solving the "average" linear quadratic optimal control problem with respect to a certain $\pi$ converges to the optimal control of the linear quadratic problem governed by the actual underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we present a numerical test that confirms the theoretical results.
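
The following Python sketch mimics the flavor of this convergence result numerically: it solves an LQR problem for dynamics averaged under a distribution of shrinking variance and compares the resulting feedback gain with the gain of the true system. The matrices, the Gaussian sampling, and the use of the mean matrix as a crude stand-in for the "average" LQ problem (which in the paper averages the problem over $\pi$, not the matrix) are all assumptions for illustration, not the authors' construction.

```python
# Illustrative sketch (not the paper's construction): as the
# distribution over system matrices concentrates on the true
# dynamics, the LQR gain computed from it approaches the true gain.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
A_true = np.array([[0.0, 1.0], [-1.0, -0.5]])   # assumed true dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def lqr_gain(A):
    # Solve the continuous-time algebraic Riccati equation and
    # return the optimal feedback gain K = R^{-1} B^T P.
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_true = lqr_gain(A_true)
for sigma in [1.0, 0.3, 0.1, 0.01]:   # shrinking uncertainty = more experience
    samples = [A_true + sigma * rng.standard_normal((2, 2)) for _ in range(200)]
    A_mean = np.mean(samples, axis=0)  # crude stand-in for the "average" problem
    K_pi = lqr_gain(A_mean)
    print(f"sigma={sigma:5.2f}  ||K_pi - K_true|| = "
          f"{np.linalg.norm(K_pi - K_true):.4f}")
```

The printed gap shrinks with sigma, which is the qualitative behavior the theorem guarantees for the properly averaged problem.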


2012 ◽  
Vol 12 (10) ◽  
pp. 3176-3192 ◽  
Author(s):  
Ignacio G. del Amo ◽  
David A. Pelta ◽  
Juan R. González ◽  
Antonio D. Masegosa
