A policy iteration method for Mean Field Games

Author(s):  
Simone Cacace ◽  
Fabio Camilli ◽  
Alessandro Goffi

The policy iteration method is a classical algorithm for solving optimal control problems. In this paper, we introduce a policy iteration method for Mean Field Games systems, and we study the convergence of this procedure to a solution of the problem. We also introduce suitable discretizations to solve both stationary and evolutive problems numerically. We show the convergence of the policy iteration method for the discrete problem, and we study the performance of the proposed algorithm on some examples in dimensions one and two.
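
As a point of reference for the classical algorithm the abstract builds on, the following is a minimal sketch of policy iteration for a generic finite-state, discounted control problem; it illustrates the evaluate-then-improve loop only and is not the authors' Mean Field Games scheme (the sizes nS and nA, the discount factor, and the random cost and transition data are illustrative assumptions).

    import numpy as np

    rng = np.random.default_rng(0)
    nS, nA, gamma = 5, 3, 0.9                        # illustrative sizes and discount factor
    P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # P[s, a, :] = transition law from state s under action a
    c = rng.random((nS, nA))                         # running cost c(s, a)

    policy = np.zeros(nS, dtype=int)                 # start from an arbitrary policy
    for it in range(100):
        # Policy evaluation: solve the linear system (I - gamma * P_pi) v = c_pi
        # for the value function of the current policy.
        P_pi = P[np.arange(nS), policy]
        c_pi = c[np.arange(nS), policy]
        v = np.linalg.solve(np.eye(nS) - gamma * P_pi, c_pi)
        # Policy improvement: greedy (cost-minimizing) update against the evaluated value.
        q = c + gamma * P @ v                        # q[s, a] = c(s, a) + gamma * sum_s' P[s, a, s'] v[s']
        new_policy = q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy

    print("iterations:", it + 1)
    print("policy:", policy)
    print("value function:", np.round(v, 4))

Each pass solves a linear system for the value of the current policy and then improves the policy greedily; the loop stops when the policy is stable.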

2020 ◽  
Vol 26 ◽  
pp. 41
Author(s):  
Tianxiao Wang

This article is concerned with linear quadratic optimal control problems for mean-field stochastic differential equations (MF-SDEs) with deterministic coefficients. To treat the time inconsistency of the optimal control problems, linear closed-loop equilibrium strategies are introduced and characterized by a variational approach. Our methodology avoids the delicate convergence procedures of Yong [Trans. Amer. Math. Soc. 369 (2017) 5467–5523]. When the MF-SDE reduces to an SDE, our Riccati system coincides with its analogue in Yong [Trans. Amer. Math. Soc. 369 (2017) 5467–5523]. In general, however, the two systems differ due to the conditional mean-field terms in the MF-SDE. Finally, detailed comparisons with pre-committed optimal strategies and open-loop equilibrium strategies are given.
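
For orientation, a generic mean-field LQ formulation of the kind the abstract refers to can be written as below; the coefficient names (A, \bar A, B, \bar B, C, D, Q, R, G and their barred counterparts) are illustrative and do not reproduce the paper's exact system, which additionally involves conditional mean-field terms.

    \begin{aligned}
    dX(s) &= \big(A X(s) + \bar A\,\mathbb{E}[X(s)] + B u(s) + \bar B\,\mathbb{E}[u(s)]\big)\,ds \\
          &\quad + \big(C X(s) + \bar C\,\mathbb{E}[X(s)] + D u(s) + \bar D\,\mathbb{E}[u(s)]\big)\,dW(s),
          \qquad X(t) = x, \\[2pt]
    J(t,x;u) &= \mathbb{E}\int_t^T \Big(\langle Q X(s), X(s)\rangle + \langle \bar Q\,\mathbb{E}[X(s)], \mathbb{E}[X(s)]\rangle
          + \langle R u(s), u(s)\rangle + \langle \bar R\,\mathbb{E}[u(s)], \mathbb{E}[u(s)]\rangle\Big)\,ds \\
          &\quad + \mathbb{E}\,\langle G X(T), X(T)\rangle + \langle \bar G\,\mathbb{E}[X(T)], \mathbb{E}[X(T)]\rangle .
    \end{aligned}

The barred coefficients weight the expectations \mathbb{E}[X] and \mathbb{E}[u]; setting them to zero recovers a standard (non-mean-field) LQ problem.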


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Hui Min ◽  
Ying Peng ◽  
Yongli Qin

We discuss a new type of fully coupled forward-backward stochastic differential equations (FBSDEs) whose coefficients depend on the states of the solution processes as well as on their expected values; we call them fully coupled mean-field forward-backward stochastic differential equations (mean-field FBSDEs). We first prove an existence and uniqueness theorem for such mean-field FBSDEs under certain monotonicity conditions and show the continuity of the solutions with respect to the parameters. We then discuss stochastic optimal control problems for mean-field FBSDEs: the corresponding stochastic maximum principles are derived, and the related mean-field linear quadratic optimal control problems are also discussed.
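
For readers unfamiliar with the terminology, a fully coupled mean-field FBSDE of the type described can be sketched in the following generic form, in which the forward and backward equations are coupled through the triple (X, Y, Z) and its expectations (the coefficient symbols b, \sigma, f, \Phi are illustrative and not the paper's notation):

    \begin{aligned}
    dX_t &= b\big(t, X_t, Y_t, Z_t, \mathbb{E}[X_t], \mathbb{E}[Y_t]\big)\,dt
            + \sigma\big(t, X_t, Y_t, Z_t, \mathbb{E}[X_t], \mathbb{E}[Y_t]\big)\,dW_t, \\
    dY_t &= -f\big(t, X_t, Y_t, Z_t, \mathbb{E}[X_t], \mathbb{E}[Y_t]\big)\,dt + Z_t\,dW_t, \\
    X_0 &= x_0, \qquad Y_T = \Phi\big(X_T, \mathbb{E}[X_T]\big),
    \end{aligned}

where W is a Brownian motion; "fully coupled" means that the forward drift and diffusion also depend on the backward components (Y, Z).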


2020 ◽  
Vol 54 (5) ◽  
pp. 1419-1435
Author(s):  
Abderrahmane Akkouche ◽  
Mohamed Aidene

In this paper, the Picard iteration method is proposed to obtain an approximate analytical solution for linear and nonlinear optimal control problems with a quadratic objective functional. It consists of deriving the necessary optimality conditions using Pontryagin's minimum principle, which results in a two-point boundary-value problem (TPBVP). By applying the Picard iteration method to the resulting TPBVP, the optimal control law and the optimal trajectory are obtained in the form of a truncated series. The efficiency of the proposed technique for handling optimal control problems is illustrated by four numerical examples, and comparisons with other methods are made.
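
To make the procedure concrete, here is a minimal sketch of Picard iteration applied to the TPBVP of a toy linear quadratic problem (not one of the paper's four examples); the model, grid, and tolerance are illustrative assumptions.

    import numpy as np

    # Toy problem: minimize J(u) = int_0^T (x^2 + u^2) dt subject to x' = u, x(0) = x0.
    # Pontryagin's minimum principle gives u = -p/2 and the TPBVP
    #   x' = -p/2,  x(0) = x0;     p' = -2x,  p(T) = 0,
    # whose exact solution is x(t) = x0 * cosh(T - t) / cosh(T).

    x0, T, n = 1.0, 1.0, 2001
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]

    def cumtrapz0(f):
        # Cumulative trapezoidal integral of f from 0 up to each grid point.
        out = np.zeros_like(f)
        out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)
        return out

    x = np.full(n, x0)   # initial guess for the state
    p = np.zeros(n)      # initial guess for the costate

    for k in range(200):
        x_new = x0 - 0.5 * cumtrapz0(p)                   # x(t) = x0 - (1/2) int_0^t p(s) ds
        p_new = 2.0 * (cumtrapz0(x)[-1] - cumtrapz0(x))   # p(t) = 2 int_t^T x(s) ds
        err = max(np.max(np.abs(x_new - x)), np.max(np.abs(p_new - p)))
        x, p = x_new, p_new
        if err < 1e-10:
            break

    u = -0.5 * p                                          # control recovered from the stationarity condition
    x_exact = x0 * np.cosh(T - t) / np.cosh(T)
    print("Picard iterations:", k + 1)
    print("max state error vs. exact solution:", np.max(np.abs(x - x_exact)))

The iteration rewrites the TPBVP in integral form and repeatedly substitutes the previous state and costate; for this short horizon the map is a contraction, so the successive approximations converge.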

