New results on the existence of open loop Nash equilibria in discrete time dynamic games via generalized Nash games

2018
Vol 89 (2)
pp. 157-172
Author(s):  
Mathew P. Abraham ◽  
Ankur A. Kulkarni


Author(s):  
João P. Hespanha

This chapter focuses on one-player discrete time dynamic games, that is, the optimal control of a discrete time dynamical system. It first considers solution methods for one-player dynamic games, which are simple optimizations, before discussing the discrete time cost-to-go. It shows that, regardless of the information structure (open loop, state feedback, or other), it is not possible to obtain a cost lower than the cost-to-go. Dynamic programming is then introduced as a computationally efficient recursive technique for computing the cost-to-go. After providing an overview of discrete time dynamic programming, the chapter explores the complexity of computing the cost-to-go at all stages, the use of MATLAB to solve finite one-player games, and linear quadratic dynamic games. It concludes with a practice exercise and the corresponding solution, along with an additional exercise.
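The backward recursion described here can be made concrete with a short example. The following Python snippet is only a minimal sketch (the chapter itself works in MATLAB); the function name cost_to_go and the toy dynamics f, stage cost g, and terminal cost q are illustrative assumptions, not taken from the book.

```python
# Minimal sketch of backward-induction dynamic programming for a one-player,
# finite-horizon, finite-state/finite-action problem (illustrative, not the
# chapter's MATLAB code).

def cost_to_go(states, actions, f, g, q, horizon):
    """Return V[k][x], the minimal cost achievable from state x at stage k,
    together with an optimal state-feedback policy."""
    V = [dict() for _ in range(horizon + 1)]
    policy = [dict() for _ in range(horizon)]
    for x in states:                      # terminal condition V_T(x) = q(x)
        V[horizon][x] = q(x)
    for k in range(horizon - 1, -1, -1):  # sweep backward in time
        for x in states:
            best_u, best_val = None, float("inf")
            for u in actions:
                val = g(k, x, u) + V[k + 1][f(k, x, u)]   # Bellman recursion
                if val < best_val:
                    best_u, best_val = u, val
            V[k][x] = best_val
            policy[k][x] = best_u         # optimal state-feedback action
    return V, policy

# Toy usage: drive a saturated scalar state toward 0 with actions {-1, 0, +1}.
states = list(range(-3, 4))
actions = (-1, 0, 1)
f = lambda k, x, u: max(-3, min(3, x + u))   # saturated dynamics
g = lambda k, x, u: x * x + u * u            # stage cost
q = lambda x: x * x                          # terminal cost
V, policy = cost_to_go(states, actions, f, g, q, horizon=4)
print(V[0][3], policy[0][3])
```

Because the recursion evaluates every state–action pair at each stage, its cost grows as the product of the horizon length, the number of states, and the number of actions, which is the complexity issue the chapter examines.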


2002
Vol 04 (03)
pp. 331-342
Author(s):  
AGNIESZKA WISZNIEWSKA-MATYSZKIEL

The purpose of this paper is to present some simple properties and applications of dynamic games with discrete time and a continuum of players. For such games, the relations between dynamic equilibria and families of static equilibria in the corresponding static games, as well as between dynamic and static best-response sets, are examined, and an equivalence theorem is proven. The existence of a dynamic equilibrium is also proven. These results are counterintuitive in that they differ from those obtained in similar games with a finite number of players. The theoretical results are illustrated with examples describing voting and the exploitation of ecological systems.
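The contrast with finite-player games can be sketched informally; the notation below is illustrative and not taken from the paper. With a continuum of players, a unilateral deviation has measure zero and therefore leaves every aggregate quantity, and hence the state trajectory, unchanged, which is the intuition behind linking dynamic equilibria to families of static equilibria.

```latex
% Illustrative sketch (notation not from the paper): players $\omega \in \Omega=[0,1]$
% with Lebesgue measure $\lambda$ choose actions $a_t(\omega)$, and payoffs depend on
% the aggregate $u_t=\int_\Omega a_t(\omega)\,d\lambda(\omega)$. A deviation by a single
% player $\omega_0$ changes $a_t$ only on the null set $\{\omega_0\}$, so
\[
u_t^{\mathrm{dev}}
  \;=\; \int_\Omega a_t^{\mathrm{dev}}(\omega)\, d\lambda(\omega)
  \;=\; \int_\Omega a_t(\omega)\, d\lambda(\omega)
  \;=\; u_t ,
\]
% and the state trajectory driven by $u_t$ is unaffected. Best responses can then be
% computed stage by stage against a fixed trajectory.
```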


Author(s):  
João P. Hespanha

This chapter focuses on the computation of the saddle-point equilibrium of a zero-sum discrete time dynamic game within the class of state-feedback policies. It begins by considering solution methods for two-player zero-sum dynamic games in discrete time, assuming a finite-horizon stage-additive cost that Player 1 wants to minimize and Player 2 wants to maximize, and taking into account a state-feedback information structure. The discussion then turns to discrete time dynamic programming, the use of MATLAB to solve zero-sum games with finite state spaces and finite action spaces, and discrete time linear quadratic dynamic games. The chapter concludes with a practice exercise that requires computing the cost-to-go for each state of the tic-tac-toe game, and the corresponding solution.
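A minimal Python sketch of backward induction for such finite zero-sum games is given below (the chapter itself works in MATLAB); the function zero_sum_cost_to_go and the toy dynamics and costs are illustrative assumptions. At each stage and state it computes the minimizer's security level, min over u of max over d of the stage cost plus the next cost-to-go; when this coincides with the corresponding max-min, it is the saddle-point value.

```python
# Minimal sketch of backward induction for a two-player zero-sum game with
# finite state and action spaces (illustrative, not the chapter's MATLAB code).
# Computes V_k(x) = min_u max_d [ g(k,x,u,d) + V_{k+1}(f(k,x,u,d)) ].

def zero_sum_cost_to_go(states, U, D, f, g, q, horizon):
    V = [dict() for _ in range(horizon + 1)]
    policy1 = [dict() for _ in range(horizon)]   # Player 1's state feedback
    for x in states:
        V[horizon][x] = q(x)                     # terminal cost
    for k in range(horizon - 1, -1, -1):         # sweep backward in time
        for x in states:
            best_u, best_val = None, float("inf")
            for u in U:
                # worst case over Player 2's action for this choice of u
                worst = max(g(k, x, u, d) + V[k + 1][f(k, x, u, d)] for d in D)
                if worst < best_val:
                    best_u, best_val = u, worst
            V[k][x] = best_val                   # minimizer's security level
            policy1[k][x] = best_u
    return V, policy1

# Toy usage: Player 1 pushes a saturated scalar state toward 0, Player 2 away.
states = list(range(-3, 4))
U = D = (-1, 0, 1)
f = lambda k, x, u, d: max(-3, min(3, x + u + d))
g = lambda k, x, u, d: x * x + u * u - d * d
q = lambda x: x * x
V, policy1 = zero_sum_cost_to_go(states, U, D, f, g, q, horizon=3)
print(V[0][2], policy1[0][2])
```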


Author(s):  
João P. Hespanha

This chapter focuses on one-player continuous time dynamic games, that is, the optimal control of a continuous time dynamical system. It begins by considering a one-player continuous time differential game in which the (only) player wants to minimize a cost using either an open-loop policy or a state-feedback policy. It then discusses the continuous time cost-to-go, with the following conclusion: regardless of the information structure considered (open loop, state feedback, or other), it is not possible to obtain a cost lower than the cost-to-go. It also explores continuous time dynamic programming, linear quadratic dynamic games, and differential games with variable termination time before concluding with a practice exercise and the corresponding solution.
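For reference, the continuous time dynamic programming condition mentioned here is the Hamilton–Jacobi–Bellman equation; the form below is standard and the notation is illustrative rather than copied from the chapter.

```latex
% Standard HJB equation for dynamics $\dot{x}=f(t,x,u)$, stage cost $g$, and
% terminal cost $q$ (illustrative notation):
\[
-\frac{\partial V}{\partial t}(t,x)
  \;=\; \min_{u}\Big[\, g(t,x,u) + \nabla_x V(t,x)^{\top} f(t,x,u) \Big],
\qquad V(T,x) = q(x).
\]
% In the linear quadratic case this reduces to a matrix Riccati differential
% equation for the quadratic cost-to-go, solved backward from the terminal time.
```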

