DISCRETE TIME DYNAMIC GAMES WITH A CONTINUUM OF PLAYERS I: DECOMPOSABLE GAMES

2002, Vol. 04 (03), pp. 331-342
Author(s): AGNIESZKA WISZNIEWSKA-MATYSZKIEL

The purpose of this paper is to present some simple properties and applications of dynamic games with discrete time and a continuum of players. For such games, relations between dynamic equilibria and families of static equilibria in the corresponding static games, as well as between dynamic and static best response sets, are examined, and an equivalence theorem is proven. The existence of a dynamic equilibrium is also proven. These results are counterintuitive since they differ from results that can be obtained in similar games with a finite number of players. The theoretical results are illustrated with examples describing voting and exploitation of ecological systems.
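One way to read the equivalence result (a heuristic sketch only; the notation and the additively separable payoff below are illustrative assumptions, not taken from the paper): with a non-atomic space of players, an individual deviation has measure zero and therefore cannot change the state trajectory or the aggregate of decisions. If player $i$'s payoff has the form

\[
\Pi_i \;=\; \sum_{t} g_i\bigl(a_i(t),\, u(t),\, x(t)\bigr),
\]

where $a_i(t)$ is the player's own decision, $u(t)$ an aggregate of all players' decisions and $x(t)$ the state, then the player takes the trajectories of $u$ and $x$ as given, and maximizing $\Pi_i$ reduces to maximizing each term separately. Dynamic best responses and equilibria then coincide with families of static ones, whereas with finitely many players each deviation moves $x(t)$ and this reduction fails.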

2003, Vol. 05 (01), pp. 27-40
Author(s): AGNIESZKA WISZNIEWSKA-MATYSZKIEL

In this paper we consider dynamic games with a continuum of players, which can constitute a framework to model large financial markets. They are called semi-decomposable games. In semi-decomposable games the system changes in response to a (possibly distorted) aggregate of players' decisions, and the payoff is a sum of discounted semi-instantaneous payoffs. The purpose of this paper is to present some simple properties and applications of these games. The main result is an equivalence between dynamic equilibria and families of static equilibria in the corresponding static perfect-foresight games, as well as between dynamic and static best response sets. The existence of a dynamic equilibrium is also proven. These results are counterintuitive since they differ from results that can be obtained in games with a finite number of players. The theoretical results are illustrated with examples describing large financial markets: markets for futures and stock exchanges.
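A rough formalization of the structure just described (the notation is illustrative, not the paper's): the state responds to a possibly distorted aggregate $U(t)$ of the players' decisions, and each player's payoff is a discounted sum of semi-instantaneous payoffs, e.g.

\[
x(t+1) \;=\; \phi\bigl(x(t),\, U(t)\bigr), \qquad
\Pi_i \;=\; \sum_{t=0}^{\infty} \beta^{t}\, g_i\bigl(a_i(t),\, U(t),\, x(t)\bigr), \qquad 0<\beta<1 .
\]

Under perfect foresight each negligible player treats the trajectories of $x$ and $U$ as given, which is what allows dynamic equilibria to be matched with families of equilibria of the corresponding static games.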


Author(s): João P. Hespanha

This chapter focuses on the computation of the saddle-point equilibrium of a zero-sum discrete time dynamic game in a state-feedback policy. It begins by considering solution methods for two-player zero-sum dynamic games in discrete time, assuming a finite-horizon stage-additive cost that Player 1 wants to minimize and Player 2 wants to maximize, and taking into account a state-feedback information structure. The discussion then turns to discrete time dynamic programming, the use of MATLAB to solve zero-sum games with finite state spaces and finite action spaces, and discrete time linear quadratic dynamic games. The chapter concludes with a practice exercise that requires computing the cost-to-go for each state of the tic-tac-toe game, and the corresponding solution.
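The backward recursion the chapter describes is straightforward to prototype. Below is a minimal Python sketch, not the chapter's MATLAB code; the function name, arguments, and the restriction to pure state-feedback policies are assumptions made here for illustration. It sweeps the stages backwards and, at each state, tabulates the stage matrix game and takes Player 1's security (min-max) action; whenever every such stage game has a pure saddle point, the resulting policies form a saddle-point equilibrium and V is the game's cost-to-go.

```python
# Minimal sketch: backward dynamic programming for a finite two-player
# zero-sum dynamic game under state feedback (illustrative, not from the book).
import numpy as np

def solve_zero_sum_dp(n_states, actions1, actions2, f, g, horizon):
    """Compute the cost-to-go V[k][x] and pure state-feedback policies.

    f(x, u, d) -> next state index, g(x, u, d) -> stage cost;
    Player 1 (actions1) minimizes, Player 2 (actions2) maximizes.
    Policies are stored as indices into actions1 / actions2. The values
    returned are Player 1's security levels; they equal the saddle-point
    cost-to-go whenever each stage game admits a pure saddle point.
    """
    V = np.zeros((horizon + 1, n_states))            # zero terminal cost
    policy1 = np.zeros((horizon, n_states), dtype=int)
    policy2 = np.zeros((horizon, n_states), dtype=int)
    for k in range(horizon - 1, -1, -1):             # backwards in time
        for x in range(n_states):
            # Stage-game matrix: rows = Player 1 actions, cols = Player 2 actions.
            A = np.array([[g(x, u, d) + V[k + 1, f(x, u, d)]
                           for d in actions2] for u in actions1])
            u_star = int(np.argmin(A.max(axis=1)))   # min over rows of the row-wise max
            d_star = int(np.argmax(A[u_star]))       # Player 2's best response
            V[k, x] = A[u_star, d_star]
            policy1[k, x], policy2[k, x] = u_star, d_star
    return V, policy1, policy2
```

If some stage game has no pure saddle point, min-max and max-min differ there, and mixed stage policies are needed to obtain a saddle-point equilibrium.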


Author(s): João P. Hespanha

This chapter focuses on one-player discrete time dynamic games, that is, the optimal control of a discrete time dynamical system. It first considers solution methods for one-player dynamic games, which are simple optimizations, before discussing discrete time cost-to-go. It shows that, regardless of the information structure (open loop, state feedback or other), it is not possible to obtain a cost lower than the cost-to-go. A computationally efficient recursive technique that can be used to compute the cost-to-go is dynamic programming. After providing an overview of discrete time dynamic programming, the chapter explores the complexity of computing the cost-to-go at all stages, the use of MATLAB to solve finite one-player games, and linear quadratic dynamic games. It concludes with a practice exercise and the corresponding solution, along with an additional exercise.
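For the linear quadratic case mentioned at the end of the chapter, the cost-to-go takes the quadratic form V_k(x) = xᵀ P_k x, and P_k can be computed by a backward Riccati recursion. The sketch below is in Python rather than the chapter's MATLAB, with assumed names and shapes, for the system x_{k+1} = A x_k + B u_k with stage cost x_kᵀ Q x_k + u_kᵀ R u_k and terminal cost x_Nᵀ P_final x_N.

```python
# Minimal sketch: finite-horizon discrete-time LQR via a backward Riccati
# recursion (illustrative names and shapes, not the book's MATLAB code).
import numpy as np

def lqr_cost_to_go(A, B, Q, R, P_final, horizon):
    """Return cost-to-go matrices P[k] (V_k(x) = x' P[k] x) and gains K[k]
    such that the optimal state-feedback policy is u_k = -K[k] @ x_k."""
    P = [None] * (horizon + 1)
    K = [None] * horizon
    P[horizon] = P_final
    for k in range(horizon - 1, -1, -1):              # backwards in time
        S = R + B.T @ P[k + 1] @ B                    # gain "denominator"
        K[k] = np.linalg.solve(S, B.T @ P[k + 1] @ A) # K = S^{-1} B' P A
        P[k] = Q + A.T @ P[k + 1] @ (A - B @ K[k])    # Riccati step
    return P, K
```

For finite one-player games the same backward sweep applies, with the closed-form gain replaced by a minimization over the finite action set at each state.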

