Learning Game Theory from John Harsanyi

2001 ◽  
Vol 36 (1) ◽  
pp. 20-25 ◽  
Author(s):  
Roger Myerson

2015 ◽  
Vol 28 (1) ◽  
pp. 191-199 ◽  
Author(s):  
Zhongjie Lin ◽  
Hugh Hong-tao Liu

2010 ◽  
Vol 32 (2) ◽  
pp. 145-173
Author(s):  
Philippe Fontaine

This paper traces interpersonal utility comparisons and bargaining in the work of John Harsanyi from the 1950s to the mid-1960s. As his preoccupation with how theorists can obtain information about agents moved from an approach centered on empathetic understanding to the more distanced perspective associated with game theory, Harsanyi shifted emphasis from the social scientist’s lack of information vis-à-vis agents to agents’ lack of information about each other. In the process, he provided economists with an analytical framework they could use to study problems related to the distribution of information among agents while consolidating the perspective of a distant observer whose knowledge can replace that of real people.


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2977
Author(s):  
Yan Li ◽  
Mengyu Zhao ◽  
Huazhi Zhang ◽  
Fuling Yang ◽  
Suyu Wang

Most current studies on multi-agent evolution based on deep learning adopt a cooperative equilibrium strategy, while interactive self-learning is seldom considered. An interactive self-learning game and evolution method based on non-cooperative equilibrium (ISGE-NCE) is proposed to combine the benefits of game theory and interactive learning for multi-agent confrontation evolution. A generative adversarial network (GAN) is designed in combination with multi-agent interactive self-learning, and the non-cooperative equilibrium strategy is adopted within the interactive self-learning framework, aiming at high evolution efficiency. For assessment, three typical multi-agent confrontation experiments are designed and conducted. The results show that, first, in terms of training speed, ISGE-NCE converges at least 46.3% faster than the same method without interactive self-learning. Second, the evolution rates of the interference and detection agents reach 60% and 80%, respectively, after training with our method. Across the three experiment scenarios, compared with the deep deterministic policy gradient (DDPG) baseline, ISGE-NCE improves multi-agent evolution effectiveness by 43.4%, 50%, and 20%, respectively, at low training cost. These performances demonstrate the significant superiority of ISGE-NCE for swarm intelligence.
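The abstract gives no implementation details, but its core mechanism, a GAN in which each agent optimizes its own payoff rather than a shared cooperative objective, can be illustrated compactly. The sketch below is a minimal PyTorch illustration under my own assumptions: the toy task, the network shapes, and the names detector, interferer, SIGNAL_DIM, and NOISE_DIM are hypothetical stand-ins for the paper's detection and interference agents, not the authors' code.

```python
# Minimal sketch (assumptions mine, not the ISGE-NCE implementation):
# two agents trained adversarially, each taking gradient steps on its
# OWN payoff, i.e. a non-cooperative rather than cooperative update.
import torch
import torch.nn as nn

torch.manual_seed(0)
SIGNAL_DIM = 8   # dimensionality of the toy "signal" observations
NOISE_DIM = 4    # latent noise fed to the interference agent

# Detection agent: scores how likely an observation is a genuine signal.
detector = nn.Sequential(nn.Linear(SIGNAL_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
# Interference agent: maps noise to a decoy signal meant to fool the detector.
interferer = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, SIGNAL_DIM))

opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
opt_i = torch.optim.Adam(interferer.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, SIGNAL_DIM) + 2.0       # stand-in genuine signals
    fake = interferer(torch.randn(64, NOISE_DIM))  # decoys from current policy

    # Detection agent's own payoff: separate genuine from decoy signals.
    d_loss = (bce(detector(real), torch.ones(64, 1))
              + bce(detector(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Interference agent's own payoff: have its decoys scored as genuine.
    # There is no shared cooperative loss: each agent best-responds to the
    # other's current policy, so training seeks a non-cooperative equilibrium.
    i_loss = bce(detector(fake), torch.ones(64, 1))
    opt_i.zero_grad()
    i_loss.backward()
    opt_i.step()
```

Because each update is a gradient step on that agent's own loss alone, the pair of updates amounts to alternating best responses; a fixed point of this process is a non-cooperative (Nash-style) equilibrium of the two-player game, which is the equilibrium notion the abstract names.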

