RLCard: A Platform for Reinforcement Learning in Card Games

Author(s):
Daochen Zha, Kwei-Herng Lai, Songyi Huang, Yuanpu Cao, Keerthana Reddy, ...

We present RLCard, a Python platform for reinforcement learning research and development in card games. RLCard supports various card environments and several baseline algorithms with unified easy-to-use interfaces, aiming at bridging reinforcement learning and imperfect information games. The platform provides flexible configurations of state representation, action encoding, and reward design. RLCard also supports visualizations for algorithm debugging. In this demo, we showcase two representative environments and their visualization results. We conclude this demo with challenges and research opportunities brought by RLCard. A video is available on YouTube.
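The workflow described in the abstract can be illustrated with a minimal sketch using RLCard's public interface (rlcard.make, env.set_agents, env.run, and the bundled RandomAgent baseline). Attribute names such as num_actions and num_players follow recent RLCard releases and may differ slightly in older versions.

# Minimal sketch of the unified RLCard workflow described above; attribute
# names (num_actions, num_players) assume a recent RLCard release.
import rlcard
from rlcard.agents import RandomAgent

# Create an imperfect-information card environment (Leduc Hold'em here).
env = rlcard.make('leduc-holdem')

# Attach one baseline agent per player through the unified interface.
agents = [RandomAgent(num_actions=env.num_actions) for _ in range(env.num_players)]
env.set_agents(agents)

# Roll out a single game; RLCard returns the trajectories and final payoffs.
trajectories, payoffs = env.run(is_training=False)
print(payoffs)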

2020, Vol. 283, pp. 103218
Author(s):
Christian Kroer, Tuomas Sandholm

2020, Vol. 65 (2), pp. 31
Author(s):
T.V. Pricope

Many real-world applications can be described as large-scale games of imperfect information. Such games are considerably harder than deterministic ones, as the search space is even larger. In this paper, I explore the power of reinforcement learning in this setting by studying one of the most popular games of this type, no-limit Texas Hold'em Poker, which remains unsolved, developing multiple agents with different learning paradigms and techniques and comparing their respective performances. When applied to no-limit Hold'em Poker, deep reinforcement learning agents clearly outperform agents with a more traditional approach. Moreover, while the latter agents rival a beginner-level human player, the reinforcement learning agents compare to an amateur human player. The main algorithm uses Fictitious Play in combination with artificial neural networks and some handcrafted metrics. We also applied the main algorithm to another, less complex game of imperfect information in order to show the scalability of this solution and the increase in performance when placed neck and neck with established classical approaches from the reinforcement learning literature.
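The Fictitious Play component at the heart of the main algorithm can be sketched on a small zero-sum matrix game. The example below runs classical fictitious play on rock-paper-scissors purely as an illustration of the core update; it does not reproduce the paper's neural-network agents or handcrafted metrics.

# Classical fictitious play on rock-paper-scissors (illustration only).
import numpy as np

# Row player's payoff matrix; the column player receives -A (zero-sum).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])

counts = [np.ones(3), np.ones(3)]  # empirical action counts for each player

for _ in range(10000):
    avg = [c / c.sum() for c in counts]    # opponents' empirical average strategies
    br0 = int(np.argmax(A @ avg[1]))       # best response of the row player
    br1 = int(np.argmax(-(A.T @ avg[0])))  # best response of the column player
    counts[0][br0] += 1
    counts[1][br1] += 1

# Both empirical strategies converge toward the uniform equilibrium (1/3, 1/3, 1/3).
print([np.round(c / c.sum(), 3) for c in counts])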


Author(s):
Mitsuo Wakatsuki, Mari Fujimura, Tetsuro Nishino

The authors are concerned with a card game called Daihinmin (Extreme Needy), a multi-player imperfect-information game. Using Marvin Minsky's "Society of Mind" theory, they attempt to model the workings of the minds of game players. The UEC Computer Daihinmin Competitions (UECda) have been held at the University of Electro-Communications since 2006 to bring together competing client programs that act as Daihinmin players and to contest their strengths. In this paper, the authors extract the behavior of client programs from actual records of computer Daihinmin competitions and propose a method for building a system that determines the parameters of Daihinmin agencies by machine learning.
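As a rough, hypothetical illustration of determining agency parameters from logged play, the sketch below fits a logistic regression to a toy set of decision records. The feature names and model choice are assumptions made for demonstration only and do not reproduce the authors' actual UECda pipeline.

# Hypothetical illustration: fitting decision parameters from logged game
# records; features and labels here are invented toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy log: each row is a decision point extracted from a competition record,
# e.g. (hand_strength, cards_remaining, is_leading_trick) -> played_strong_card.
X = np.array([
    [0.9, 3, 1],
    [0.2, 7, 0],
    [0.7, 5, 1],
    [0.1, 9, 0],
])
y = np.array([1, 0, 1, 0])

# The fitted coefficients serve as learned parameters that bias the
# corresponding decision module toward the observed behavior.
model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)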


Author(s):
Darse Billings, Aaron Davidson, Terence Schauenberg, Neil Burch, Michael Bowling, ...
