strategy game
Recently Published Documents


TOTAL DOCUMENTS

215
(FIVE YEARS 57)

H-INDEX

18
(FIVE YEARS 1)

2022 ◽  
pp. 1560-1570
Author(s):  
Rupanada Misra ◽  
Leo Eyombo ◽  
Floyd T. Phillips

In the 21st century, games can potentially be used as serious educational tools. Today's learners are easily distracted, and game-based learning can help because it can immerse students in content and curricula. Not only does game-based learning, with its power to engage and motivate users, make a course come alive, but it can also provide a platform for critical thinking, creativity, instant feedback, and collaboration. One of the biggest challenges in education is accommodating students' different learning styles; game-based learning can readily address this. Games can be categorized into genres such as action, adventure, fighting, puzzle, role-playing, simulation, sports, or strategy, and game designers can select the genre best suited for effective learning. Even with all the advantages of game-based learning, challenges remain, such as teachers' unwillingness to change or the improper design of educational games. With students sometimes far ahead in their use of technology, teachers who are left behind can feel intimidated; the conceptual generation gap here is wide, and designing, developing, and implementing games in curricula can be expensive. Though some games can be repurposed for education, many cannot be adapted to meet students' expectations.


Author(s):  
Zoryana Koval

Choosing a strategy means forming a plan of action that rests on a detailed analysis of situations that may arise in the future. As in a game, each participant plans their actions while predicting the actions of the other players and the conditions that may result from those actions. Attention must be paid to the probability of each player's moves and to the probability of each resulting situation. Such a forecast concerns situations that have not yet occurred but that follow, with some probability, from the risk-laden actions of each participant. Applying the principles, methods, and tools of game theory makes it possible to form a complete plan of action for all situations expected to occur. The action plan developed by the participants (players, subjects) across the whole set of situations and their possible developments constitutes a strategy. Game theory rests on each participant applying a single strategy, which is an algorithm of actions rather than a list of them; through its branches, such an algorithm should reflect how events and situations can arise and develop. The article proposes methods for selecting and evaluating enterprise strategies through the application of game theory, taking into account the strategies of competitors (the other participants in the conflict situation) or the state of "nature", which embodies the enterprise's environment. The article considers the advantages and disadvantages of using game-theoretic methods to evaluate enterprise strategies, classifies and compares these methods to clarify where each applies, and examines the criteria for strategy selection in this setting.
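The "state of nature" setting described above is typically operationalized as decision criteria over a payoff matrix. A minimal sketch in Python, using an entirely hypothetical payoff matrix; the criteria shown (Wald, Hurwicz, Savage) are standard textbook rules for decisions under uncertainty, not necessarily the specific methods compared in the article:

```python
import numpy as np

# Hypothetical payoffs: rows are enterprise strategies,
# columns are possible states of "nature" (market conditions).
payoffs = np.array([
    [50, 20, -10],   # strategy A
    [30, 30,  10],   # strategy B
    [80,  0, -40],   # strategy C
])

# Wald (maximin) criterion: choose the strategy with the best worst case.
wald = payoffs.min(axis=1).argmax()

# Hurwicz criterion: blend best and worst cases with an optimism factor.
alpha = 0.6
hurwicz = (alpha * payoffs.max(axis=1)
           + (1 - alpha) * payoffs.min(axis=1)).argmax()

# Savage (minimax regret) criterion: minimize the maximum regret, where
# regret is the shortfall from the best payoff attainable in each state.
regret = payoffs.max(axis=0) - payoffs
savage = regret.max(axis=1).argmin()
```

On this matrix the three criteria disagree (Wald picks the cautious strategy B, Hurwicz the aggressive strategy C, Savage strategy A), which illustrates why the article's comparison of method applicability matters.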


2021 ◽  
pp. 257-264
Author(s):  
Huale Li ◽  
Rui Cao ◽  
Xiaohan Hou ◽  
Xuan Wang ◽  
Linlin Tang ◽  
...  

2021 ◽  
Author(s):  
Léandre Lavoie-Hudon ◽  
Daniel Lafond ◽  
Katherine Labonté ◽  
Sébastien Tremblay
Keyword(s):  

2021 ◽  
Author(s):  
Alexander Dockhorn ◽  
Jorge Hurtado-Grueso ◽  
Dominik Jeurissen ◽  
Linjie Xu ◽  
Diego Perez-Liebana

2021 ◽  
Author(s):  
Hongli Wang ◽  
Heather K Ortega ◽  
Huriye Atilgan ◽  
Cayla E Murphy ◽  
Alex C Kwan

In a competitive game involving an animal and an opponent, the outcome is contingent on the choices of both players. To succeed, the animal must continually adapt to competitive pressure or risk being exploited and losing out on rewards. In this study, we demonstrate that head-fixed mice can be trained to play the iterative competitive game 'matching pennies' against a virtual computer opponent. We find that the animals' performance is well described by a hybrid computational model that combines Q-learning with choice kernels. Comparing matching pennies with a non-competitive two-armed bandit task, we show that the tasks encourage animals to operate in different regimes of reinforcement learning. To understand the involvement of neuromodulatory mechanisms, we measure fluctuations in pupil size and use multiple linear regression to relate trial-by-trial transient pupil responses to decision-related variables. The analysis reveals that pupil responses are modulated by observable variables, including choice and outcome, and by latent variables for value updating, but not by action selection. Collectively, these results establish a paradigm for studying competitive decision-making in head-fixed mice and provide insights into the role of arousal-linked neuromodulation in the decision process.
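A hybrid Q-learning / choice-kernel model of the general kind described in the abstract can be sketched as follows. All parameter values, the two-action setup, and the matching-pennies payoff rule here are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative parameters (not fitted values from the paper).
alpha_q, alpha_k = 0.3, 0.2    # learning rates: action values, choice kernel
beta_q, beta_k = 3.0, 1.0      # inverse temperatures for each component

q = np.zeros(2)   # Q-values for the two actions (e.g., left/right)
k = np.zeros(2)   # choice kernel: running tendency to repeat recent choices

rng = np.random.default_rng(0)
for trial in range(200):
    # Choice probability combines value-based and perseverative drives.
    p = softmax(beta_q * q + beta_k * k)
    choice = rng.choice(2, p=p)
    # Matching-pennies-like payoff against a random opponent:
    # the animal is rewarded when its choice matches the opponent's.
    reward = float(choice == rng.choice(2))
    q[choice] += alpha_q * (reward - q[choice])   # Q-learning update
    k += alpha_k * (np.eye(2)[choice] - k)        # kernel drifts toward last choice
```

Fitting such a model to behavior would maximize the likelihood of the observed choice sequence over the four free parameters; the simulation above only shows the generative structure.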


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Huale Li ◽  
Rui Cao ◽  
Xuan Wang ◽  
Xiaohan Hou ◽  
Tao Qian ◽  
...  

In recent years, deep reinforcement learning (DRL) has achieved great success in many fields, especially in games, as shown by AlphaGo, AlphaZero, and AlphaStar. However, due to reward sparsity, traditional DRL-based methods show limited performance in 3D games, whose state spaces have much higher dimensionality. To solve this problem, in this paper we propose an intrinsic-based policy optimization (IBPO) algorithm for sparse rewards. In IBPO, a novel intrinsic reward is integrated into the value network, providing an additional signal in environments with sparse rewards and thereby accelerating training. In addition, to address the problem of value-estimation bias, we design three types of auxiliary tasks that evaluate state values and actions more accurately in 3D scenes. Finally, we propose a framework of auxiliary intrinsic-based policy optimization (AIBPO), which improves the performance of IBPO. The experimental results show that the method deals with the reward sparsity problem effectively. The proposed method may therefore be applied to real-world scenarios, such as three-dimensional navigation and autonomous driving, where improved sample utilization reduces the cost of collecting interactive samples with real equipment.
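The core idea of augmenting a sparse extrinsic reward with an intrinsic one can be sketched as follows. Note that the paper integrates a learned intrinsic reward into the value network; the count-based novelty bonus below is a much simpler, well-known stand-in used here only to show the shape of the mechanism, and all names are hypothetical:

```python
import numpy as np

# Visit counts per (discretized) state, used to reward novelty.
visit_counts = {}

def intrinsic_bonus(state, scale=0.1):
    """Count-based exploration bonus: decays as a state is revisited."""
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return scale / np.sqrt(visit_counts[state])

def shaped_reward(state, extrinsic):
    # Training signal = sparse environment reward + exploration bonus,
    # so the agent receives feedback even on trajectories with no
    # extrinsic reward at all.
    return extrinsic + intrinsic_bonus(state)
```

On the first visit to a state the bonus is `scale`; on the n-th visit it shrinks to `scale / sqrt(n)`, so the shaping vanishes as the policy converges and the extrinsic reward dominates.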


Author(s):  
Alexander Dockhorn ◽  
Jorge Hurtado-Grueso ◽  
Dominik Jeurissen ◽  
Linjie Xu ◽  
Diego Perez-Liebana
