Evolving Game State Evaluation Functions for a Hybrid Planning Approach

Author(s):  
Xenija Neufeld ◽  
Sanaz Mostaghim ◽  
Diego Perez-Liebana
Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 595
Author(s):  
Ray Lattarulo ◽  
Joshué Pérez Rastelli

Automated Driving Systems (ADS) have received a considerable amount of attention in the last few decades as part of the Intelligent Transportation Systems (ITS) field. However, this technology still falls short of full automation while maintaining driving comfort and safety in risky scenarios, for example, overtaking, obstacle avoidance, or lane changing. Consequently, this work presents a novel method, named Hybrid Planning, to solve the obstacle avoidance and overtaking problems. This solution combines the passenger comfort associated with the smoothness of Bézier curves and the reliable capacity of Model Predictive Control (MPC) to react to unexpected conditions, such as obstacles on the lane, overtaking, and lane-change maneuvers. A decoupled linear model was used for the MPC formulation to ensure short computation times. Information about obstacles and other vehicles is obtained via V2X (vehicle-to-everything) communications. The tests were performed on an automated Renault Twizy vehicle and showed good performance in complex scenarios involving static and moving obstacles at a maximum speed of 60 km/h.
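The smoothness the abstract attributes to Bézier curves comes from the fact that a cubic Bézier path has continuous heading when the interior control points are placed along the lane direction. A minimal sketch of that path-generation side of a hybrid planner, assuming illustrative control-point placement and a simple lane-change geometry (none of this is the authors' implementation):

```python
# Hypothetical sketch: sample a smooth lane-change path with a cubic
# Bezier curve. Control-point placement and function names are
# illustrative assumptions, not the paper's planner.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return x, y

def lane_change_path(start, end, n_points=20):
    """Sample a lane-change path between two lane centreline points."""
    # Interior control points keep the curve tangent to the lane
    # direction at both ends, so heading is continuous at the joins.
    p1 = ((2 * start[0] + end[0]) / 3, start[1])
    p2 = ((start[0] + 2 * end[0]) / 3, end[1])
    return [cubic_bezier(start, p1, p2, end, i / (n_points - 1))
            for i in range(n_points)]

path = lane_change_path((0.0, 0.0), (30.0, 3.5))  # 3.5 m lateral offset
```

In a hybrid scheme such as the one described, a reference path like `path` would be handed to the MPC tracker, which handles deviations caused by obstacles.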


2016 ◽  
Vol 25 (01) ◽  
pp. 1660007 ◽  
Author(s):  
André Fabbri ◽  
Frédéric Armetta ◽  
Éric Duchêne ◽  
Salima Hassas

MCTS (Monte Carlo Tree Search) is a well-known and efficient process to cover and evaluate a large range of states for combinatorial problems. We choose to study MCTS for the game of Computer Go, one of the most challenging problems in the field of Artificial Intelligence. For this game, a purely combinatorial approach does not always yield a reliable evaluation of game states. To enhance the ability of MCTS to tackle such problems, one can exploit game-specific knowledge to increase the accuracy of the game-state evaluation. Such knowledge is not easy to acquire: it is the result of a constructivist learning mechanism based on the player's experience. That is why we explore the idea of endowing MCTS with a process inspired by constructivist learning, so that it self-acquires knowledge from playing experience. In this paper, we propose a complementary process for MCTS called BHRF (Background History Reply Forest), which memorizes efficient patterns in order to promote their use during the MCTS process. Our experiments yield promising results and underline how self-acquired data can be useful for MCTS-based algorithms.
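The core idea described above, remembering which replies worked after a given move and reusing those statistics to bias future playouts, can be sketched with a simple reply-statistics table. This is a toy illustration of the history-reply principle, assuming hypothetical class and method names; it is not the authors' BHRF data structure:

```python
# Hypothetical sketch of a history-reply memory: record the outcome of
# each (previous move, reply) pair seen in playouts, and prefer replies
# with a strong win-rate when enough evidence has accumulated.
from collections import defaultdict

class ReplyMemory:
    def __init__(self):
        # (previous_move, reply) -> [wins, visits]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, prev_move, reply, won):
        s = self.stats[(prev_move, reply)]
        s[0] += 1 if won else 0
        s[1] += 1

    def best_reply(self, prev_move, candidates, min_visits=5):
        """Return the highest-win-rate reply seen often enough, else None
        (falling back to the default playout policy)."""
        best, best_rate = None, 0.0
        for move in candidates:
            wins, visits = self.stats[(prev_move, move)]
            if visits >= min_visits and wins / visits > best_rate:
                best, best_rate = move, wins / visits
        return best
```

The `min_visits` threshold mirrors a common design choice in such biasing schemes: self-acquired statistics are trusted only once they are supported by enough playouts.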


2018 ◽  
Vol 208 ◽  
pp. 05003 ◽  
Author(s):  
Weilong Yang ◽  
Qi Zhang ◽  
Yong Peng

Research on AI planning in Real-Time Strategy (RTS) games has been widely applied to human behavior modeling and combat simulation. State evaluation is an important research area for AI planning, as it determines decision accuracy. Since complex interactions exist among different game aspects, a weighted average model often cannot compute the game-state evaluation well, which misleads the player's strategy generation. In this paper, we take dynamic changes and the player's preference into consideration, analyze the player's preference and the relationships between units based on game theory, and propose a dynamic hierarchical evaluating network, denoted DHEN. Experiments show that the modified evaluation algorithm effectively improves the accuracy of task-planning algorithms for RTS games.
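The abstract's criticism of the flat weighted average is that interactions within a game aspect get washed out by a single global sum. A minimal sketch of that contrast, assuming an illustrative two-level grouping and a min-aggregation within groups (the actual DHEN network is not reproduced here):

```python
# Hypothetical contrast between a flat weighted average and a two-level
# (hierarchical) game-state evaluation. Aspect groups, weights, and the
# within-group aggregator are illustrative assumptions, not DHEN itself.

def weighted_average(scores, weights):
    """Flat baseline: one weighted sum over all aspect scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def hierarchical_eval(groups, group_weights):
    """Aggregate within each aspect group first, then combine group
    scores with top-level (possibly preference-dependent) weights."""
    # A weak aspect drags its whole group down instead of being
    # averaged away by unrelated strengths elsewhere.
    group_scores = [min(scores) for scores in groups]
    return weighted_average(group_scores, group_weights)

economy = [0.9, 0.8]   # e.g. resources, income
military = [0.2, 0.7]  # e.g. army strength, tech level
flat = weighted_average(economy + military, [1, 1, 1, 1])
hier = hierarchical_eval([economy, military], [1, 1])
```

Here `flat` reports a comfortable 0.65 while `hier` reports 0.5, exposing the weak military aspect that the flat average hides; in a preference-aware scheme, the top-level weights would additionally shift with the player's strategy.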

