Formula-E race strategy development using artificial neural networks and Monte Carlo tree search

2020
Vol 32 (18)
pp. 15191-15207
Author(s):
Xuze Liu
Abbas Fotouhi

Abstract: Energy management has been one of the most important parts of electric race strategy since the Fédération Internationale de l'Automobile (FIA) Formula-E championship was launched in 2014. Since then, a number of unfavorable race finishes have been caused by poor energy management. Previous research has focused on managing the power flow between different energy sources or different energy consumers over a fixed cycle. However, there is no published work on the energy management of a fully electric racing car running a repeated course with changeable settings and driving styles. Unlike traditional energy management problems, electric race strategy is a multi-stage decision-making problem of very large scale. It is also a time-critical task in motorsport: fast prediction tools are needed, and decisions must be made in seconds to benefit the final outcome of the race. In this study, the use of artificial neural networks (ANNs) and tree search techniques is investigated as an approach to solving such a large-scale problem. ANN prediction models are developed to replace traditional lap-time simulation as a much faster performance prediction tool. Monte Carlo tree search built on the proposed ANN fast prediction models provides decent capability to generate decision-making solutions, both for pre-race planning and for in-race reaction to unexpected scenarios.
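The core idea, replacing a slow lap-time simulator with an ANN surrogate so that Monte Carlo rollouts over per-lap settings become cheap, can be sketched roughly as follows. Everything here (LapPredictor, the three energy modes, the toy cost model) is a hypothetical stand-in, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an ANN surrogate stands in for
# lap-time simulation inside Monte Carlo rollouts over per-lap settings.
import random

class LapPredictor:
    """Stand-in for a trained ANN mapping (energy_mode, driving_style) to a
    predicted (lap_time, energy_used). A real model would be trained on data
    generated by a conventional lap-time simulator."""
    def predict(self, energy_mode, driving_style):
        lap_time = 80.0 - 2.0 * energy_mode + 1.5 * driving_style  # seconds
        energy_used = 1.0 + 0.4 * energy_mode                      # kWh
        return lap_time, energy_used

def rollout(predictor, laps_left, battery):
    """Random playout: pick per-lap settings to the finish; return race time."""
    total = 0.0
    for _ in range(laps_left):
        mode = random.choice([0, 1, 2])        # e.g. coast / normal / attack
        style = random.random()                # 0 = cautious, 1 = aggressive
        t, e = predictor.predict(mode, style)
        if battery < e:                        # out of energy: huge penalty
            return total + 1e6
        battery -= e
        total += t
    return total

def choose_mode(predictor, laps_left, battery, n_rollouts=200):
    """One Monte Carlo decision at the root: for each candidate energy mode,
    average many fast ANN-based rollouts and pick the lowest expected time."""
    best_mode, best_time = None, float("inf")
    for mode in [0, 1, 2]:
        t0, e0 = predictor.predict(mode, driving_style=0.5)
        est = sum(t0 + rollout(predictor, laps_left - 1, battery - e0)
                  for _ in range(n_rollouts)) / n_rollouts
        if est < best_time:
            best_mode, best_time = mode, est
    return best_mode
```

The point of the surrogate is speed: because each rollout costs only a few ANN calls rather than a full simulation, thousands of playouts per decision fit within the in-race seconds the abstract mentions.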

2021
Vol 11 (5)
pp. 2056
Author(s):
Alba Cotarelo
Vicente García-Díaz
Edward Rolando Núñez-Valdez
Cristian González García
Alberto Gómez
...

Monte Carlo Tree Search is one of the most widely studied search methods today. It has demonstrated its efficiency in solving games such as Go or Settlers of Catan, as well as other problems. Several optimizations of Monte Carlo Tree Search exist, but most of them require heuristics or some domain knowledge at some point, which makes their application to other problems difficult. We propose a general, optimized implementation of Monte Carlo Tree Search using neural networks, without extra knowledge of the problem. As an example of our proposal, we use the game Dots and Boxes. We tested it against another Monte Carlo system that implements problem-specific knowledge. Our approach improves accuracy, reaching a winning rate of 81% over previous research, although the generalization penalizes performance.
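The key design point, a search whose only learned component is a generic value network, can be sketched roughly as below. The `game` and `net` interfaces are hypothetical assumptions, not the paper's code, and nothing in the sketch is specific to Dots and Boxes.

```python
# Rough sketch: domain-agnostic UCT where a generic value network replaces
# random playouts and hand-written heuristics.
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = []
        self.untried = None            # legal moves not yet expanded (lazy)
        self.visits, self.total = 0, 0.0

def uct_search(root_state, game, net, n_iters=400, c=1.4):
    """game exposes moves(s), play(s, m), is_terminal(s), result(s);
    net.value(s) estimates the outcome in [-1, 1] for the player to move."""
    root = Node(root_state)
    for _ in range(n_iters):
        node = root
        # 1. Selection: follow UCT while the node is fully expanded.
        while node.untried == [] and node.children:
            node = max(node.children,
                       key=lambda ch: ch.total / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child.
        if node.untried is None:
            node.untried = list(game.moves(node.state))
        if node.untried and not game.is_terminal(node.state):
            child = Node(game.play(node.state, node.untried.pop()), parent=node)
            node.children.append(child)
            node = child
        # 3. Evaluation: the value net replaces a domain-specific playout.
        value = (game.result(node.state) if game.is_terminal(node.state)
                 else net.value(node.state))
        # 4. Backpropagation: totals are stored from the viewpoint of the
        #    player who moved INTO each node, so flip the sign per level.
        value = -value
        while node is not None:
            node.visits += 1
            node.total += value
            value = -value
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits)  # most robust child
```

Because every game-specific detail sits behind `game` and `net`, porting the search to a new problem only requires retraining the value network, which is the generality the abstract claims, at the cost of the performance penalty it also reports.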


2020
Vol 34 (06)
pp. 9983-9991
Author(s):
Linnan Wang
Yiyang Zhao
Yuu Jinnai
Yuandong Tian
Rodrigo Fonseca

Neural Architecture Search (NAS) has shown great success in automating the design of neural networks, but the prohibitive amount of computation behind current NAS methods requires further work on improving sample efficiency and network evaluation cost in order to obtain better results in a shorter time. In this paper, we present AlphaX, a novel, scalable Monte Carlo Tree Search (MCTS) based NAS agent that tackles both aspects. AlphaX improves search efficiency by adaptively balancing exploration and exploitation at the state level, and by using a meta deep neural network (meta-DNN) to predict network accuracies and bias the search toward promising regions. To amortize the network evaluation cost, AlphaX accelerates MCTS rollouts with a distributed design and reduces the number of epochs needed to evaluate a network through transfer learning, guided by the tree structure in MCTS. In 12 GPU-days and 1000 samples, AlphaX found an architecture that reaches 97.84% top-1 accuracy on CIFAR-10 and 75.5% top-1 accuracy on ImageNet, exceeding SOTA NAS methods in both accuracy and sample efficiency. We also evaluate AlphaX on NASBench-101, a large-scale NAS dataset; AlphaX is 3x and 2.8x more sample efficient than Random Search and Regularized Evolution, respectively, in finding the global optimum. Finally, we show that the searched architecture improves a variety of vision applications, from neural style transfer to image captioning and object detection.
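As a rough illustration of the search mechanism described above, the sketch below runs MCTS over layer choices and uses a surrogate accuracy predictor in place of AlphaX's meta-DNN to bias selection toward promising branches. predict_acc, train_and_eval, and the layer vocabulary are hypothetical stand-ins, not the paper's code.

```python
# Illustrative sketch only: MCTS over layer choices, with a surrogate
# "meta-DNN" supplying predicted accuracies for never-visited children.
import math
import random

LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "skip"]
MAX_DEPTH = 6

def predict_acc(arch):
    """Stand-in for the meta-DNN: a cheap accuracy estimate from the encoding.
    In AlphaX this is a regressor trained on architectures evaluated so far."""
    return 0.85 + 0.1 * (hash(tuple(arch)) % 1000) / 1000.0

def train_and_eval(arch):
    """Stand-in for the expensive step: actually training the candidate."""
    return predict_acc(arch) + random.uniform(-0.01, 0.01)

class Node:
    def __init__(self, arch):
        self.arch, self.children = arch, {}
        self.visits, self.total = 0, 0.0

def search(n_iters=100, c=0.5):
    root, best = Node([]), ([], 0.0)
    for _ in range(n_iters):
        node, path = root, [root]
        # Selection: UCB over layer choices; the meta-DNN's prediction serves
        # as the value of children that have never been visited.
        while len(node.arch) < MAX_DEPTH:
            scores = {}
            for op in LAYER_CHOICES:
                ch = node.children.setdefault(op, Node(node.arch + [op]))
                exploit = ch.total / ch.visits if ch.visits else predict_acc(ch.arch)
                explore = c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1))
                scores[op] = exploit + explore
            node = node.children[max(scores, key=scores.get)]
            path.append(node)
        acc = train_and_eval(node.arch)       # expensive leaf evaluation
        if acc > best[1]:
            best = (node.arch, acc)
        for n in path:                        # backpropagate measured accuracy
            n.visits += 1
            n.total += acc
    return best
```

The predictor only steers which leaves get the costly training step; measured accuracies still flow back through the tree, which is how the search corrects a misleading surrogate over time.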


Author(s):  
Li-Cheng Lan
Wei Li
Ting-Han Wei
I-Chen Wu

Many of the strongest game-playing programs use a combination of Monte Carlo tree search (MCTS) and deep neural networks (DNNs), where the DNNs serve as policy or value evaluators. Given a limited budget, such as online play or the self-play phase of AlphaZero (AZ) training, a balance must be struck between accurate state estimation and more MCTS simulations, both of which are critical to a strong game-playing agent. Typically, larger DNNs generalize better and evaluate more accurately, while smaller DNNs are less costly and therefore allow more MCTS simulations and bigger search trees within the same budget. This paper introduces a new method called multiple policy value MCTS (MPV-MCTS), which combines multiple policy-value neural networks (PV-NNs) of various sizes to retain the advantages of each; two PV-NNs, f_S and f_L, are used in this paper. We show through experiments on the game NoGo that MPV-MCTS combining f_S and f_L outperforms policy value MCTS (PV-MCTS) with a single PV-NN. Additionally, MPV-MCTS also outperforms PV-MCTS for AZ training.
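One simplified way to illustrate the small-net/large-net trade-off (not the paper's exact algorithm, which couples two PV-NN searches) is a leaf-evaluation routine that calls the cheap f_S everywhere and reserves the expensive f_L for frequently revisited nodes. The costs, threshold, and blending weights below are invented for the sketch, and f_s/f_l are assumed to expose a value(state) method.

```python
# Simplified single-tree sketch: a small, cheap PV-NN keeps simulations
# plentiful; the large, accurate PV-NN is consulted only where it pays off.
from dataclasses import dataclass
from typing import Any

COST_S, COST_L = 1, 10     # assumed relative evaluation costs of f_S and f_L
THRESHOLD = 8              # visits before a node earns an f_L evaluation

@dataclass
class Node:
    state: Any
    visits: int = 0
    has_large_eval: bool = False
    value_large: float = 0.0

def evaluate(node, f_s, f_l, budget):
    """Return (value_estimate, remaining_budget) for one MCTS leaf evaluation."""
    if node.visits >= THRESHOLD and not node.has_large_eval and budget >= COST_L:
        node.value_large = f_l.value(node.state)   # accurate but expensive
        node.has_large_eval = True
        budget -= COST_L
    value_small = f_s.value(node.state)            # fast but rough
    budget -= COST_S
    if node.has_large_eval:
        # once f_L has weighed in, trust it more than the small net
        return 0.25 * value_small + 0.75 * node.value_large, budget
    return value_small, budget
```

The design intuition mirrors the abstract: f_S buys tree size, f_L buys estimation accuracy, and spending the f_L budget only on heavily visited nodes directs accuracy to the states that actually decide the search.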

