Online Maintenance Prioritization via Monte Carlo Tree Search and Case-based Reasoning

Author(s):  
Michael Hoffman, Eunhye Song, Michael Brundage, Soundar Kumara

Abstract: When maintenance resources in a manufacturing system are limited, a challenge arises in determining how to allocate these resources among multiple competing maintenance jobs. We formulate this problem as an online prioritization problem using a Markov decision process (MDP) to model the system behavior and Monte Carlo tree search (MCTS) to seek optimal maintenance actions in various states of the system. Further, we use case-based reasoning (CBR) to retain and reuse search experience gathered from MCTS, reducing the computational effort needed over time and improving decision-making efficiency. We demonstrate that our proposed method increases system throughput compared to existing methods of maintenance prioritization while also reducing the time needed to identify optimal maintenance actions as more experience is gathered. This is especially beneficial in manufacturing settings where maintenance decisions must be made quickly.
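The abstract names the ingredients (an MDP model of the system, MCTS for action selection, and CBR to retain search experience) but not their wiring. The following minimal Python sketch shows one way such an online prioritization loop could look; the machine-degradation model, the simplified tabular UCT, and the dictionary-based case library are illustrative assumptions, not the authors' implementation.

```python
import math
import random
from collections import defaultdict

N_MACHINES = 3
ACTIONS = list(range(N_MACHINES))   # action i = send the repair crew to machine i

def step(state, action, rng):
    """Illustrative transition/reward model: the repaired machine resets,
    unattended machines degrade stochastically, reward counts working machines."""
    state = list(state)
    state[action] = 0
    for i in range(N_MACHINES):
        if i != action and rng.random() < 0.4:
            state[i] = min(4, state[i] + 1)          # degradation capped at 4 (failed)
    reward = sum(1.0 for level in state if level < 4)
    return tuple(state), reward

def uct_search(root, n_iters=2000, c=1.4, horizon=10, seed=0):
    """Simplified tabular UCT: UCB1 selection at every step of a rollout,
    with statistics stored per (state, action) pair."""
    rng = random.Random(seed)
    N = defaultdict(int)     # visit counts per (state, action)
    Q = defaultdict(float)   # running mean of returns per (state, action)
    for _ in range(n_iters):
        s, path, rewards = root, [], []
        for _ in range(horizon):
            def ucb(a):
                if N[(s, a)] == 0:
                    return float("inf")
                total = sum(N[(s, b)] for b in ACTIONS)
                return Q[(s, a)] + c * math.sqrt(math.log(total) / N[(s, a)])
            a = max(ACTIONS, key=ucb)
            path.append((s, a))
            s, r = step(s, a, rng)
            rewards.append(r)
        ret = 0.0
        for (s_, a_), r in zip(reversed(path), reversed(rewards)):
            ret += r                                  # back up the return-to-go
            N[(s_, a_)] += 1
            Q[(s_, a_)] += (ret - Q[(s_, a_)]) / N[(s_, a_)]
    return max(ACTIONS, key=lambda a: N[(root, a)])

# Case-based reuse (illustrative): cache the recommended action per visited state
case_library = {}

def prioritize(state):
    if state not in case_library:
        case_library[state] = uct_search(state)       # search only on a cache miss
    return case_library[state]

print(prioritize((2, 0, 3)))   # index of the machine to repair next
```

In this sketch the case library simply caches the action recommended for a previously searched state; a fuller CBR scheme would also retrieve cases for similar states and adapt them.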

Author(s):  
Larkin Liu, Jun Tao Luo

Flexible implementations of Monte Carlo Tree Search (MCTS), combined with domain-specific knowledge and hybridization with other search algorithms, can be very powerful for solving complex planning problems. We introduce mctreesearch4j, an MCTS implementation written as a standard JVM library following key design principles of object-oriented programming. We define key class abstractions allowing the MCTS library to flexibly adapt to any well-defined Markov Decision Process or turn-based adversarial game. Furthermore, our library is designed to be modular and extensible, utilizing class inheritance and generic typing to standardize custom algorithm definitions. We demonstrate that the design of the MCTS implementation provides ease of adaptation for unique heuristics and customization across varying Markov Decision Process (MDP) domains. In addition, the implementation is reasonably performant and accurate for standard MDPs. Finally, via the implementation of mctreesearch4j, we discuss the nuances of different types of MCTS algorithms.
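As a language-agnostic illustration of the kind of class abstraction and generic typing the abstract describes, here is a small Python sketch of a generic MDP contract that a planner could be written against. It is not mctreesearch4j's actual JVM API; all names and signatures here are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Generic, List, TypeVar

StateT = TypeVar("StateT")
ActionT = TypeVar("ActionT")

class MDP(ABC, Generic[StateT, ActionT]):
    """Generic MDP interface: a planner written against this contract can be
    reused across domains by subclassing (mirroring the inheritance/generics
    idea described above, not the library's actual API)."""

    @abstractmethod
    def initial_state(self) -> StateT: ...

    @abstractmethod
    def actions(self, state: StateT) -> List[ActionT]: ...

    @abstractmethod
    def transition(self, state: StateT, action: ActionT) -> StateT: ...

    @abstractmethod
    def reward(self, state: StateT, action: ActionT, next_state: StateT) -> float: ...

    @abstractmethod
    def is_terminal(self, state: StateT) -> bool: ...

class LineWorld(MDP[tuple, str]):
    """Tiny example domain: walk right along a line to reach the goal cell."""
    def initial_state(self): return (0,)
    def actions(self, state): return ["left", "right"]
    def transition(self, state, action):
        x = state[0] + (1 if action == "right" else -1)
        return (max(0, min(5, x)),)
    def reward(self, state, action, next_state):
        return 1.0 if next_state == (5,) else 0.0
    def is_terminal(self, state): return state == (5,)
```

A single MCTS implementation parameterized over StateT and ActionT can then plan in any domain that fills in this contract, which is the adaptability claim made in the abstract.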


Author(s):  
Tuan Dam, Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

We consider Monte-Carlo Tree Search (MCTS) applied to Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs), and the well-known Upper Confidence bound for Trees (UCT) algorithm. In UCT, a tree with nodes (states) and edges (actions) is incrementally built by the expansion of nodes, and the values of nodes are updated through a backup strategy based on the average value of child nodes. However, it has been shown that, with enough samples, the maximum operator yields more accurate node value estimates than averaging. Instead of settling for one of these value estimates, we go a step further and propose a novel backup strategy based on the power mean operator, which computes a value between the average and the maximum. We call our new approach Power-UCT and argue how the power mean operator helps speed up learning in MCTS. We theoretically analyze our method, providing guarantees of convergence to the optimum. Finally, we empirically demonstrate the effectiveness of our method on well-known MDP and POMDP benchmarks, showing significant improvement in performance and convergence speed with respect to state-of-the-art algorithms.
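Concretely, the weighted power mean of child values x_i with weights w_i is M_p = (sum_i (w_i / sum_j w_j) * x_i^p)^(1/p), which reduces to the weighted average at p = 1 and approaches the maximum as p grows. A minimal sketch of such a backup, with illustrative names rather than the authors' code:

```python
def power_mean(values, weights, p):
    """Weighted power mean: equals the weighted average for p = 1
    and approaches max(values) as p -> infinity (values assumed >= 0)."""
    total = sum(weights)
    return sum((w / total) * (v ** p) for v, w in zip(values, weights)) ** (1.0 / p)

def backup_value(children, p=4.0):
    """Back up a parent node value from its children,
    given as (mean_value, visit_count) pairs."""
    values = [v for v, _ in children]
    weights = [n for _, n in children]
    return power_mean(values, weights, p)

children = [(0.2, 10), (0.5, 30), (0.9, 5)]
print(backup_value(children, p=1.0))    # the visit-weighted average
print(backup_value(children, p=100.0))  # close to the maximum child value
```

Larger p pushes the backed-up value toward the best child, which is the interpolation between averaging and maximizing that the abstract describes.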


Author(s):  
Shuo Chen, Ewa Andrejczuk, Athirai A. Irissappane, Jie Zhang

In an ad hoc teamwork setting, the team needs to coordinate its activities to perform a task without prior agreement on how to achieve it. The ad hoc agent cannot communicate with its teammates, but it can observe their behaviour and plan accordingly. To do so, existing approaches rely on models of the teammates' behaviour. However, these models may not be accurate, which can compromise teamwork. For this reason, we present the Ad Hoc Teamwork by Sub-task Inference and Selection (ATSIS) algorithm, which uses sub-task inference without relying on teammates' models. First, the ad hoc agent observes its teammates to infer which sub-tasks they are handling. Based on that, it selects its own sub-task using a partially observable Markov decision process that handles the uncertainty of the sub-task inference. Finally, the ad hoc agent uses Monte Carlo tree search to find the set of actions to perform the sub-task. Our experiments show the benefits of ATSIS for robust teamwork.
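The abstract describes three stages: infer teammates' sub-tasks from observed behaviour, select the agent's own sub-task under that uncertainty, and plan actions with MCTS. The Python sketch below illustrates the first two stages with a simple Bayesian belief update standing in for the sub-task inference; the sub-task names, action likelihoods, and greedy selection rule are illustrative assumptions, not the ATSIS implementation.

```python
from collections import Counter

SUB_TASKS = ["gather", "defend", "build"]

def update_belief(belief, observed_action, likelihood):
    """Bayesian update of a teammate's sub-task belief given one observed action."""
    posterior = {t: belief[t] * likelihood[t].get(observed_action, 1e-6) for t in SUB_TASKS}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

def select_own_subtask(teammate_beliefs):
    """Pick the sub-task least likely to already be covered by teammates
    (a greedy stand-in for the POMDP-based selection in the abstract)."""
    expected_coverage = Counter({t: 0.0 for t in SUB_TASKS})
    for belief in teammate_beliefs:
        for t, p in belief.items():
            expected_coverage[t] += p
    return min(SUB_TASKS, key=lambda t: expected_coverage[t])

# Illustrative likelihood of observed actions under each sub-task
likelihood = {
    "gather": {"move_to_resource": 0.7, "collect": 0.25},
    "defend": {"move_to_base": 0.6, "attack": 0.35},
    "build":  {"move_to_site": 0.6, "construct": 0.35},
}
uniform = {t: 1.0 / len(SUB_TASKS) for t in SUB_TASKS}
b1 = update_belief(uniform, "collect", likelihood)   # teammate 1 looks like a gatherer
b2 = update_belief(uniform, "attack", likelihood)    # teammate 2 looks like a defender
print(select_own_subtask([b1, b2]))                  # -> "build"
```

In the full approach, the selected sub-task would then be handed to an MCTS planner to choose the concrete actions that accomplish it.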


2019, Vol. 36 (06), pp. 1940009
Author(s):  
Michael C. Fu

AlphaGo and its successors AlphaGo Zero and AlphaZero made international headlines with their incredible successes in game playing, which have been touted as further evidence of the immense potential of artificial intelligence, and in particular, machine learning. AlphaGo defeated the reigning human world champion Go player Lee Sedol 4 games to 1, in March 2016 in Seoul, Korea, an achievement that surpassed previous computer game-playing program milestones by IBM’s Deep Blue in chess and by IBM’s Watson in the U.S. TV game show Jeopardy. AlphaGo then followed this up by defeating the world’s number one Go player Ke Jie 3-0 at the Future of Go Summit in Wuzhen, China in May 2017. Then, in December 2017, AlphaZero stunned the chess world by dominating the top computer chess program Stockfish (which has a far higher rating than any human) in a 100-game match by winning 28 games and losing none (72 draws) after training from scratch for just four hours! The deep neural networks of AlphaGo, AlphaZero, and all their incarnations are trained using a technique called Monte Carlo tree search (MCTS), whose roots can be traced back to an adaptive multistage sampling (AMS) simulation-based algorithm for Markov decision processes (MDPs) published in Operations Research back in 2005 [Chang, HS, MC Fu, J Hu and SI Marcus (2005). An adaptive sampling algorithm for solving Markov decision processes. Operations Research, 53, 126–139.] (and introduced even earlier in 2002). After reviewing the history and background of AlphaGo through AlphaZero, the origins of MCTS are traced back to simulation-based algorithms for MDPs, and its role in training the neural networks that essentially carry out the value/policy function approximation used in approximate dynamic programming, reinforcement learning, and neuro-dynamic programming is discussed, including some recently proposed enhancements building on statistical ranking & selection research in the operations research simulation community.

