Semi-Markov Decision Process With Partial Information for Maintenance Decisions

2014 ◽  
Vol 63 (4) ◽  
pp. 891-898 ◽  
Author(s):  
Rengarajan Srinivasan ◽  
Ajith Kumar Parlikad


Author(s):
Joaquim AP Braga ◽  
António R Andrade

This article models the decision problem of maintaining railway wheelsets as a Markov decision process, with the aim of supporting condition-based maintenance for railway wheelsets. The role of railway wheelsets is discussed, together with background on the technical standards that guide maintenance decisions. A practical example is explored, estimating Markov transition matrices for condition states that depend on the wheelset diameter, the mileage since the last turning (or renewal), and the occurrence of damage. Considering all possible maintenance actions, an optimal strategy is derived, providing a map of the best action for each state of the wheelset.
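The kind of optimal-action map described above can be sketched with a small value-iteration example. The condition states, actions, transition matrices, costs, and discount factor below are illustrative assumptions for the sketch, not figures from the article.

```python
import numpy as np

# Illustrative maintenance MDP: three condition states and three actions.
# All transition probabilities and costs are assumed, not taken from the article.
states = ["good", "worn", "damaged"]
actions = ["nothing", "turn", "renew"]

# P[a][s, s'] = probability of moving from state s to s' under action a
P = {
    "nothing": np.array([[0.90, 0.08, 0.02],
                         [0.00, 0.85, 0.15],
                         [0.00, 0.00, 1.00]]),
    "turn":    np.array([[0.95, 0.04, 0.01],
                         [0.90, 0.08, 0.02],
                         [0.70, 0.25, 0.05]]),
    "renew":   np.array([[1.00, 0.00, 0.00]] * 3),  # renewal restores "good"
}
# Immediate cost of each action in each state (renewal is the most expensive)
cost = {
    "nothing": np.array([0.0, 1.0, 10.0]),
    "turn":    np.array([2.0, 2.0, 4.0]),
    "renew":   np.array([8.0, 8.0, 8.0]),
}

gamma = 0.95  # discount factor
V = np.zeros(len(states))
for _ in range(500):  # value iteration
    Q = np.array([cost[a] + gamma * P[a] @ V for a in actions])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

# Map of best actions by condition state
policy = {s: actions[i] for s, i in zip(states, Q.argmin(axis=0))}
print(policy)
```

With these assumed numbers, the cheapest policy keeps the wheelset running while it is in good condition and turns it once wear or damage appears; shifting the relative costs of turning and renewal shifts the map accordingly.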


AI Magazine ◽  
2012 ◽  
Vol 33 (4) ◽  
pp. 82 ◽  
Author(s):  
Prashant J. Doshi

Decision making is a key feature of autonomous systems. It involves choosing optimally between different lines of action in various information contexts, ranging from perfect knowledge of all aspects of the decision problem to only partial knowledge about it. The physical context often includes other interacting autonomous systems, typically called agents. In this article, I focus on decision making in a multiagent context with partial information about the problem. Relevant research in this complex but realistic setting has converged around two complementary, general frameworks and has introduced myriad specializations along the way. I put the two frameworks, the decentralized partially observable Markov decision process (Dec-POMDP) and the interactive partially observable Markov decision process (I-POMDP), in context and review the foundational algorithms for both, while briefly discussing advances in their specializations. I conclude by examining the avenues that research on these frameworks is pursuing.


1984 ◽  
Vol 16 (1) ◽  
p. 9
Author(s):  
Kyung Y. Jo ◽  
Arnold Greenland

We consider a system of three queues in which an arriving customer, if rejected from one queue, is assigned to another, so that the workload is shared among the queues. The objective function combines holding costs, routing costs, and customer service rewards. We first establish the characteristics of optimal control policies via a Markov decision process formulation. Next, we consider decomposed problems with partial information for each server and compare their results with those of the original problem. Appropriate combinations of the optimal solutions of the decomposed problems are then used either to approximate the centralized optimal policy or to determine a good starting policy for successive approximation of the multidimensional Markov decision process. Numerical results for specific models are also presented.
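The decomposed single-server subproblem lends itself to a compact sketch: an admission-control queue solved by successive approximation (value iteration). The arrival and service rates, holding cost, reward, and cap below are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

# Decomposed single-queue admission problem (uniformized discrete time):
# state = queue length, decision = admit or reject an arriving customer.
# All rates, costs, and rewards are assumed for this sketch.
lam, mu = 0.4, 0.6   # arrival / service probabilities (lam + mu = 1)
h, r = 1.0, 5.0      # holding cost per customer per period, admission reward
N = 20               # queue-length cap
gamma = 0.98         # discount factor

V = np.zeros(N + 1)
for _ in range(5000):  # successive approximation of the value function
    Vn = np.empty_like(V)
    for s in range(N + 1):
        admit = (-r + V[s + 1]) if s < N else np.inf  # reward as negative cost
        reject = V[s]
        serve = V[max(s - 1, 0)]
        Vn[s] = h * s + gamma * (lam * min(admit, reject) + mu * serve)
    if np.max(np.abs(Vn - V)) < 1e-10:
        V = Vn
        break
    V = Vn

# The optimal single-queue policy is a threshold: admit while the queue is short
threshold = N
for s in range(N):
    if -r + V[s + 1] > V[s]:  # first state where rejecting beats admitting
        threshold = s
        break
print("admit arrivals while queue length <", threshold)
```

In the three-queue setting of the article, a customer rejected by one such threshold policy would be routed to another queue at a routing cost, and the combined decomposed policies would seed the successive approximation of the full multidimensional problem.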


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1385
Author(s):  
Irais Mora-Ochomogo ◽  
Marco Serrato ◽  
Jaime Mora-Vargas ◽  
Raha Akhavan-Tabatabaei

Natural disasters represent a latent threat to every country in the world, and due to climate change and other factors, statistics show that they continue to rise. This situation challenges communities and humanitarian organizations to be better prepared and to react faster to natural disasters. In some countries, in-kind donations represent a high percentage of the supply for relief operations, which poses additional challenges. This research proposes a Markov Decision Process (MDP) model of the operations in collection centers, where in-kind donations are received, sorted, packed, and sent to the affected areas. The decision addressed is when to send a shipment, considering the uncertainty of the donations' supply and of the demand, as well as the logistics costs and the penalty for unsatisfied demand. From the MDP, a Monotone Optimal Non-Decreasing Policy (MONDP) is derived, which provides valuable insights for decision-makers in this field. Moreover, the conditions necessary to prove the existence of such a MONDP are presented.
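A monotone (threshold) dispatch policy of the kind described above can be illustrated with a small MDP. The inventory cap, shipment cost, per-pallet penalty, and donation distribution below are assumptions for the sketch, not the article's model or its formal existence conditions.

```python
import numpy as np

# Illustrative collection-center dispatch MDP: state = packed pallets on hand,
# action = 0 (hold) or 1 (ship everything). All parameters are assumed for
# this sketch, not taken from the article.
N = 30                               # inventory cap (pallets)
K = 25.0                             # fixed logistics cost per shipment
c = 3.5                              # per-pallet, per-period cost of holding
                                     # pallets back (holding + unmet-demand penalty)
arrivals = {0: 0.3, 1: 0.4, 2: 0.3}  # donation distribution per period
gamma = 0.95                         # discount factor

V = np.zeros(N + 1)
act = np.zeros(N + 1, dtype=int)
for _ in range(2000):  # value iteration
    Vn = np.empty_like(V)
    for s in range(N + 1):
        hold = c * s + gamma * sum(q * V[min(s + a, N)] for a, q in arrivals.items())
        ship = K + gamma * sum(q * V[min(a, N)] for a, q in arrivals.items())
        Vn[s] = min(hold, ship)
        act[s] = int(ship < hold)
    if np.max(np.abs(Vn - V)) < 1e-9:
        V = Vn
        break
    V = Vn

# The optimal action is non-decreasing in the inventory level:
# hold below a threshold, ship at or above it.
threshold = int(np.argmax(act))
print("ship once inventory reaches", threshold, "pallets")
```

Because the per-period cost of holding grows linearly with the inventory on hand while the shipment cost is fixed, the optimal action never switches back from "ship" to "hold" as inventory grows, which is exactly the monotone structure a MONDP captures.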

