Optimizing maintenance decisions in railway wheelsets: A Markov decision process approach

Author(s):  
Joaquim AP Braga ◽  
António R Andrade

This article models the decision problem of maintaining railway wheelsets as a Markov decision process, with the aim of supporting condition-based maintenance for railway wheelsets. The role of railway wheelsets is discussed, along with background on the technical standards that guide maintenance decisions. A practical example is explored, in which Markov transition matrices are estimated for different condition states that depend on the wheelset diameter, the mileage since the last turning action (or renewal), and damage occurrence. Considering all possible maintenance actions, an optimal strategy is derived, providing a map of best actions depending on the current state of the wheelset.
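The kind of "map of best actions" the abstract describes can be computed by value iteration over the condition states. The sketch below is purely illustrative: the three condition states, the transition probabilities, and the costs are hypothetical stand-ins, not the estimates from the article.

```python
# Illustrative value-iteration sketch for a condition-based maintenance MDP.
# All states, probabilities, and costs are hypothetical, not the paper's data.

states = ["good", "worn", "critical"]
actions = ["no_action", "turn", "renew"]

# P[a][s] -> list of (next_state_index, probability) under action a in state s
P = {
    "no_action": {0: [(0, 0.8), (1, 0.2)],
                  1: [(1, 0.7), (2, 0.3)],
                  2: [(2, 1.0)]},
    "turn":      {0: [(0, 1.0)],
                  1: [(0, 0.9), (1, 0.1)],
                  2: [(1, 0.8), (2, 0.2)]},
    "renew":     {0: [(0, 1.0)], 1: [(0, 1.0)], 2: [(0, 1.0)]},
}
# Immediate cost per (action, state); running a critical wheelset is costly,
# renewal is the most expensive action.
C = {
    "no_action": [0.0, 1.0, 10.0],
    "turn":      [2.0, 2.0, 4.0],
    "renew":     [8.0, 8.0, 8.0],
}

def value_iteration(gamma=0.95, tol=1e-8):
    """Return the optimal cost-to-go and the best action for each state."""
    V = [0.0] * len(states)
    while True:
        V_new, policy = [], []
        for s in range(len(states)):
            cost, act = min(
                (C[a][s] + gamma * sum(p * V[t] for t, p in P[a][s]), a)
                for a in actions
            )
            V_new.append(cost)
            policy.append(act)
        if max(abs(x - y) for x, y in zip(V_new, V)) < tol:
            return V_new, policy
        V = V_new

values, policy = value_iteration()
print(policy)  # one recommended action per condition state
```

With these toy numbers the resulting policy leaves a good wheelset alone and re-profiles (turns) it once wear or damage appears; changing the cost structure shifts the boundary between actions.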

Author(s):  
Jan Buermann ◽  
Jie Zhang

In full-knowledge multi-robot adversarial patrolling, a group of robots has to detect an adversary who knows the robots' strategy. The adversary can easily exploit any deterministic patrolling strategy, which necessitates the employment of a randomised strategy. While the Markov decision process has been the dominant methodology for computing penetration detection probabilities, we apply enumerative combinatorics to characterise these probabilities. This allows us to provide closed formulae for the probabilities and facilitates characterising optimal random defence strategies. Compared with iteratively updating the Markov transition matrices, our method significantly reduces the time and space complexity of solving the problem. We use this method to tackle four penetration configurations.


MENDEL ◽  
2017 ◽  
Vol 23 (1) ◽  
pp. 141-148 ◽  
Author(s):  
Ondrej Grunt ◽  
Jan Plucar ◽  
Marketa Stakova ◽  
Tomas Janecko ◽  
Ivan Zelinka

This paper presents an application of the Markov decision process (MDP) method to the modeling of selected marketing processes. Based on available realistic data, an MDP model is constructed. Customer behavior is represented by a set of model states with assigned rewards corresponding to the expected return value. Outgoing arcs then represent the actions available to the customer in the current state. The favourable-outcome rate of the available actions is then analysed, with emphasis on the suitability of the model for future predictions of customer behavior.


2007 ◽  
Vol 2007 ◽  
pp. 1-31 ◽  
Author(s):  
P. T. Kabamba ◽  
W.-C. Lin ◽  
S. M. Meerkov

This paper models a decision maker as a rational probabilistic decider (RPD) and investigates its behavior in stationary and symmetric Markov switch environments. RPDs take their decisions based on penalty functions defined by the environment. The quality of decision making depends on a parameter referred to as the level of rationality. The dynamic behavior of RPDs is described by an ergodic Markov chain. Two classes of RPDs are considered: local and global. The former take their decisions based on the penalty in the current state, while the latter consider all states. It is shown that asymptotically (in time and in the level of rationality) both classes behave quite similarly. However, the second-largest eigenvalue of the Markov transition matrices for global RPDs is smaller than that for local ones, indicating faster convergence to the optimal state. As an illustration, the behavior of a chief executive officer, modeled as a global RPD, is considered, and it is shown that the company performance may or may not be optimized, depending on the pay structure employed. While the current paper investigates individual RPDs, a companion paper will address collective behavior.
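The convergence claim rests on a standard fact about ergodic Markov chains: the magnitude of the second-largest eigenvalue governs how fast the chain mixes. For a 2-state chain with transition matrix [[1-a, a], [b, 1-b]] the eigenvalues are 1 and 1-a-b, so the effect can be checked in closed form. The two parameter choices below are hypothetical stand-ins for a slow ("local") and a fast ("global") chain, not the paper's models.

```python
# Closed-form check that a smaller second eigenvalue magnitude means faster
# convergence to the stationary distribution, for a 2-state ergodic chain
# with transition matrix [[1-a, a], [b, 1-b]]. Parameters are illustrative.

def second_eigenvalue(a, b):
    """Second eigenvalue of [[1-a, a], [b, 1-b]]; the first is always 1."""
    return 1.0 - a - b

def distance_to_stationary(a, b, steps, p0=1.0):
    """|P(state 0 at time t) - stationary probability of state 0|.

    Exact for the 2-state chain: the deviation from the stationary
    distribution shrinks by a factor lambda_2 at every step.
    """
    pi0 = b / (a + b)                  # stationary probability of state 0
    lam = second_eigenvalue(a, b)
    return abs(p0 - pi0) * abs(lam) ** steps

slow = abs(second_eigenvalue(0.1, 0.1))   # |lambda_2| = 0.8, slow mixing
fast = abs(second_eigenvalue(0.4, 0.4))   # |lambda_2| = 0.2, fast mixing
print(slow, fast)
print(distance_to_stationary(0.1, 0.1, 5), distance_to_stationary(0.4, 0.4, 5))
```

After the same number of steps, the chain with the smaller second eigenvalue is far closer to its stationary distribution, which is the mechanism behind the "faster convergence to the optimal state" observed for global RPDs.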


2020 ◽  
pp. 13-19
Author(s):  
N.A. Mahutov ◽  
I.V. Gadolina ◽  
S.G. Lebedinskiy ◽  
E.S. Oganyan ◽  
A.A. Bautin

Methods and approaches to testing under random loading are considered, and their role is characterized. To ensure the random nature of the loading, a modeling method is proposed that is based on Markov transition matrices and on real processes recorded in operation. Keywords: random loading process, Markov transition matrices, resource estimation, corrected linear hypothesis, parameter of completeness of the loading spectrum.
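The proposed idea, a transition matrix estimated from a recorded in-service process and then used to generate random loading realisations, can be sketched in a few lines. The recorded sequence and the three amplitude levels below are hypothetical; a real application would discretise a measured load-time history.

```python
# Minimal sketch: estimate a Markov transition matrix from a load history
# discretised into amplitude levels, then simulate new random load sequences.
# The recorded sequence here is a made-up example, not measured data.
import random

def estimate_transitions(seq, n_levels):
    """Row-stochastic transition matrix from observed level-to-level counts."""
    counts = [[0] * n_levels for _ in range(n_levels)]
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

def simulate(P, start, steps, rng):
    """Generate a synthetic load-level sequence by sampling the chain."""
    state, out = start, [start]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in enumerate(P[state]):
            acc += p
            if r < acc:
                break
        state = nxt  # falls through to the last level on float round-off
        out.append(state)
    return out

recorded = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1, 1, 2, 1, 0]  # hypothetical levels
P = estimate_transitions(recorded, 3)
synthetic = simulate(P, start=0, steps=20, rng=random.Random(42))
```

Sequences generated this way preserve the level-to-level transition statistics of the recorded process, which is what makes the approach suitable for reproducing the random character of operational loading on a test rig.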


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1385
Author(s):  
Irais Mora-Ochomogo ◽  
Marco Serrato ◽  
Jaime Mora-Vargas ◽  
Raha Akhavan-Tabatabaei

Natural disasters represent a latent threat to every country in the world. Due to climate change and other factors, statistics show that they continue to be on the rise. This situation challenges communities and humanitarian organizations to be better prepared and to react faster to natural disasters. In some countries, in-kind donations represent a high percentage of the supply for these operations, which presents additional challenges. This research proposes a Markov decision process (MDP) model to represent operations in collection centers, where in-kind donations are received, sorted, packed, and sent to the affected areas. The decision addressed is when to send a shipment, considering the uncertainty of the donations' supply and of the demand, as well as the logistics costs and the penalty for unsatisfied demand. As a result of the MDP, a Monotone Optimal Non-Decreasing Policy (MONDP) is proposed, which provides valuable insights for decision-makers within this field. Moreover, the necessary conditions to prove the existence of such a MONDP are presented.
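A monotone optimal non-decreasing policy of the kind the abstract refers to is a threshold rule: once shipping becomes optimal at some inventory level, it remains optimal at every higher level. The toy model below illustrates how such a structure emerges from value iteration; the inventory cap, arrival probability, and cost figures are invented for the sketch and are not the paper's calibration.

```python
# Toy collection-center MDP (hypothetical costs): each period one donation
# unit arrives with probability q; the decision is to hold (pay holding cost)
# or ship everything (pay a fixed cost, gain value per unit, reset to empty).

N = 5                     # maximum inventory of sorted donations
q = 0.5                   # probability one donation unit arrives per period
gamma = 0.9               # discount factor
h, K, r = 1.0, 4.0, 2.0   # holding cost/unit, fixed ship cost, value/unit shipped

def value_iteration(tol=1e-9):
    """Minimise expected discounted cost; return values and hold/ship policy."""
    V = [0.0] * (N + 1)
    while True:
        V_new, policy = [], []
        for s in range(N + 1):
            hold = h * s + gamma * (q * V[min(s + 1, N)] + (1 - q) * V[s])
            ship = K - r * s + gamma * (q * V[1] + (1 - q) * V[0])
            if hold <= ship:
                V_new.append(hold); policy.append("hold")
            else:
                V_new.append(ship); policy.append("ship")
        if max(abs(x - y) for x, y in zip(V_new, V)) < tol:
            return V_new, policy
        V = V_new

values, policy = value_iteration()
print(policy)  # hold at low inventory, ship above a threshold
```

With these numbers the optimal policy holds at low inventory and ships from a threshold level upward, and it never switches back to holding at a higher level, which is exactly the monotone non-decreasing structure.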

