Reduction Techniques for Model Checking Markov Decision Processes

Author(s):  
Frank Ciesinski ◽  
Christel Baier ◽  
Marcus Größer ◽  
Joachim Klein
2014 ◽  
Vol 36 (12) ◽  
pp. 2587-2600

Author(s):  
Cong-Hua ZHOU ◽  
Zhi-Hu XING ◽  
Zhi-Feng LIU ◽  
Chang-Da WANG
2012 ◽  
Vol 103 ◽  
pp. 49-63

Author(s):  
Hua Mao ◽  
Yingke Chen ◽  
Manfred Jaeger ◽  
Thomas D. Nielsen ◽  
Kim G. Larsen ◽  
...  

Author(s):  
Kousha Etessami ◽  
Marta Kwiatkowska ◽  
Moshe Vardi ◽  
Mihalis Yannakakis

Author(s):  
Mohammadsadegh Mohagheghi ◽  
Khayyam Salehi

Probabilistic model checking is a formal verification method used to guarantee the correctness of computer systems with stochastic behavior. Reachability probabilities are the main class of properties considered in probabilistic model checking. Graph-based pre-computations can determine those states for which the reachability probability is exactly zero or one; iterative numerical methods are then used to compute the reachability probabilities for the remaining states. In this paper, we focus on the graph-based pre-computations and propose a heuristic to improve their performance. The proposed heuristic approximates the set of states that the standard pre-computation methods compute. The experiments show that the proposed heuristic computes most of the expected states while reducing the running time by several orders of magnitude.
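For concreteness, the qualitative part of this analysis, the standard Prob0 pre-computation for maximal reachability, can be phrased as a plain backward graph search: a state has maximal reachability probability zero exactly when no policy reaches the target from it with positive probability, which depends only on the transition graph, not on the probability values. Below is a minimal Python sketch, assuming an MDP encoded as a dict from states to actions to successor sets; it is an illustration of the standard pre-computation, not this paper's heuristic.

```python
# Minimal sketch of the standard Prob0 pre-computation for maximal
# reachability.  Illustrative only: the MDP encoding (state -> action
# -> set of successors) and all names are assumptions, not the
# paper's implementation.  Probability values play no role here;
# only the underlying transition graph matters.

def prob0_max(mdp, target):
    """Return the states whose maximal reachability probability is 0.

    A state can reach `target` with positive probability under some
    policy iff it lies in the backward closure of `target` in the
    transition graph; the complement of that closure is Prob0.
    """
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for state, actions in mdp.items():
            if state in reach:
                continue
            # Some action has a successor already known to reach
            # the target, so `state` can reach it as well.
            if any(succs & reach for succs in actions.values()):
                reach.add(state)
                changed = True
    return set(mdp) - reach


# Tiny usage example: s2 is an absorbing sink, so it can never
# reach the target s1, while s0 can (via action "a").
example = {
    "s0": {"a": {"s0", "s1"}, "b": {"s2"}},
    "s1": {"a": {"s1"}},
    "s2": {"a": {"s2"}},
}
print(prob0_max(example, {"s1"}))  # -> {'s2'}
```

The dual Prob1 pre-computation, which identifies states whose maximal reachability probability is exactly one, is similarly graph-based; the heuristic described in the abstract approximates the output of such fixed-point computations.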


Author(s):  
Suda Bharadwaj ◽  
Stephane Le Roux ◽  
Guillermo Perez ◽  
Ufuk Topcu

Omega-regular objectives in Markov decision processes (MDPs) reduce to reachability: find a policy which maximizes the probability of reaching a target set of states. Given an MDP, an initial distribution, and a target set of states, such a policy can be computed by most probabilistic model checking tools. If the MDP is only partially specified, i.e., some probabilities are unknown, then model-learning techniques can be used to statistically approximate the probabilities and enable the computation of the desired policy. For fully specified MDPs, reducing the size of the MDP translates into faster model checking; for partially specified MDPs, into faster learning. We provide reduction techniques that allow us to remove irrelevant transition probabilities: transition probabilities (known, or to be learned) that do not influence the maximal reachability probability. Among other applications, these reductions can be seen as a pre-processing of MDPs before model checking or as a way to reduce the number of experiments required to obtain a good approximation of an unknown MDP.
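As a concrete reading of "irrelevant transition probabilities" (a sketch under assumed encodings, not the authors' reduction algorithm): once graph-based pre-computation has classified a state's maximal reachability probability as exactly 0 or 1, the probability values on its outgoing distributions cannot affect the answer, so they need neither be stored nor learned. A hypothetical Python illustration:

```python
# Hypothetical illustration: given the Prob0/Prob1 sets produced by
# graph-based pre-computation, identify the transition probabilities
# that cannot influence the maximal reachability probability.  The
# encoding (state -> action -> {successor: probability}) and all
# names are assumptions for this sketch.

def irrelevant_transitions(mdp, prob0, prob1):
    """Return (state, action) pairs whose probability values are
    irrelevant: the value of any state in prob0 or prob1 is fixed by
    the graph structure alone, so its outgoing distributions need
    neither be known exactly nor learned."""
    fixed = prob0 | prob1
    return {(s, a) for s in fixed for a in mdp.get(s, {})}


def collapse_classified(mdp, prob0, prob1):
    """Replace the distributions of classified states by a Dirac
    self-loop, shrinking what a model learner has to estimate."""
    return {
        s: ({"loop": {s: 1.0}} if s in prob0 or s in prob1 else actions)
        for s, actions in mdp.items()
    }


# Usage: s1 has maximal reachability probability 1 (it is the target)
# and s2 has probability 0, so neither state's distribution matters.
mdp = {
    "s0": {"a": {"s1": 0.5, "s2": 0.5}},
    "s1": {"a": {"s1": 1.0}},
    "s2": {"a": {"s2": 1.0}},
}
print(irrelevant_transitions(mdp, prob0={"s2"}, prob1={"s1"}))
# -> {('s1', 'a'), ('s2', 'a')}
```

In the model-learning setting described above, the pairs returned by irrelevant_transitions correspond to experiments that can be skipped entirely.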

