Improving Graph-Based Methods for Computing Qualitative Properties of Markov Decision Processes

Author(s): Mohammadsadegh Mohagheghi, Khayyam Salehi

Probabilistic model checking is a formal verification method used to guarantee the correctness of computer systems with stochastic behavior. Reachability probabilities are the main class of properties considered in probabilistic model checking. Graph-based pre-computations can determine those states for which the reachability probability is exactly zero or one; iterative numerical methods are then used to compute the reachability probabilities for the remaining states. In this paper, we focus on these graph-based pre-computations and propose a heuristic to improve their performance. The proposed heuristic approximates the set of states that are computed by the standard pre-computation methods. Experiments show that the heuristic computes most of the expected states while reducing the running time by several orders of magnitude.
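The pre-computation this abstract refers to is typically the standard Prob0 step: a backward graph search from the target states, after which every state the search never reaches has maximal reachability probability exactly zero. Below is a minimal Python sketch of that idea; the MDP encoding (a dict mapping each state to a list of actions, each action a list of successor/probability pairs) is an assumption for illustration, not the authors' data structure or their proposed heuristic.

```python
from collections import deque

def prob0_max(transitions, targets, num_states):
    """States whose maximal reachability probability is exactly 0,
    i.e. states that cannot reach the target under any policy.

    transitions: dict state -> list of actions, each action a list of
    (successor, probability) pairs. Probabilities are unused here:
    only the graph structure matters for the qualitative check.
    """
    # Build the reverse edge relation: successor -> set of predecessors.
    preds = {s: set() for s in range(num_states)}
    for s, actions in transitions.items():
        for action in actions:
            for succ, _p in action:
                preds[succ].add(s)

    # Backward search from the targets: every state found here can
    # reach a target with positive probability under some policy.
    reach = set(targets)
    queue = deque(targets)
    while queue:
        s = queue.popleft()
        for p in preds[s]:
            if p not in reach:
                reach.add(p)
                queue.append(p)

    # Everything outside the backward-reachable set has probability 0.
    return set(range(num_states)) - reach
```

On a toy MDP where state 3 is an absorbing trap and state 2 is the target, `prob0_max` returns `{3}`; only states outside this set need the subsequent numerical iteration.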

2014, Vol. 36 (12), pp. 2587-2600
Author(s): Cong-Hua Zhou, Zhi-Hu Xing, Zhi-Feng Liu, Chang-Da Wang

2012, Vol. 103, pp. 49-63
Author(s): Hua Mao, Yingke Chen, Manfred Jaeger, Thomas D. Nielsen, Kim G. Larsen, ...

Author(s): Clemens Dubslaff, Patrick Koopmann, Anni-Yasmin Turhan

Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of state-based operational models such as Markov decision processes. Description logics (DLs) provide a well-suited formalism to describe and reason about knowledge and are used as the basis for the Web Ontology Language (OWL). We investigate how such knowledge described by DLs can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose ontologized programs as a formalism that links ontologies to behaviors specified by probabilistic guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Through DL reasoning, inconsistent states in the modeled system can be detected. We present three ways to resolve these inconsistencies, leading to different Markov decision process semantics. We analyze the computational complexity of checking whether an ontologized program is consistent under these semantics. Further, we present and implement a technique for the quantitative analysis of ontologized programs relying on standard DL reasoning and PMC tools. This way, we enable the application of PMC techniques to analyze knowledge-intensive systems. We evaluate our approach and implementation on a multi-server system case study, where different DL ontologies are used to provide specifications of different server platforms and situations the system is executed in.


Author(s): Kousha Etessami, Marta Kwiatkowska, Moshe Vardi, Mihalis Yannakakis

Author(s): Suda Bharadwaj, Stephane Le Roux, Guillermo Perez, Ufuk Topcu

Omega-regular objectives in Markov decision processes (MDPs) reduce to reachability: find a policy which maximizes the probability of reaching a target set of states. Given an MDP, an initial distribution, and a target set of states, such a policy can be computed by most probabilistic model checking tools. If the MDP is only partially specified, i.e., some probabilities are unknown, then model-learning techniques can be used to statistically approximate the probabilities and enable the computation of the desired policy. For fully specified MDPs, reducing the size of the MDP translates into faster model checking; for partially specified MDPs, into faster learning. We provide reduction techniques that allow us to remove irrelevant transition probabilities: transition probabilities (known, or to be learned) that do not influence the maximal reachability probability. Among other applications, these reductions can be seen as a pre-processing of MDPs before model checking or as a way to reduce the number of experiments required to obtain a good approximation of an unknown MDP.
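One concrete way such a reduction can work, once the states with reachability probability exactly 0 or 1 are known (for instance from the qualitative pre-computations described above), is to collapse each of those two sets into a single absorbing state: no transition probabilities inside the collapsed sets remain to be stored, checked, or learned. The following Python sketch illustrates that collapse under an assumed MDP encoding (a dict of action lists of successor/probability pairs); it is an illustrative simplification, not the paper's reduction algorithm.

```python
def collapse_known_states(transitions, prob0, prob1):
    """Shrink an MDP by merging all probability-0 states into one
    absorbing 'sink' and all probability-1 states into one 'goal'.

    transitions: dict state -> list of actions, each action a list of
    (successor, probability) pairs.
    prob0, prob1: sets of states with known reachability probability.
    """
    SINK, GOAL = "sink", "goal"
    reduced = {SINK: [[(SINK, 1.0)]], GOAL: [[(GOAL, 1.0)]]}

    def rename(s):
        if s in prob0:
            return SINK
        if s in prob1:
            return GOAL
        return s

    for s, actions in transitions.items():
        if s in prob0 or s in prob1:
            continue  # internal structure of known states is dropped
        new_actions = []
        for action in actions:
            # Merge probability mass that now flows to the same
            # collapsed successor.
            merged = {}
            for succ, p in action:
                t = rename(succ)
                merged[t] = merged.get(t, 0.0) + p
            new_actions.append(list(merged.items()))
        reduced[s] = new_actions
    return reduced
```

After the collapse, a model checker (or a learner) only handles the remaining "uncertain" states, which is exactly where the speed-up in both settings comes from.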

