Interval-valued Markov Chain Abstraction of Stochastic Systems using Barrier Functions

Author(s): Maxence Dutreix, Cesar Santoyo, Matthew Abate, Samuel Coogan
2020, Vol 53 (2), pp. 2441-2446
Author(s): Niloofar Jahanshahi, Pushpak Jagtap, Majid Zamani

Author(s): Andreas A. Malikopoulos

The growing demand for autonomous intelligent systems that can learn to improve their performance while interacting with their environment has motivated significant research on computational cognitive models. Computational intelligence, or rationality, can be achieved by modeling a system and its interaction with the environment through actions, perceptions, and associated costs. A widely adopted paradigm for modeling this interaction is the controlled Markov chain. In this context, the problem is formulated as a sequential decision-making process in which an intelligent system must select control actions over several time steps to achieve long-term goals. This paper presents a rollout control algorithm that builds an online decision-making mechanism for a controlled Markov chain. The algorithm yields a suboptimal lookahead control policy, and under certain conditions a theoretical bound on its performance can be established.
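To illustrate the rollout idea summarized above, the sketch below performs one-step lookahead on a small controlled Markov chain: the base policy is evaluated exactly, and the rollout action minimizes the resulting Q-values. The transition tensor, stage costs, base policy, and discount factor are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

def evaluate_policy(P, cost, policy, gamma):
    """Cost-to-go of a fixed policy: solve (I - gamma * P_pi) J = c_pi."""
    n = P.shape[1]
    P_pi = np.array([P[policy[s], s] for s in range(n)])
    c_pi = np.array([cost[s, policy[s]] for s in range(n)])
    return np.linalg.solve(np.eye(n) - gamma * P_pi, c_pi)

def rollout_action(P, cost, policy, gamma, s):
    """One-step lookahead on top of the base policy's cost-to-go."""
    J = evaluate_policy(P, cost, policy, gamma)
    q = [cost[s, a] + gamma * P[a, s] @ J for a in range(P.shape[0])]
    return int(np.argmin(q))

# Hypothetical 3-state, 2-action controlled Markov chain: P[a, s, s'], cost[s, a].
P = np.array([[[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.3, 0.7]],
              [[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.1, 0.9]]])
cost = np.array([[2.0, 1.0],
                 [1.0, 3.0],
                 [0.0, 0.5]])
base_policy = np.array([0, 0, 0])   # heuristic base policy to be improved by rollout
print(rollout_action(P, cost, base_policy, gamma=0.9, s=1))
```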


1983, Vol 20 (3), pp. 482-504
Author(s): C. Cocozza-Thivent, C. Kipnis, M. Roussignol

We investigate how the property of null recurrence is preserved for Markov chains under a perturbation of the transition probability. After recalling some useful criteria in terms of the one-step transition kernel, we present two methods for constructing barrier functions: one in terms of taboo potentials of the unperturbed Markov chain, and the other based on Taylor's formula.
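Recurrence criteria of the kind referenced above are typically drift conditions on a barrier (Lyapunov-type) function. As an illustration only, and not the paper's taboo-potential construction, the sketch below numerically checks the one-step drift E[V(X_1) | X_0 = x] - V(x) for a reflected random walk and a vanishing perturbation of its kernel, using the hypothetical candidate V(x) = x; non-positive drift outside a finite set indicates recurrence (Foster's criterion).

```python
import numpy as np

def drift(p_up, x, V):
    """One-step drift E[V(X_1) | X_0 = x] - V(x) for a reflected random walk
    that moves up with probability p_up(x) and down with probability 1 - p_up(x)."""
    up = p_up(x)
    return up * V(x + 1) + (1.0 - up) * V(max(x - 1, 0)) - V(x)

V = lambda x: float(x)               # candidate barrier function

p0 = lambda x: 0.5                   # unperturbed kernel: symmetric, null recurrent walk
p1 = lambda x: 0.5 - 0.25 / (x + 1)  # perturbed kernel: bias vanishing like 1/x

for x in [1, 5, 20, 100]:
    print(x, drift(p0, x, V), drift(p1, x, V))
```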


Author(s): Javad Sovizi, Suren Kumar, Venkat Krovi

We present a computationally efficient approach for the intra-operative update of the feedback control policy for a steerable needle in the presence of motion uncertainty. Solving the dynamic programming (DP) equations to obtain the optimal control policy is difficult or intractable for nonlinear problems such as steering a flexible needle in soft tissue. We use the method of approximating Markov chains to approximate the continuous (and controlled) process with a discrete, locally consistent counterpart. This provides the basis for a linear programming (LP) approach to solving the resulting DP problem, which significantly reduces the computational demand. A concrete example of two-dimensional (2D) needle steering is considered to investigate the effectiveness of the LP method for both deterministic and stochastic systems. We compare the performance of the LP-based policy with results obtained through a more computationally demanding algorithm, iterative approximation in policy space. Finally, we investigate the reliability of the LP-based policy under motion and parametric uncertainties, as well as the effect of the insertion point/angle on the probability of success.
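To make the LP view of DP concrete, the sketch below solves a small discounted-cost problem exactly as a linear program, maximizing the sum of values subject to V(s) <= c(s,a) + gamma * sum_s' P(s'|s,a) V(s') for every state-action pair, and then extracts a greedy policy. The chain, costs, and discount factor are hypothetical placeholders, not the needle-steering model from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_dp_via_lp(P, cost, gamma):
    """Optimal cost-to-go of a discounted MDP via its exact LP formulation.
    P[a, s, s'] are transition probabilities, cost[s, a] are stage costs."""
    n_a, n_s, _ = P.shape
    # One inequality per (s, a): V(s) - gamma * sum_s' P(s'|s,a) V(s') <= cost(s, a).
    A_ub, b_ub = [], []
    for s in range(n_s):
        for a in range(n_a):
            row = -gamma * P[a, s]
            row[s] += 1.0
            A_ub.append(row)
            b_ub.append(cost[s, a])
    # The optimal value function maximizes sum(V) subject to the constraints above.
    res = linprog(c=-np.ones(n_s), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n_s, method="highs")
    V = res.x
    # Greedy policy extraction from the LP value function.
    Q = np.array([[cost[s, a] + gamma * P[a, s] @ V for a in range(n_a)]
                  for s in range(n_s)])
    return V, Q.argmin(axis=1)

# Hypothetical 3-state, 2-action test problem.
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.6, 0.4, 0.0], [0.0, 0.6, 0.4], [0.0, 0.1, 0.9]]])
cost = np.array([[1.0, 2.0], [2.0, 0.5], [0.0, 1.0]])
V, policy = solve_dp_via_lp(P, cost, gamma=0.95)
print(V, policy)
```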


Automatica
2005, Vol 41 (6), pp. 923-934
Author(s): S. Battilotti, A. De Santis

2007, Vol 49 (2), pp. 231-241
Author(s): Zhenting Hou, Hailing Dong, Peng Shi

In this paper, finite-phase semi-Markov processes are introduced. By introducing variables and a simple transformation, every finite-phase semi-Markov process can be transformed into a finite Markov chain, called its associated Markov chain. A consequence is that every phase semi-Markovian switching system may be equivalently expressed as its associated Markovian switching system, so existing results for Markovian switching systems can be applied to analyze phase semi-Markovian switching systems. We then obtain asymptotic stability in distribution for nonlinear stochastic systems with semi-Markovian switching. The results can also be extended to general semi-Markovian switching systems. Finally, an example is given to illustrate the feasibility and effectiveness of the theoretical results.
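The construction described above can be sketched as a state augmentation: when each mode's sojourn time has a discrete phase-type distribution, the pair (mode, phase) evolves as an ordinary Markov chain. The sketch below assembles that associated chain's transition matrix for a hypothetical two-mode example; the phase matrices, embedded transition matrix, and initial phase distributions are placeholders, not the paper's notation.

```python
import numpy as np

def associated_markov_chain(P_embed, T, alpha):
    """Transition matrix of the associated Markov chain on (mode, phase) pairs.
    P_embed[i, j]: embedded mode-transition probabilities.
    T[i]: sub-stochastic phase matrix of mode i's sojourn-time distribution.
    alpha[i]: initial phase distribution of mode i."""
    n_modes = len(T)
    phases = [t.shape[0] for t in T]
    offsets = np.cumsum([0] + phases)
    Q = np.zeros((offsets[-1], offsets[-1]))
    for i in range(n_modes):
        exit_prob = 1.0 - T[i].sum(axis=1)       # probability of leaving mode i from each phase
        block_i = slice(offsets[i], offsets[i + 1])
        Q[block_i, block_i] = T[i]               # stay in mode i, move between phases
        for j in range(n_modes):
            block_j = slice(offsets[j], offsets[j + 1])
            Q[block_i, block_j] += np.outer(exit_prob * P_embed[i, j], alpha[j])
    return Q

# Hypothetical two-mode example with 2-phase and 1-phase sojourn distributions.
P_embed = np.array([[0.0, 1.0], [1.0, 0.0]])
T = [np.array([[0.5, 0.3], [0.0, 0.6]]), np.array([[0.7]])]
alpha = [np.array([1.0, 0.0]), np.array([1.0])]
Q = associated_markov_chain(P_embed, T, alpha)
print(Q, Q.sum(axis=1))                          # rows sum to 1
```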

