Parallel Hierarchical Pre-Gauss-Seidel Value Iteration Algorithm

2018 ◽ Vol 10 (2) ◽ pp. 1-22
Author(s): Sanaa Chafik, Abdelhadi Larach, Cherki Daoui

The standard Value Iteration (VI) algorithm, referred to as the Value Iteration Pre-Jacobi (PJ-VI) algorithm, is the simplest Value Iteration scheme and the best-known algorithm for solving Markov Decision Processes (MDPs). In the literature, several versions of the VI algorithm have been developed to reduce the number of iterations: the VI Jacobi (VI-J) algorithm, the Value Iteration Pre-Gauss-Seidel (VI-PGS) algorithm, and the VI Gauss-Seidel (VI-GS) algorithm. In this article, the authors combine the advantages of the VI Pre-Gauss-Seidel algorithm, the decomposition technique, and parallelism to propose a new Parallel Hierarchical VI Pre-Gauss-Seidel algorithm. Experimental results show that their approach outperforms the traditional VI schemes when the global problem can be decomposed into smaller subproblems.
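The schemes differ mainly in how each Bellman backup reads the current value estimates. Below is a minimal sketch (illustrative names, not the authors' code) contrasting a standard Pre-Jacobi sweep, which reads only the previous iterate, with a Gauss-Seidel-style sweep that reuses values already updated in the same pass; the in-place variant tends to propagate information faster and therefore needs fewer sweeps.

```python
# A minimal sketch (illustrative names, not the authors' code): P[a, s, s'] are
# transition probabilities, R[a, s] rewards, gamma the discount factor.
import numpy as np

def pre_jacobi_sweep(P, R, V, gamma):
    """One sweep that reads only the value vector from the previous iteration."""
    n_actions, n_states, _ = P.shape
    V_new = np.empty(n_states)
    for s in range(n_states):
        V_new[s] = max(R[a, s] + gamma * P[a, s] @ V for a in range(n_actions))
    return V_new

def gauss_seidel_sweep(P, R, V, gamma):
    """One sweep that reuses values already updated earlier in the same pass."""
    n_actions, n_states, _ = P.shape
    V = V.copy()
    for s in range(n_states):
        V[s] = max(R[a, s] + gamma * P[a, s] @ V for a in range(n_actions))
    return V
```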

2015 ◽ Vol 13 (3) ◽ pp. 47-57
Author(s): Sanaa Chafik, Cherki Daoui

As many real applications involve a very large number of states, classical methods are intractable for solving large Markov Decision Processes. The decomposition technique, based on the topology of each state in the associated graph, and the parallelization technique are very useful for coping with this problem. In this paper, the authors propose a Modified Value Iteration algorithm augmented with parallelism. They test their implementation on artificial data using OpenMP, which offers a significant speed-up.
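Within a synchronous sweep, the backups of different states depend only on the previous value vector, so the state blocks produced by the decomposition can be processed concurrently. A rough Python analogue of that idea is sketched below (the paper itself uses OpenMP; P, R, gamma, and the list of state-index blocks are illustrative names).

```python
# A rough Python analogue of the parallel sweep (the paper uses OpenMP in C/C++).
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backup_block(block, P, R, V, gamma):
    """Bellman backups for one block of states, reading only the previous V."""
    return block, np.max(R[:, block] + gamma * P[:, block, :] @ V, axis=0)

def parallel_sweep(P, R, V, gamma, blocks, pool):
    """One synchronous sweep with the blocks processed concurrently."""
    V_new = np.empty_like(V)
    futures = [pool.submit(backup_block, b, P, R, V, gamma) for b in blocks]
    for f in futures:
        block, values = f.result()
        V_new[block] = values
    return V_new

# Usage sketch:
#   with ThreadPoolExecutor() as pool:
#       for _ in range(n_sweeps):
#           V = parallel_sweep(P, R, V, gamma, blocks, pool)
```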


Author(s): Mahsa Ghasemi, Ufuk Topcu

In conventional partially observable Markov decision processes, the observations that the agent receives originate from fixed, known distributions. However, in a variety of real-world scenarios, the agent plays an active role in its perception by selecting which observations to receive. We avoid the combinatorial expansion of the action space that would result from integrating planning and perception decisions by using a greedy strategy for observation selection that minimizes an information-theoretic measure of state uncertainty. We develop a novel point-based value iteration algorithm that incorporates this greedy strategy to pick perception actions for each sampled belief point in each iteration. As a result, not only does the solver require fewer belief points to approximate the reachable subspace of the belief simplex, but it also requires less computation per iteration. Further, we prove that the proposed algorithm achieves a near-optimal guarantee on the value function with respect to an optimal perception strategy, and we demonstrate its performance empirically.
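A minimal sketch of the greedy selection step, under assumed names (a belief vector over states and candidate observation models Z with Z[s, o] = P(o | s)); this is not the authors' implementation, only the entropy-minimization idea applied at a single belief point.

```python
# A minimal sketch (assumed names, not the authors' implementation): pick the
# candidate observation model whose expected posterior belief entropy is smallest.
import numpy as np

def expected_posterior_entropy(belief, Z):
    """Z[s, o] = P(o | s); returns E_o[ H(belief updated on o) ]."""
    p_obs = belief @ Z                            # marginal probability of each observation
    expected_entropy = 0.0
    for o, p_o in enumerate(p_obs):
        if p_o > 1e-12:
            posterior = belief * Z[:, o] / p_o    # Bayes update of the belief
            nz = posterior[posterior > 0]
            expected_entropy += p_o * -(nz * np.log(nz)).sum()
    return expected_entropy

def greedy_observation(belief, candidate_models):
    """Index of the candidate observation model that minimizes expected entropy."""
    scores = [expected_posterior_entropy(belief, Z) for Z in candidate_models]
    return int(np.argmin(scores))
```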


2001 ◽ Vol 14 ◽ pp. 29-51
Author(s): N. L. Zhang, W. Zhang

Partially observable Markov decision processes (POMDPs) have recently become popular among many AI researchers because they serve as a natural model for planning under uncertainty. Value iteration is a well-known algorithm for finding optimal policies for POMDPs. It typically takes a large number of iterations to converge. This paper proposes a method for accelerating the convergence of value iteration. The method has been evaluated on an array of benchmark problems and was found to be very effective: It enabled value iteration to converge after only a few iterations on all the test problems.
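For context, convergence is usually measured with the Bellman residual; a standard stopping rule (shown below as a small sketch, not the paper's acceleration technique) halts value iteration once successive iterates are close enough to guarantee an epsilon-optimal value function.

```python
# Background sketch (a standard stopping rule, not the paper's acceleration
# technique): stop once the Bellman residual guarantees an epsilon-optimal value.
def converged(V_new, V_old, gamma, epsilon):
    residual = max(abs(a - b) for a, b in zip(V_new, V_old))
    return residual < epsilon * (1.0 - gamma) / (2.0 * gamma)
```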


Author(s): Yaodong Ni, Zhi-Qiang Liu

Partially observable Markov decision processes (POMDPs) are powerful for planning under uncertainty. However, it is usually impractical to employ a POMDP with exact parameters to model a real-life situation precisely, for various reasons such as limited data for learning the model and the inability of exact POMDPs to model dynamic situations. In this paper, assuming that the parameters of POMDPs are imprecise but bounded, we formulate the framework of bounded-parameter partially observable Markov decision processes (BPOMDPs). A modified value iteration is proposed as a basic strategy for tackling parameter imprecision in BPOMDPs. In addition, we design the UL-based value iteration algorithm, in which each value backup is based on two sets of vectors called the U-set and the L-set. We propose four strategies for computing the U-set and L-set. We analyze the computational complexity and the reward loss of the algorithm theoretically. The effectiveness and robustness of the algorithm are shown empirically.
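Bounded-parameter backups have a simpler, fully observable analogue that conveys the core idea. The sketch below (assumed names; not the authors' U-set/L-set construction over alpha-vectors) shows how a pessimistic backup selects the worst transition distribution consistent with the interval bounds.

```python
# A much-simplified sketch for the fully observable bounded-parameter case
# (assumed names; not the authors' U-set/L-set construction over alpha-vectors).
import numpy as np

def worst_case_expectation(V, p_lo, p_hi):
    """Minimal expected value of V over distributions p with p_lo <= p <= p_hi
    and sum(p) = 1 (order-maximization over the interval bounds)."""
    p = np.array(p_lo, dtype=float)
    slack = 1.0 - p.sum()
    for s in np.argsort(V):                  # push remaining mass onto low-value states
        add = min(p_hi[s] - p[s], slack)
        p[s] += add
        slack -= add
        if slack <= 0.0:
            break
    return p @ V

def lower_backup(s, V_lo, P_lo, P_hi, R, gamma):
    """Pessimistic Bellman backup for the lower value bound at state s."""
    n_actions = R.shape[0]
    return max(R[a, s] + gamma * worst_case_expectation(V_lo, P_lo[a, s], P_hi[a, s])
               for a in range(n_actions))
```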


2008 ◽ Vol 31 ◽ pp. 431-472
Author(s): C. Wang, S. Joshi, R. Khardon

Markov decision processes capture sequential decision making under uncertainty, where an agent must choose actions so as to optimize long-term reward. The paper studies efficient reasoning mechanisms for Relational Markov Decision Processes (RMDPs), where world states have an internal relational structure that can be naturally described in terms of objects and relations among them. Two contributions are presented. First, the paper develops First-Order Decision Diagrams (FODDs), a new compact representation for functions over relational structures, together with a set of operators to combine FODDs and novel reduction techniques to keep the representation small. Second, the paper shows how FODDs can be used to develop solutions for RMDPs, where reasoning is performed at the abstract level and the resulting optimal policy is independent of domain size (number of objects) or instantiation. In particular, a variant of the value iteration algorithm is developed by using special operations over FODDs, and the algorithm is shown to converge to the optimal policy.
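A heavily simplified propositional analogue (assumed structure, not FODDs themselves) of the diagram machinery is sketched below: ordered decision-diagram nodes, a pointwise apply operator for combining value functions, and a basic reduction that collapses redundant tests; the paper lifts this kind of apparatus to the first-order, relational setting.

```python
# A heavily simplified propositional analogue (assumed structure, not FODDs):
# ordered decision-diagram nodes, a pointwise `apply` for combining value
# functions, and a basic reduction that collapses redundant tests.
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def apply_op(f, g, op):
    """Combine two diagrams pointwise with `op` (e.g. max or operator.add)."""
    if isinstance(f, Leaf) and isinstance(g, Leaf):
        return Leaf(op(f.value, g.value))
    fv = f.var if isinstance(f, Node) else None
    gv = g.var if isinstance(g, Node) else None
    var = min(v for v in (fv, gv) if v is not None)   # split on earliest-ordered variable
    f_lo, f_hi = (f.low, f.high) if fv == var else (f, f)
    g_lo, g_hi = (g.low, g.high) if gv == var else (g, g)
    low, high = apply_op(f_lo, g_lo, op), apply_op(f_hi, g_hi, op)
    if isinstance(low, Leaf) and isinstance(high, Leaf) and low.value == high.value:
        return low                                    # redundant test: collapse it
    return Node(var, low, high)
```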


2016 ◽ Vol 138 (6)
Author(s): Thai Duong, Duong Nguyen-Huu, Thinh Nguyen

The Markov decision process (MDP) is a well-known framework for devising optimal decision-making strategies under uncertainty. Typically, the decision maker assumes a stationary environment, characterized by a time-invariant transition probability matrix. However, in many real-world scenarios this assumption is not justified, so the optimal strategy might not provide the expected performance. In this paper, we study the performance of the classic value iteration algorithm for solving an MDP problem under nonstationary environments. Specifically, the nonstationary environment is modeled as a sequence of time-variant transition probability matrices governed by an adiabatic evolution inspired by quantum mechanics. We characterize the performance of the value iteration algorithm subject to the rate of change of the underlying environment, measured in terms of the convergence rate to the optimal average reward. We show two examples of queueing systems that make use of our analysis framework.
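An illustrative sketch of the adiabatic setting (assumed interface, not the paper's queueing models): the transition kernel drifts slowly between two stochastic matrices while value iteration performs one backup per environment step, so the value estimates can track the moving optimum when the drift is slow enough.

```python
# Illustrative sketch (assumed interface): the transition kernel drifts slowly
# between two stochastic matrices while value iteration does one backup per step.
import numpy as np

def sweep(P, R, V, gamma):
    """One synchronous Bellman backup under the current kernel P[a, s, s']."""
    return np.max(R + gamma * P @ V, axis=0)

def track_nonstationary(P_start, P_end, R, gamma, horizon):
    V = np.zeros(R.shape[1])
    for t in range(horizon):
        alpha = t / max(horizon - 1, 1)                 # slow, adiabatic-style schedule
        P_t = (1.0 - alpha) * P_start + alpha * P_end   # convex mix stays stochastic
        V = sweep(P_t, R, V, gamma)                     # one backup per environment step
    return V
```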

