Search Control in MiniZinc

2020
pp. 165-175
Author(s): Mark Wallace
1993
Author(s): Jihie Kim, Paul S. Rosenbloom

Author(s): Yangchen Pan, Hengshuai Yao, Amir-massoud Farahmand, Martha White

Dyna is an architecture for model-based reinforcement learning (RL), where simulated experience from a model is used to update policies or value functions. A key component of Dyna is search control, the mechanism that generates the state and action from which the agent queries the model, and it remains largely unexplored. In this work, we propose to generate such states by using the trajectory obtained from Hill Climbing (HC) on the current estimate of the value function. This has the effect of propagating value from high-value regions and of preemptively updating value estimates of the regions the agent is likely to visit next. We derive a noisy projected natural gradient algorithm for hill climbing and highlight a connection to Langevin dynamics. We provide an empirical demonstration on four classical domains that our algorithm, HC Dyna, can obtain significant improvements in sample efficiency. We study the properties of different sampling distributions for search control and find that there appears to be a benefit specifically from using the samples generated by climbing on current value estimates from low-value to high-value regions.
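Below is a minimal sketch of the hill-climbing idea described in this abstract, assuming a differentiable value-function estimate and a box-constrained state space; the function names, the gradient oracle `grad_value_fn`, and all hyperparameters are illustrative placeholders rather than the authors' exact algorithm.

```python
# Sketch only: noisy gradient ascent on a learned value estimate to
# generate search-control states for Dyna-style planning.
import numpy as np

def hill_climb_states(value_fn, grad_value_fn, start_state, low, high,
                      n_steps=20, step_size=0.1, noise_scale=0.01):
    """Generate search-control states by noisy hill climbing on V(s)."""
    states = []
    s = np.array(start_state, dtype=float)
    for _ in range(n_steps):
        # Ascend the current value estimate, with Gaussian noise added
        # (a Langevin-style perturbation that keeps the trajectory spread out).
        g = grad_value_fn(s)
        s = s + step_size * g + noise_scale * np.random.randn(*s.shape)
        # Project back onto the valid state region (here, a simple box).
        s = np.clip(s, low, high)
        states.append(s.copy())
    return states
```

In a Dyna loop, the returned states (paired with actions, e.g. greedy or random) would be fed to the learned model to produce simulated transitions for planning updates.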


2006
Vol 25
pp. 17-74
Author(s): S. Thiebaux, C. Gretton, J. Slaney, D. Price, F. Kabanza

A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic planning, where many desirable behaviours are more naturally expressed as properties of execution sequences rather than as properties of states, NMRDPs form a more natural model than the commonly adopted fully Markovian decision process (MDP) model. While the more tractable solution methods developed for MDPs do not directly apply in the presence of non-Markovian rewards, a number of solution methods for NMRDPs have been proposed in the literature. These all exploit a compact specification of the non-Markovian reward function in temporal logic to automatically translate the NMRDP into an equivalent MDP, which is then solved using efficient MDP solution methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process Planner), a software platform for developing and experimenting with methods for decision-theoretic planning with non-Markovian rewards. The current version of NMRDPP implements, under a single interface, a family of methods based on existing as well as new approaches, which we describe in detail. These include dynamic programming, heuristic search, and structured methods. Using NMRDPP, we compare the methods and identify certain problem features that affect their performance. NMRDPP's treatment of non-Markovian rewards is inspired by the treatment of domain-specific search control knowledge in the TLPlan planner, which it incorporates as a special case. In the First International Probabilistic Planning Competition, NMRDPP was able to compete and perform well in both the domain-independent and hand-coded tracks, using search control knowledge in the latter.
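The following sketch illustrates, under simplifying assumptions, the general state-augmentation idea behind such translations: a small automaton tracks the relevant piece of history, and its mode is appended to the state so that the reward becomes Markovian over the product space. The toy automaton and reward below are hypothetical and are not NMRDPP's actual temporal-logic compilation.

```python
# Sketch only: making a history-dependent reward Markovian by augmenting
# the state with an automaton mode that summarises the relevant history.
from dataclasses import dataclass

@dataclass(frozen=True)
class AugmentedState:
    base: str   # original MDP state
    mode: int   # automaton mode summarising the relevant history

def automaton_step(mode: int, base_state: str) -> int:
    """Toy automaton: mode becomes 1 once the agent has ever visited 'key'."""
    return 1 if (mode == 1 or base_state == "key") else 0

def augmented_reward(state: AugmentedState) -> float:
    """History-dependent reward 'reach goal after visiting key',
    expressed as a Markovian reward on the augmented state."""
    return 1.0 if (state.base == "goal" and state.mode == 1) else 0.0

def augmented_transition(state: AugmentedState, next_base: str) -> AugmentedState:
    """Advance the base state and the automaton mode together."""
    return AugmentedState(next_base, automaton_step(state.mode, next_base))
```

The resulting augmented MDP can then be handed to any standard MDP solver, which is the role the efficient MDP solution methods play in the abstract above.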


2016
Vol 25 (04)
pp. 1650028
Author(s): Amol D. Mali, Minh Tang

A* search and its variants have been used in various fields for solving problems with large search spaces where state transitions occur through the application of operators. The key values in A* search are g(n) and h(n), where g(n) is the cost of the path from the root (or start) node to node n, and h(n) is the estimated cost of the cheapest path from n to a goal. In this paper, we report on a space of variants of A* based on the following ideas: (i) using weighting functions for g(n) and h(n), (ii) evaluating different nodes with different heuristics, (iii) evaluating nodes with computationally cheap heuristics and re-evaluating some nodes with computationally expensive heuristics, and (iv) changing the size of the set of nodes from which the node to be expanded next is selected. We report on the bounds on the costs of solutions found by these variants of A*. We also report on the bounds for meta-variants of A* that invoke these variants sequentially. We show how the results can be used to obtain more flexible search control without increasing the bound on the cost of the solution found by a variant or a meta-variant.
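As one illustration of idea (i), the sketch below implements a weighted variant of A* in which g(n) and h(n) are scaled by constant weights; the graph interface, weight values, and heuristic are assumed placeholders, not the paper's exact formulation.

```python
# Sketch only: best-first search expanding the node that minimises
# w_g * g(n) + w_h * h(n).
import heapq
import itertools

def weighted_a_star(start, goal_test, successors, heuristic, w_g=1.0, w_h=1.5):
    """Return (path, cost) found by weighted A*, or (None, inf) on failure."""
    tie = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(w_h * heuristic(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path, g
        for succ, cost in successors(node):
            g_new = g + cost
            if g_new < best_g.get(succ, float("inf")):
                best_g[succ] = g_new
                f = w_g * g_new + w_h * heuristic(succ)
                heapq.heappush(frontier, (f, next(tie), g_new, succ, path + [succ]))
    return None, float("inf")
```

With w_g = w_h = 1 this reduces to ordinary A*; with w_g = 1, w_h > 1 and an admissible heuristic, the classical weighted-A* guarantee applies: the returned solution costs at most w_h times the optimum, which is the kind of cost bound the abstract discusses for its variants.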


1998
Vol 101 (1-2)
pp. 63-98
Author(s): Christopher Leckie, Ingrid Zukerman
