Fast Strong Planning for FOND Problems with Multi-Root Directed Acyclic Graphs

2014 ◽  
Vol 23 (06) ◽  
pp. 1460028 ◽  
Author(s):  
Andres Calderon Jaramillo ◽  
Jicheng Fu ◽  
Vincent Ng ◽  
Farokh B. Bastani ◽  
I-Ling Yen

Recently, state-of-the-art AI planners have significantly improved planning efficiency on Fully Observable Nondeterministic (FOND) planning problems with strong cyclic solutions. A strong cyclic solution is guaranteed to achieve the goal only if it terminates, meaning it may run into an indefinite loop. In contrast, strong solutions are guaranteed to achieve the goal, yet few planners can effectively handle FOND problems requiring strong solutions. In this study, we address this difficult yet under-investigated class of planning problems: FOND planning problems with strong solutions. We present a planner that employs a new data structure, the multi-root directed acyclic graph (MRDAG), to define how the solution space should be expanded. Based on the characteristics of the MRDAG, we develop heuristics that steer planning toward the relevant search direction and design optimizations that prune the search space to further improve planning efficiency. We perform extensive experiments to evaluate the MRDAG, the heuristics, and the pruning optimizations. Experimental results show that our strong-planning algorithm achieves impressive performance on a variety of benchmark problems: on average it runs more than three orders of magnitude faster than the state-of-the-art planners MBP and Gamer, while demonstrating significantly better scalability.
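
The abstract leaves the MRDAG structure unspecified; as a rough Python illustration of the core invariant, namely that a strong solution must stay acyclic while growing from several roots, consider the minimal hypothetical sketch below (all names are illustrative assumptions, not the paper's).

```python
class MRDAG:
    def __init__(self, roots):
        self.roots = set(roots)   # several entry states rather than one
        self.edges = {}           # state -> set of successor states

    def _would_cycle(self, src, dst):
        # Adding src -> dst creates a cycle iff dst already reaches src.
        stack, seen = [dst], set()
        while stack:
            s = stack.pop()
            if s == src:
                return True
            if s not in seen:
                seen.add(s)
                stack.extend(self.edges.get(s, ()))
        return False

    def add_edge(self, src, dst):
        # Expand the solution structure only if acyclicity is preserved,
        # which is what rules out the indefinite loops of cyclic solutions.
        if self._would_cycle(src, dst):
            return False
        self.edges.setdefault(src, set()).add(dst)
        return True
```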

Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 407 ◽  
Author(s):  
Dominik Weikert ◽  
Sebastian Mai ◽  
Sanaz Mostaghim

In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS), a Particle Swarm Optimisation inspired algorithm that finds object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete overview of the search space in which the contour is to be found; for real-world applications, however, such complete knowledge of the search space may not always be feasible to obtain. The proposed algorithm removes this requirement and relies only on the particles' local information to accurately identify a contour. Particles first search for the contour of an object and then traverse alongside it, using their known positions inside and outside the object. Our experiments show that the proposed PSCS algorithm delivers results comparable to the state of the art.
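
As a hedged illustration of the local-information principle the article relies on (not the PSCS swarm itself), the following Python sketch shows how a single particle can localise a contour point using nothing but an inside/outside membership test; the disc-shaped object is an assumption for the demo.

```python
import math

def inside(p):
    # Local membership test only -- a stand-in for a sensor reading.
    # Assumed demo object: a disc of radius 1 centred at the origin.
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def bisect_to_contour(p_in, p_out, tol=1e-6):
    # Shrink an inside/outside pair until it brackets the contour:
    # this needs no global view of the search space at all.
    while math.dist(p_in, p_out) > tol:
        mid = ((p_in[0] + p_out[0]) / 2, (p_in[1] + p_out[1]) / 2)
        if inside(mid):
            p_in = mid
        else:
            p_out = mid
    return p_in

print(bisect_to_contour((0.0, 0.0), (2.0, 0.0)))  # ~(1.0, 0.0)
```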


Author(s):  
Rung-Tzuo Liaw ◽  
Chuan-Kang Ting

Evolutionary multitasking is a significant emerging search paradigm that utilizes evolutionary algorithms to optimize multiple tasks concurrently. The multi-factorial evolutionary algorithm renders an effectual realization of evolutionary multitasking on two or three tasks, but there remains room for improvement in the performance and capability of evolutionary multitasking. To go beyond three tasks, this paper proposes a novel framework, called symbiosis in biocoenosis optimization (SBO), to address evolutionary many-tasking optimization. The SBO leverages the notion of symbiosis in a biocoenosis to transfer information and knowledge among different tasks through three major components: 1) transferring information through inter-task individual replacement, 2) measuring symbiosis through inter-task paired evaluations, and 3) coordinating the frequency and quantity of transfer based on the measured symbiosis. The inter-task individual replacement with paired evaluations caters for the estimation of symbiosis, while the symbiosis in biocoenosis provides a good estimator of transfer. This study examines the effectiveness and efficiency of the SBO on a suite of many-tasking benchmark problems designed to deal with 30 tasks simultaneously. The experimental results show that the SBO leads to better solutions and faster convergence than state-of-the-art evolutionary multitasking algorithms. Moreover, the results indicate that the SBO is highly capable of identifying the similarity between problems and transferring information appropriately.
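
The following Python sketch is a loose, hypothetical rendering of the first two SBO components, inter-task individual replacement and paired evaluations; the function names, the migration rate, and the smoothing constant are our assumptions, not the paper's.

```python
import random

def paired_transfer(pop_a, pop_b, fit_b, symbiosis, rate=0.1):
    # Migrate a few individuals from task A's population into task B's,
    # using paired evaluations on B's objective (maximisation assumed)
    # to update a smoothed estimate of how much the transfer helps.
    for _ in range(max(1, int(rate * len(pop_b)))):
        migrant = random.choice(pop_a)
        j = random.randrange(len(pop_b))
        gain = fit_b(migrant) - fit_b(pop_b[j])   # paired evaluation
        symbiosis = 0.9 * symbiosis + 0.1 * gain  # smoothed estimate
        if gain > 0:
            pop_b[j] = migrant                    # inter-task replacement
    return symbiosis
```

The returned symbiosis estimate could then drive the third component, coordinating how often and how much to transfer.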


Author(s):  
Zhijian Luo ◽  
Siyu Chen ◽  
Yuntao Qian

In blind image deconvolution, priors are often leveraged to constrain the solution space and thereby alleviate the under-determinacy of the problem. Priors that are trained separately from the deconvolution task, however, tend to be unstable. We propose the Golf Optimizer, a novel yet simple network that learns deep priors from data with better propagation behavior. Like playing golf, our method first estimates an aggressive propagation towards the optimum using one network, and then recurrently applies a residual CNN that learns the gradient of the prior for delicate correction of the restoration. Experiments show that our network achieves competitive performance on the GoPro dataset, and our model is extremely lightweight compared with state-of-the-art works.
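
A toy numerical analogue (not the authors' network) may help fix the drive-then-putt intuition: one aggressive step towards the optimum followed by small recurrent corrections. The step sizes below are arbitrary assumptions.

```python
import numpy as np

def golf_style_descent(grad, x0, coarse_step=0.45, fine_step=0.1, rounds=20):
    # Aggressive first estimate ("drive"), then recurrent small
    # corrections ("putts"), mirroring the paper's golf intuition.
    x = x0 - coarse_step * grad(x0)
    for _ in range(rounds):
        x = x - fine_step * grad(x)
    return x

# Example: minimise f(x) = ||x||^2, whose gradient is 2x.
print(golf_style_descent(lambda x: 2 * x, np.array([4.0, -3.0])))
```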


Author(s):  
Yanchen Deng ◽  
Bo An

Incomplete GDL-based algorithms including Max-sum and its variants are important methods for multi-agent optimization. However, they face a significant scalability challenge, as the computational overhead grows exponentially with the arity of each utility function. The Generic Domain Pruning (GDP) technique reduces the computational effort by performing one-shot pruning to filter out suboptimal entries. Unfortunately, GDP can perform poorly when dealing with dense local utilities and ties, which widely exist in many domains. In this paper, we present several novel sorting-based acceleration algorithms that alleviate the effect of densely distributed local utilities. Specifically, instead of GDP's one-shot pruning, we propose to integrate search and pruning to iteratively reduce the search space. Moreover, we cope with utility ties by organizing the search space of tied utilities into AND/OR trees to enable branch-and-bound. Finally, we propose a discretization mechanism that offers a tradeoff between reconstruction overhead and pruning efficiency. We demonstrate the superiority of our algorithms over the state of the art from both theoretical and experimental perspectives.
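
To make the search-plus-pruning idea concrete, here is a hedged Python sketch of sorting-based branch-and-bound over a single utility table; it is not the authors' algorithm, and `complete` and `context_bound` are assumed problem-specific inputs.

```python
def best_entry(entries, complete, context_bound):
    # entries       : list of (local_utility, assignment) pairs
    # complete      : exact utility contributed by the rest of the problem
    # context_bound : upper bound on complete() over all assignments
    best_val, best_asg = float("-inf"), None
    for u, asg in sorted(entries, key=lambda e: e[0], reverse=True):
        if u + context_bound <= best_val:
            break                     # all later entries are dominated too
        val = u + complete(asg)
        if val > best_val:
            best_val, best_asg = val, asg
    return best_asg

# Toy usage: the scan stops after the first entry proves the rest hopeless.
print(best_entry([(9, "a"), (5, "b"), (1, "c")], lambda asg: 0, 0))  # "a"
```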


Author(s):  
Marlene Goncalves ◽  
María Esther Vidal

Criteria that induce a Skyline naturally represent a user's preference conditions, which are useful for discarding irrelevant data in large datasets. However, in the presence of high-dimensional Skyline spaces, the size of the Skyline can still be very large. To identify the best k points among the Skyline, the Top-k Skyline approach has been proposed. This chapter describes existing solutions and proposes the TKSI algorithm for the Top-k Skyline problem. TKSI reduces the search space by computing only the subset of the Skyline required to produce the top-k objects. In addition, the Skyline Frequency metric is implemented to discriminate, among the Skyline objects, those that best meet the multidimensional criteria. The chapter's authors have empirically studied the quality of TKSI, and their experimental results show that TKSI can speed up the computation of the Top-k Skyline by at least 50% with respect to state-of-the-art solutions.
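
A naive Python sketch of the Top-k Skyline notion follows; unlike TKSI it materialises the full skyline, and it ranks points by how many others they dominate as a simple stand-in for the Skyline Frequency metric.

```python
def dominates(p, q):
    # p dominates q if p is no worse in every dimension and strictly
    # better in at least one (minimisation assumed).
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def top_k_skyline(points, k):
    skyline = [p for p in points
               if not any(dominates(q, p) for q in points)]
    score = {p: sum(dominates(p, q) for q in points) for p in skyline}
    return sorted(skyline, key=score.get, reverse=True)[:k]

pts = [(1, 5), (2, 2), (5, 1), (4, 4), (3, 3)]
print(top_k_skyline(pts, 2))   # [(2, 2), (1, 5)] (order of ties may vary)
```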


Author(s):  
Cunjing Ge ◽  
Feifei Ma ◽  
Xutong Ma ◽  
Fan Zhang ◽  
Pei Huang ◽  
...  

Solution counting and solution space quantification (i.e., volume computation and volume estimation) for linear constraints (LCs) have found interesting applications in various fields. Experimental data show that counting integer solutions is usually more expensive than quantifying the volume of the solution space, while the two output values are close. It is therefore helpful to approximate the number of integer solutions by the volume when the error is acceptable. In this paper, we present and prove a bound on this error for LCs; it is the first bound that can be used to approximate integer solution counts. Based on this result, we propose an approximate integer solution counting method for LCs. Experiments show that our approach is over 20x faster than state-of-the-art integer solution counters, and this advantage increases with the problem scale.
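
The core observation can be reproduced in miniature: for a small polytope, the exact lattice-point count and the (estimated) volume are already of the same scale, and the relative gap narrows as the polytope grows. The 2-D triangle below is an illustrative assumption, not one of the paper's benchmarks.

```python
import itertools, random

# Toy polytope {x : A x <= b}: the triangle x >= 0, y >= 0, x + y <= 10.
A = [(-1, 0), (0, -1), (1, 1)]
b = [0, 0, 10]

def feasible(x):
    return all(sum(a * v for a, v in zip(row, x)) <= bi
               for row, bi in zip(A, b))

# Exact integer count by enumeration (tractable only in tiny dimension).
count = sum(feasible(p) for p in itertools.product(range(11), repeat=2))

# Monte Carlo volume estimate over the bounding box [0, 10]^2.
n = 100_000
hits = sum(feasible((random.uniform(0, 10), random.uniform(0, 10)))
           for _ in range(n))
volume = 100.0 * hits / n

print(count, round(volume, 1))  # 66 lattice points vs. area ~50.0
```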


2012 ◽  
Vol 45 ◽  
pp. 565-600 ◽  
Author(s):  
R. I. Brafman ◽  
G. Shani

Replanning via determinization is a recent, popular approach for online planning in MDPs. In this paper we adapt this idea to classical, non-stochastic domains with partial information and sensing actions, presenting a new planner: SDR (Sample, Determinize, Replan). At each step we generate a solution plan to a classical planning problem induced by the original problem. We execute this plan as long as it is safe to do so, and when it is no longer safe, we replan. The classical planning problem we generate is based on the translation-based approach to conformant planning introduced by Palacios and Geffner, in which the state of the generated classical problem captures the belief state of the agent in the original problem. Unfortunately, when this method is applied to planning problems with sensing, it yields a planning problem that is both non-deterministic and typically very large. Our main contribution is the introduction of state-sampling techniques for overcoming these two problems. In addition, we introduce a novel, lazy, regression-based method for querying the agent's belief state at run-time. We provide a comprehensive experimental evaluation of the planner, showing that it scales better than the state-of-the-art CLG planner on existing benchmark problems, while also highlighting its weaknesses on new domains. We also discuss its theoretical guarantees.
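
Schematically, the planner's outer loop might look as follows; this is a hypothetical skeleton only, with the translation, sampling, and regression machinery hidden behind assumed `ops.*` callbacks.

```python
from types import SimpleNamespace  # callers can bundle callbacks this way

def sdr_loop(belief, ops):
    # Hypothetical skeleton of the Sample-Determinize-Replan cycle.
    # ops.goal, ops.sample, ops.determinize, ops.plan, ops.safe,
    # ops.execute and ops.update are assumed problem-specific callbacks.
    while not ops.goal(belief):
        # Sample states from the current belief, build the induced
        # classical problem, and solve it with a classical planner.
        classical = ops.determinize(ops.sample(belief))
        for action in ops.plan(classical):
            if not ops.safe(action, belief):
                break                         # plan no longer safe: replan
            obs = ops.execute(action)         # act, possibly sensing
            belief = ops.update(obs, belief)  # lazy belief-state query
    return belief
```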


Author(s):  
Feng Wu ◽  
Shlomo Zilberstein ◽  
Xiaoping Chen

We propose a novel baseline-regret minimization algorithm for multi-agent planning problems modeled as finite-horizon decentralized POMDPs. It is guaranteed to produce a policy that is provably better than, or at least equivalent to, the baseline policy. We also propose an iterative belief generation algorithm that effectively and efficiently minimizes the baseline regret, requiring only as many iterations as are necessary to converge to the policy with minimum baseline regret. Experimental results on common benchmark problems confirm its advantage over state-of-the-art approaches.
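
As a hedged sketch of the quantity being minimized: the baseline regret of a candidate policy can be read as its worst-case value loss against the baseline over a set of beliefs, and a non-positive regret certifies the better-or-equivalent guarantee on that set. All names below are ours, not the paper's.

```python
def baseline_regret(candidate_value, baseline_value, beliefs):
    # Worst-case value loss of the candidate against the baseline.
    return max(baseline_value(b) - candidate_value(b) for b in beliefs)

def select_policy(candidates, value_of, baseline_value, beliefs):
    # Keep the candidate with minimum baseline regret; a result <= 0
    # means the chosen policy never does worse than the baseline on
    # the tested beliefs.
    return min(candidates,
               key=lambda pi: baseline_regret(value_of(pi),
                                              baseline_value, beliefs))
```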


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Giovanni Micale ◽  
Giorgio Locicero ◽  
Alfredo Pulvirenti ◽  
Alfredo Ferro

Temporal networks are graphs in which each edge is associated with a timestamp denoting when two nodes interact. Temporal Subgraph Isomorphism (TSI) aims at retrieving all the subgraphs of a temporal network (called the target) matching a smaller temporal network (called the query), such that matched target edges appear in the same chronological order as the corresponding query edges. Few algorithms have been proposed to solve the TSI problem (or variants of it), and most of them are applicable only to small or specific queries. In this paper we present TemporalRI, a new subgraph isomorphism algorithm for temporal networks with multiple contacts between nodes, inspired by the RI algorithm. TemporalRI introduces the notion of temporal flows and uses them to filter the search space of candidate nodes for the matching. Our algorithm can handle queries of any size and any topology. Experiments on real networks of different sizes show that TemporalRI is very efficient compared to the state of the art, especially for large queries and targets.
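
The chronological-order constraint that defines TSI is easy to state in code; the sketch below checks only that constraint (temporal flows and candidate filtering are elided) and, for simplicity, assumes a single timestamp per matched edge even though TemporalRI supports multiple contacts.

```python
def respects_order(query_edges, mapping, target_times):
    # query_edges  : query edges listed in chronological order
    # mapping      : query node -> matched target node
    # target_times : (target u, target v) -> timestamp of that contact
    times = [target_times[(mapping[u], mapping[v])]
             for u, v in query_edges]
    return all(t1 <= t2 for t1, t2 in zip(times, times[1:]))

q_edges = [("a", "b"), ("b", "c")]          # query, chronological order
m = {"a": 1, "b": 2, "c": 3}
print(respects_order(q_edges, m, {(1, 2): 5, (2, 3): 9}))  # True: 5 <= 9
```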


Author(s):  
Remi van der Laan ◽  
Leonardo Scandolo ◽  
Elmar Eisemann

Sparse Voxel Directed Acyclic Graphs (SVDAGs) losslessly compress highly detailed geometry in a high-resolution binary voxel grid by identifying matching elements. This representation is suitable for high-performance real-time applications such as free-viewpoint video and high-resolution precomputed shadows. In this work, we introduce a lossy scheme that further decreases memory consumption by minimally modifying the underlying voxel grid to increase the number of matches. Our method efficiently identifies groups of similar but rare subtrees in an SVDAG structure and replaces them with a single common representative subtree. We test our compression strategy on several standard voxel datasets and obtain memory reductions of 10% up to 50% compared to a standard SVDAG, while introducing an error (the ratio of modified voxels to total voxel count) of only 1% to 5%. Furthermore, we show that our method is complementary to other state-of-the-art SVDAG optimizations and has a negligible effect on real-time rendering performance.
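
As a toy illustration of the lossy idea, assume 8-bit leaf masks (2x2x2 occupancy) rather than whole subtrees: remapping each rare mask to a frequent mask within a small Hamming distance increases matches at the cost of a few flipped voxels. Everything below is a simplified assumption, not the paper's method.

```python
from collections import Counter

def merge_rare_subtrees(leaves, max_flips=1):
    # Remap each rare 8-bit occupancy mask to the most frequent mask
    # within `max_flips` bit flips, so that more nodes deduplicate.
    freq = Counter(leaves)
    ranked = [m for m, _ in freq.most_common()]
    remap = {}
    for mask in freq:
        for rep in ranked:
            if (freq[rep] > freq[mask]
                    and bin(mask ^ rep).count("1") <= max_flips):
                remap[mask] = rep    # replace rare with representative
                break
    return [remap.get(m, m) for m in leaves]

voxels = [0b00001111] * 90 + [0b00001110] * 5 + [0b10100000] * 5
merged = merge_rare_subtrees(voxels)
print(len(set(voxels)), "->", len(set(merged)))  # 3 -> 2 unique leaves
```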

