Speeding Up Incomplete GDL-based Algorithms for Multi-agent Optimization with Dense Local Utilities

Author(s):  
Yanchen Deng ◽  
Bo An

Incomplete GDL-based algorithms, including Max-sum and its variants, are important methods for multi-agent optimization. However, they face a significant scalability challenge, as the computational overhead grows exponentially with the arity of each utility function. The Generic Domain Pruning (GDP) technique reduces this effort by performing a one-shot pruning pass that filters out suboptimal entries. Unfortunately, GDP can perform poorly when dealing with dense local utilities and ties, which are widespread in many domains. In this paper, we present several novel sorting-based acceleration algorithms that alleviate the effect of densely distributed local utilities. Specifically, instead of the one-shot pruning of GDP, we propose to interleave search and pruning to iteratively reduce the search space. In addition, we cope with utility ties by organizing the search space of tied utilities into AND/OR trees to enable branch-and-bound. Finally, we propose a discretization mechanism that offers a tradeoff between reconstruction overhead and pruning efficiency. We demonstrate the superiority of our algorithms over the state of the art from both theoretical and experimental perspectives.
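As a rough illustration of the interleaved search-and-prune idea described above, the following minimal sketch computes a Max-sum-style max-marginal: local utility entries are sorted once, then scanned in descending order while an optimistic bound from the incoming messages prunes the remainder. This is an illustrative reading under assumed data structures (`utility` as a list of assignment/value pairs, `incoming` as per-variable message tables), not the authors' implementation.

```python
# Minimal sketch of search interleaved with pruning (names are assumptions).

def max_marginal(utility, incoming, domains, target_var):
    """For each value of target_var, find the best total of local utility
    plus incoming messages over the remaining variables."""
    others = [v for v in domains if v != target_var]
    # Optimistic slack: the most the incoming messages could ever add.
    slack = sum(max(incoming[v].values()) for v in others)
    # Sort local entries once, highest utility first.
    entries = sorted(utility, key=lambda e: e[1], reverse=True)
    best = {d: float("-inf") for d in domains[target_var]}
    for assignment, u in entries:            # assignment: dict var -> value
        t = assignment[target_var]
        if u + slack <= min(best.values()):  # nothing later can improve
            break                            # (later entries are smaller)
        if u + slack <= best[t]:             # this target value is settled
            continue
        total = u + sum(incoming[v][assignment[v]] for v in others)
        best[t] = max(best[t], total)
    return best

domains = {"x": [0, 1], "y": [0, 1]}
utility = [({"x": a, "y": b}, float(a * 2 + b)) for a in (0, 1) for b in (0, 1)]
incoming = {"y": {0: 0.0, 1: 0.5}}
print(max_marginal(utility, incoming, domains, "x"))  # {0: 1.5, 1: 3.5}
```

The per-entry checks play the role of pruning while the scan itself is the search; the paper's AND/OR-tree treatment of ties and its discretization mechanism go beyond this sketch.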

2020 ◽  
Vol 23 (4) ◽  
pp. 3095-3117
Author(s):  
Amjad Ullah ◽  
Jingpeng Li ◽  
Amir Hussain

Elasticity in the cloud is essential to the effective management of computational resources, as it enables readjustment at runtime to meet application demands. Over the years, researchers and practitioners have proposed many auto-scaling solutions using techniques ranging from simple if-then-else rules to sophisticated optimisation, control-theoretic and machine-learning methods. However, despite this extensive body of elasticity research, implementing an efficient scaling technique that satisfies actual demand remains a challenge. Existing methods suffer from issues such as: (1) a lack of adaptability and static scaling behaviour in completely fixed approaches; (2) the burden of additional computational overhead, an inability to cope with sudden changes in workload behaviour, and the preference of adaptability over reliability at runtime in fully dynamic approaches; and (3) a lack of consideration of uncertainty when designing auto-scaling solutions. In this paper, we address these issues with a holistic biologically-inspired feedback switch controller. The method utilises multiple controllers and a switching mechanism, implemented as a fuzzy system, that selects a suitable controller at runtime. The fuzzy system also facilitates the design of qualitative elasticity rules. Furthermore, to reduce the oscillatory behaviour commonly associated with switching methodologies, the method integrates a biologically-inspired computational model of action selection. Lastly, we identify seven kinds of real workload patterns and use them to evaluate the proposed method against state-of-the-art approaches. The computational results demonstrate that the proposed method achieves better performance without incurring any additional cost in comparison to the state-of-the-art approaches.
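To make the switching mechanism concrete, here is a toy sketch of a fuzzy switch over multiple scaling controllers: fuzzy memberships over workload features activate each candidate controller, and the strongest activation wins. The controller names, the triangular membership shapes, and the `cpu_util`/`trend` features are illustrative assumptions, not the paper's rule base.

```python
# Toy fuzzy switch over scaling controllers (all names are assumptions).

def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_controller(cpu_util, trend):
    """Pick a controller from fuzzy activations of workload features."""
    activations = {
        "reactive_rules": min(tri(cpu_util, 0.0, 0.2, 0.5), tri(trend, -1.0, 0.0, 1.0)),
        "pi_controller":  min(tri(cpu_util, 0.3, 0.55, 0.8), tri(trend, -0.5, 0.0, 0.5)),
        "predictive":     min(tri(cpu_util, 0.6, 0.85, 1.01), tri(trend, 0.0, 0.6, 1.2)),
    }
    # Winner-take-all switch; naive switching like this can oscillate,
    # which the paper counters with an action-selection model.
    return max(activations, key=activations.get)

print(select_controller(cpu_util=0.9, trend=0.7))  # -> 'predictive'
```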


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 407 ◽  
Author(s):  
Dominik Weikert ◽  
Sebastian Mai ◽  
Sanaz Mostaghim

In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS), a Particle Swarm Optimisation inspired algorithm to find object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete overview of the search space in which the contour is to be found. For real-world applications, however, complete knowledge of the search space may not always be feasible or available. The proposed algorithm removes this requirement and relies only on the local information of the particles to accurately identify a contour. Particles search for the contour of an object and then traverse along it using their known information about positions inside and outside of the object. Our experiments show that the proposed PSCS algorithm delivers results comparable to the state of the art.
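The local-information principle can be sketched as follows: a particle only ever queries whether its own position is inside the object, and it steers toward the midpoint of its last known inside and outside samples, which bisects onto the boundary. This is an illustrative reading of the principle, not PSCS's actual velocity-update equations; the disc-shaped object and step constants are assumptions.

```python
import random

# Toy contour search using only local inside/outside queries.

def inside(p):                      # the "object": a disc of radius 1
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def step_particle(pos, last_in, last_out, eta=0.05):
    """Move toward the midpoint of the last inside/outside samples,
    plus a small random jitter so particles drift along the contour."""
    mid = ((last_in[0] + last_out[0]) / 2, (last_in[1] + last_out[1]) / 2)
    jitter = (random.uniform(-eta, eta), random.uniform(-eta, eta))
    new = (pos[0] + 0.5 * (mid[0] - pos[0]) + jitter[0],
           pos[1] + 0.5 * (mid[1] - pos[1]) + jitter[1])
    # The only sensing available: one inside/outside query at `new`.
    if inside(new):
        return new, new, last_out
    return new, last_in, new

pos, p_in, p_out = (0.0, 0.0), (0.0, 0.0), (2.0, 0.0)
for _ in range(100):
    pos, p_in, p_out = step_particle(pos, p_in, p_out)
print(pos)  # ends up near the unit circle, i.e. the object contour
```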


2005 ◽  
Vol 11 (3) ◽  
pp. 387-434 ◽  
Author(s):  
Liviu Panait ◽  
Sean Luke

Author(s):  
Hua Jiang ◽  
Dongming Zhu ◽  
Zhichao Xie ◽  
Shaowen Yao ◽  
Zhang-Hua Fu

Given an undirected graph, the Maximum k-plex Problem (MKP) is to find a largest induced subgraph in which each vertex has at most k−1 non-adjacent vertices. The problem arises in social network analysis and has found applications in many areas that employ graph-based data mining. Existing exact algorithms usually follow a branch-and-bound approach that requires a tight upper bound to reduce the search space. In this paper, we propose a new upper bound for MKP based on a partitioning of the candidate vertex set with respect to the solution under construction. We implement a new branch-and-bound algorithm that employs this upper bound to reduce the number of branches. Experimental results show that the upper bound is very effective in reducing the search space, and that the new algorithm significantly outperforms state-of-the-art algorithms on real-world massive graphs, DIMACS graphs and random graphs.
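For readers unfamiliar with the setting, here is a minimal branch-and-bound skeleton for MKP. The bound used below is the trivial |P| + |C|; the paper's contribution is precisely a much tighter bound obtained by partitioning the candidate set C with respect to the growing solution P, which this sketch does not reproduce.

```python
# Minimal branch-and-bound skeleton for the Maximum k-plex Problem.

def is_kplex(vertices, adj, k):
    """Each vertex may have at most k-1 non-neighbours inside the set."""
    return all(sum(1 for u in vertices if u != v and u not in adj[v]) <= k - 1
               for v in vertices)

def mkp(adj, k):
    best = []

    def search(P, C):
        nonlocal best
        if len(P) > len(best):
            best = list(P)
        while C:
            if len(P) + len(C) <= len(best):   # trivial upper bound: prune
                return
            v = C.pop()
            if is_kplex(P + [v], adj, k):       # branch that includes v
                search(P + [v], list(C))
            # falling through is the branch that excludes v

    search([], sorted(adj))
    return best

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(mkp(adj, 2))   # -> a maximum 2-plex such as [4, 3, 2]
```

Note that k-plexes are hereditary (every subset of a k-plex is a k-plex), which is what makes the incremental `is_kplex` check at each extension sound.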


2015 ◽  
Author(s):  
John Wiedenhoeft ◽  
Eric Brugel ◽  
Alexander Schliep

By combining Haar wavelets with Bayesian Hidden Markov Models, we improve detection of genomic copy number variants (CNV) in array CGH experiments compared to the state of the art, including standard Gibbs sampling. At the same time, we achieve drastically reduced running times, as the method concentrates computational effort on chromosomal segments that are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://bioinformatics.rutgers.edu/Software/HaMMLET/. The web supplement is at http://bioinformatics.rutgers.edu/Supplements/HaMMLET/.

Author Summary: Identifying large-scale genome deletions and duplications, or copy number variants (CNV), accurately in populations or individual patients is a crucial step in indicating disease factors or diagnosing an individual patient's disease type. Hidden Markov Models (HMM) are a type of statistical model widely used for CNV detection, as well as for other biological applications such as the analysis of gene expression time course data or of discrete-valued DNA and protein sequences.

As with many statistical models, there are two fundamentally different inference approaches. In the frequentist framework, a single estimate of the model parameters is used as the basis for subsequent inference, making the identification of CNV dependent on the quality of that estimate; this is an acute problem for HMM, as methods for finding globally optimal parameters are not known. Alternatively, one can take a Bayesian approach and integrate over all possible parameter choices. While the latter is known to yield significantly better results, its much larger computational effort, up to hundreds of times greater, has so far prevented wide adoption.

Our proposed method addresses this by combining Haar wavelets and HMM. We greatly accelerate fully Bayesian HMMs while simultaneously improving the convergence, and thus the accuracy, of the Gibbs sampler used for Bayesian computations, leading to substantial improvements over the state of the art.
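The speedup rests on compressing the signal into blocks of consecutive observations that likely share a copy-number state. As a minimal sketch of that idea, the snippet below thresholds finest-scale Haar-style detail coefficients to place block boundaries; HaMMLET itself recomputes blocks dynamically and adaptively inside the Gibbs sampler, so the function names and the static, single-scale thresholding here are simplifying assumptions.

```python
import numpy as np

# Sketch: wavelet-thresholded block boundaries for a piecewise-constant signal.

def haar_breakpoints(y, threshold):
    """Indices where the finest-scale Haar detail (~|y[i+1]-y[i]|/sqrt(2))
    exceeds the threshold, i.e. candidate block boundaries."""
    diffs = np.abs(np.diff(y))
    return [i + 1 for i, d in enumerate(diffs) if d / np.sqrt(2) > threshold]

def to_blocks(y, threshold):
    """Split y into (start, end, mean) blocks at the breakpoints."""
    cuts = [0] + haar_breakpoints(y, threshold) + [len(y)]
    return [(a, b, float(np.mean(y[a:b]))) for a, b in zip(cuts, cuts[1:]) if a < b]

y = np.concatenate([np.random.normal(0, .1, 200),   # copy-neutral
                    np.random.normal(1, .1, 50),    # duplication
                    np.random.normal(0, .1, 200)])  # copy-neutral
print(to_blocks(y, threshold=0.3))  # a handful of blocks, not 450 observations
```

The sampler can then treat each block as a single weighted observation, which is where the drastic reduction in running time comes from.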


2016 ◽  
Vol 31 (5) ◽  
pp. 417-428
Author(s):  
Oscar Sapena ◽  
Alejandro Torreño ◽  
Eva Onaindía

Most current top-performing planners are sequential planners that handle only total-order plans. Although this is a computationally efficient approach, managing total-order plans restricts the choices of reasoning and thus the generation of flexible plans. In this paper, we present FLAP2, a forward-chaining planner that follows the principles of the classical POCL (Partial-Order Causal-Link Planning) paradigm. Working with partial-order plans allows FLAP2 to easily manage the parallelism of plans, which brings several advantages: more flexible executions, shorter plan durations (makespan) and easy adaptation to new features like temporal or multi-agent planning. However, one limitation of POCL planners is that they require far more computational effort to deal with the interactions that arise among actions. FLAP2 minimizes this overhead with several techniques that improve its performance: the combination of different state-based heuristics and the use of parallel processes to diversify the search in different directions when a plateau is found. To evaluate the performance of FLAP2, we compare it with four state-of-the-art planners: SGPlan, YAHSP2, Temporal Fast Downward and OPTIC. Experimental results show that FLAP2 achieves a very acceptable trade-off between time and quality and high coverage on current planning benchmarks.
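The action interactions the abstract refers to are the classic POCL "threats". As a minimal sketch of the bookkeeping involved (FLAP2's actual structures and heuristics are more elaborate), a partial-order plan stores actions, ordering constraints, and causal links, and a threat arises when an action that deletes a link's condition could be ordered between the link's producer and consumer.

```python
from dataclasses import dataclass, field

# Minimal POCL plan bookkeeping (illustrative names and structures).

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset
    add: frozenset
    delete: frozenset

@dataclass
class PartialPlan:
    actions: list
    order: set = field(default_factory=set)   # pairs (a, b): a before b
    links: set = field(default_factory=set)   # (producer, fact, consumer)

    def threats(self):
        """Actions that delete a link's fact and may fall between its ends."""
        return [(c, link) for link in self.links for c in self.actions
                if link[1] in c.delete
                and c not in (link[0], link[2])
                and (link[2], c) not in self.order    # not forced after consumer
                and (c, link[0]) not in self.order]   # not forced before producer

a = Action("load",   frozenset(), frozenset({"loaded"}), frozenset())
b = Action("drive",  frozenset(), frozenset(), frozenset({"loaded"}))
c = Action("unload", frozenset({"loaded"}), frozenset(), frozenset())
plan = PartialPlan([a, b, c], order={(a, b)}, links={(a, "loaded", c)})
print(plan.threats())   # 'drive' threatens the causal link load --loaded--> unload
```

A threat is resolved by promotion (ordering the threat after the consumer) or demotion (ordering it before the producer); the extra search these repairs induce is the overhead FLAP2's heuristics aim to minimize.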


Acta Numerica ◽  
2013 ◽  
Vol 22 ◽  
pp. 1-131 ◽  
Author(s):  
Pietro Belotti ◽  
Christian Kirches ◽  
Sven Leyffer ◽  
Jeff Linderoth ◽  
James Luedtke ◽  
...  

Many optimal decision problems in scientific, engineering, and public sector applications involve both discrete decisions and nonlinear system dynamics that affect the quality of the final design or plan. These decision problems lead to mixed-integer nonlinear programming (MINLP) problems that combine the combinatorial difficulty of optimizing over discrete variable sets with the challenges of handling nonlinear functions. We review models and applications of MINLP, and survey the state of the art in methods for solving this challenging class of problems.

Most solution methods for MINLP apply some form of tree search. We distinguish two broad classes of methods: single-tree and multitree methods. We discuss these two classes first in the case where the underlying problem functions are convex. Classical single-tree methods include nonlinear branch-and-bound and branch-and-cut, while classical multitree methods include outer approximation and Benders decomposition. The most efficient methods for convex MINLP are hybrids that combine the strengths of both classes of classical techniques.

Non-convex MINLPs pose additional challenges because they contain non-convex functions in the objective or the constraints; hence even when the integer variables are relaxed to be continuous, the feasible region is generally non-convex, resulting in many local minima. We discuss a range of approaches for tackling this class of problems, including piecewise linear approximations, generic strategies for obtaining convex relaxations of non-convex functions, spatial branch-and-bound methods, and a small sample of techniques that exploit particular types of non-convex structure to obtain improved convex relaxations.

We finish our survey with a brief discussion of three important aspects of MINLP. First, we review heuristic techniques that can obtain good feasible solutions when the search tree has grown too large or real-time solutions are required. Second, we describe the emerging area of mixed-integer optimal control, which adds systems of ordinary differential equations to MINLP. Third, we survey the state of the art in software for MINLP.
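To illustrate the classical single-tree method the survey describes, here is a minimal nonlinear branch-and-bound on a toy convex MINLP: solve the continuous relaxation, prune by bound, and branch on a fractional integer variable. The instance and helper names are assumptions for illustration; production solvers add cuts, presolve, heuristics, and careful NLP warm starts.

```python
import math
from scipy.optimize import minimize

# Toy convex MINLP:
#   min (x-1.3)^2 + (y-2.6)^2   s.t.  x + y <= 3,  x, y integer in [0, 3]

def nlp_relax(lb, ub):
    """Solve the continuous relaxation over the box [lb, ub]."""
    res = minimize(lambda v: (v[0] - 1.3) ** 2 + (v[1] - 2.6) ** 2,
                   x0=[(l + u) / 2 for l, u in zip(lb, ub)],
                   bounds=list(zip(lb, ub)),
                   constraints=[{"type": "ineq", "fun": lambda v: 3 - v[0] - v[1]}])
    return res.x, res.fun, res.success

def branch_and_bound(lb=(0, 0), ub=(3, 3), incumbent=(None, math.inf)):
    x, f, ok = nlp_relax(list(lb), list(ub))
    if not ok or f >= incumbent[1]:                 # infeasible or dominated: prune
        return incumbent
    frac = [i for i, xi in enumerate(x) if abs(xi - round(xi)) > 1e-5]
    if not frac:                                    # integral: new incumbent
        return (tuple(int(round(v)) for v in x), f)
    i = frac[0]                                     # branch on a fractional variable
    down_ub = list(ub); down_ub[i] = math.floor(x[i])
    up_lb = list(lb);   up_lb[i] = math.ceil(x[i])
    incumbent = branch_and_bound(lb, tuple(down_ub), incumbent)
    return branch_and_bound(tuple(up_lb), ub, incumbent)

print(branch_and_bound())   # -> ((1, 2), 0.45) for this instance
```

Spatial branch-and-bound for the non-convex case follows the same skeleton, except that branching also splits continuous variables and the relaxation is a convex underestimator rather than the problem itself.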


Author(s):  
Marlene Goncalves ◽  
María Esther Vidal

Criteria that induce a Skyline naturally represent a user's preference conditions, which are useful for discarding irrelevant data in large datasets. However, in high-dimensional Skyline spaces, the size of the Skyline can still be very large. To identify the best k points among the Skyline, the Top-k Skyline approach has been proposed. This chapter describes existing solutions and proposes the TKSI algorithm for the Top-k Skyline problem. TKSI reduces the search space by computing only the subset of the Skyline required to produce the top-k objects. In addition, the Skyline Frequency Metric is implemented to discriminate, among the Skyline objects, those that best meet the multidimensional criteria. The authors have empirically studied the quality of TKSI, and their experimental results show that TKSI can speed up the computation of the Top-k Skyline by at least 50% with respect to state-of-the-art solutions.
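The two ingredients combine as in the naive sketch below: skyline membership (no other point dominates) and a skyline-frequency-style score (in how many subspaces a point stays non-dominated) used to rank the skyline and keep the top k. This is illustrative only; TKSI's key property is that it avoids materialising the full Skyline, which this exhaustive version does not attempt.

```python
from itertools import combinations

# Naive Top-k Skyline with a skyline-frequency ranking (minimising criteria).

def dominates(p, q):
    """p dominates q if p <= q everywhere and p < q somewhere."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def skyline_frequency(p, points, dims):
    """Count the non-empty subspaces in which p remains non-dominated."""
    count = 0
    for r in range(1, dims + 1):
        for sub in combinations(range(dims), r):
            proj = lambda x: tuple(x[i] for i in sub)
            if not any(dominates(proj(q), proj(p)) for q in points):
                count += 1
    return count

def top_k_skyline(points, k):
    dims = len(points[0])
    sky = skyline(points)
    return sorted(sky, key=lambda p: -skyline_frequency(p, points, dims))[:k]

pts = [(1, 5), (2, 2), (5, 1), (4, 4), (3, 3)]
print(top_k_skyline(pts, 2))   # -> [(1, 5), (5, 1)]
```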


2014 ◽  
Vol 23 (06) ◽  
pp. 1460028 ◽  
Author(s):  
Andres Calderon Jaramillo ◽  
Jicheng Fu ◽  
Vincent Ng ◽  
Farokh B. Bastani ◽  
I-Ling Yen

Recently, state-of-the-art AI planners have significantly improved planning efficiency on Fully Observable Nondeterministic (FOND) planning problems with strong cyclic solutions. Strong cyclic solutions are guaranteed to achieve the goal only if they terminate, implying that they may run into indefinite loops. In contrast, strong solutions are guaranteed to achieve the goal, but few planners can effectively handle FOND problems with strong solutions. In this study, we address this difficult yet under-investigated class of planning problems: FOND planning problems with strong solutions. We present a planner that employs a new data structure, the MRDAG (multi-root directed acyclic graph), to define how the solution space should be expanded. Based on the characteristics of the MRDAG, we develop heuristics to keep planning in the relevant search direction and design optimizations that prune the search space to further improve planning efficiency. We perform extensive experiments to evaluate the MRDAG, the heuristics, and the pruning optimizations. Experimental results show that our strong algorithm achieves impressive performance on a variety of benchmark problems: on average it runs more than three orders of magnitude faster than the state-of-the-art planners MBP and Gamer, while demonstrating significantly better scalability.
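The property that separates the two solution classes can be sketched directly: a policy is strong when the graph of states it reaches is acyclic and every execution path ends in a goal, whereas a cycle downgrades it to at best strong cyclic. The check below is a minimal illustration of that property; MRDAG guides expansion so that acyclicity holds by construction, and its actual heuristics are not shown here.

```python
# Sketch: a policy is strong iff its reachable state graph is a DAG
# whose every path ends in a goal (illustrative, not the MRDAG planner).

def is_strong(policy, successors, initial, goals):
    """policy: state -> action; successors: (state, action) -> set of states."""
    on_path, proven = set(), {}

    def dfs(s):
        if s in goals:
            return True
        if s in on_path:                 # cycle: at best strong cyclic
            return False
        if s in proven:
            return proven[s]
        if s not in policy:              # unhandled non-goal state: dead end
            return False
        on_path.add(s)
        ok = all(dfs(t) for t in successors[(s, policy[s])])
        on_path.discard(s)
        proven[s] = ok
        return ok

    return dfs(initial)

# Nondeterministic action 'a' from s0 may land in g (goal) or s1; from s1,
# 'b' always reaches g, so this policy is strong.
succ = {("s0", "a"): {"g", "s1"}, ("s1", "b"): {"g"}}
print(is_strong({"s0": "a", "s1": "b"}, succ, "s0", {"g"}))   # True
```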

