A ‘Big Win’: AI and Algorithms for Combinatorial Search

2009 ◽ pp. 738-741
Author(s): Surajit Chaudhuri ◽ Vivek Narasayya ◽ Gerhard Weikum

2018 ◽ pp. 275-294
Author(s): Dániel Gerbner ◽ Balázs Patkós

Author(s): Hang Ma ◽ Glenn Wagner ◽ Ariel Felner ◽ Jiaoyang Li ◽ T. K. Satish Kumar ◽ ...

We formalize Multi-Agent Path Finding with Deadlines (MAPF-DL). The objective is to maximize the number of agents that can reach their given goal vertices from their given start vertices within the deadline, without colliding with each other. We first show that MAPF-DL is NP-hard to solve optimally. We then present two classes of optimal algorithms: one based on a reduction of MAPF-DL to a flow problem and a subsequent compact integer linear programming formulation over the resulting abstracted multi-commodity flow network, and the other based on novel combinatorial search algorithms. Our empirical results demonstrate that these MAPF-DL solvers scale well, and that each dominates the others in different scenarios.
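As a concrete illustration of the formalization above (a minimal sketch, not one of the paper's solvers; representing a path as one vertex per time step, with agents waiting at their final vertex, is an assumption), the following Python snippet checks a joint plan for vertex and swap collisions and evaluates the MAPF-DL objective:

    def mapf_dl_score(paths, goals, deadline):
        """Number of agents that reach their goal by the deadline,
        or None if the joint plan has a vertex or swap collision."""
        horizon = max(len(p) for p in paths)

        def at(path, t):
            # Agents wait at their final vertex once their path ends.
            return path[min(t, len(path) - 1)]

        for t in range(horizon):
            occupied = set()
            for path in paths:
                v = at(path, t)
                if v in occupied:          # two agents in one vertex
                    return None
                occupied.add(v)
            for i in range(len(paths)):
                for j in range(i + 1, len(paths)):
                    if t > 0 and at(paths[i], t) == at(paths[j], t - 1) \
                            and at(paths[j], t) == at(paths[i], t - 1):
                        return None        # agents swap along an edge

        return sum(1 for path, goal in zip(paths, goals)
                   if path[-1] == goal and len(path) - 1 <= deadline)

    # Two agents on a path graph 0-1-2-3; both meet the deadline.
    print(mapf_dl_score([[0, 1, 2], [2, 3]], goals=[2, 3], deadline=3))  # 2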


2020 ◽ Vol 17 (3) ◽ pp. 983-1006
Author(s): M. Kopecky ◽ P. Vojtas

Our customer preference model is based on aggregating partly linear relaxations of the value filters often used in e-commerce applications. The relaxation is motivated by the Analytic Hierarchy Process (AHP) method and by the combination of fuzzy information in web-accessible databases. In low dimensions, our method is also well suited for data visualization. The process of translating models (user behavior) to programs (learned recommendations) is formalized by the Challenge-Response Framework (ChRF). ChRF resembles remote procedure calls and reduction in combinatorial search problems. In our case, the model is automatically translated to a program using spatial database features. This enables us to define new metrics with a visual motivation. We extend the conference paper with inductive ChRF, a new representation of users, and an additional method and metric. We provide experiments with synthetic data (items) and users.
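A minimal sketch of the relaxation idea (the attributes, breakpoints, and AHP-style weights below are illustrative assumptions, not the authors' model): a crisp value filter lo <= x <= hi is relaxed into a piecewise linear score, and several relaxed filters are aggregated into one preference score.

    def linear_relaxation(lo, hi):
        """Relax the crisp filter lo <= x <= hi into a partly linear score:
        1 inside the interval, falling linearly to 0 within one width."""
        width = hi - lo
        def score(x):
            if lo <= x <= hi:
                return 1.0
            if x < lo:
                return max(0.0, 1.0 - (lo - x) / width)
            return max(0.0, 1.0 - (x - hi) / width)
        return score

    # Two value filters common in e-commerce: price and display size.
    price_pref = linear_relaxation(200, 400)   # prefers 200-400 EUR
    size_pref = linear_relaxation(13, 15)      # prefers 13-15 inch laptops

    def aggregate(item, weights=(0.7, 0.3)):
        """AHP-style weighted aggregation of the relaxed filter scores."""
        w_price, w_size = weights
        return (w_price * price_pref(item["price"])
                + w_size * size_pref(item["size"]))

    items = [{"price": 350, "size": 14}, {"price": 520, "size": 13},
             {"price": 180, "size": 17}]
    for item in sorted(items, key=aggregate, reverse=True):
        print(item, round(aggregate(item), 3))

Items that barely miss a filter still receive a partial score instead of being discarded, which is the point of relaxing the crisp filters.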


1997 ◽ pp. 323-326
Author(s): Richard J. Balling


Author(s): Mohammad Al Hasan

The research on mining interesting patterns from transactional or scientific datasets has matured over the last two decades. At present, numerous algorithms exist to mine patterns of varying complexity, such as sets, sequences, trees, and graphs. Collectively, they are referred to as Frequent Pattern Mining (FPM) algorithms. FPM is useful in most of the prominent knowledge discovery tasks, such as classification, clustering, and outlier detection. It can further be used in database tasks, such as indexing and hashing, when storing a large collection of patterns. However, the use of FPM in real-life knowledge discovery systems is considerably low in comparison to its potential. The prime reason is the lack of interpretability caused by the enormity of the output-set size. For instance, a moderately sized graph dataset with merely a thousand graphs can produce millions of frequent graph patterns for a reasonable support value. This is expected, given the combinatorial search space of pattern mining. However, classification, clustering, and similar knowledge discovery tasks should not use that many patterns as their knowledge nuggets (features), as doing so would increase the time and memory complexity of the system. Moreover, it can degrade task quality through the well-known "curse of dimensionality" effect. So, in recent years, researchers have felt the need to summarize the output set of FPM algorithms so that the summary set is small, non-redundant, and discriminative. There are different summarization techniques: lossless, profile-based, cluster-based, statistical, etc. In this article, we overview the main concepts behind these summarization techniques, with a comparative discussion of their strengths, weaknesses, applicability, and computational cost.
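To make the output-set explosion and the idea of summarization concrete, here is a minimal sketch (illustrative, not from the article): every subset of a frequent itemset is itself frequent, so reporting only the maximal frequent itemsets yields a much smaller description of the same family.

    from itertools import combinations

    transactions = [{"a", "b", "c", "d"}, {"a", "b", "c"}, {"a", "b", "d"},
                    {"a", "c", "d"}, {"b", "c", "d"}]
    min_support = 3
    items = sorted(set().union(*transactions))

    def support(itemset):
        # Number of transactions containing the itemset.
        return sum(itemset <= t for t in transactions)

    # Brute-force enumeration: fine for a toy example, exponential in general.
    frequent = [frozenset(c)
                for r in range(1, len(items) + 1)
                for c in combinations(items, r)
                if support(frozenset(c)) >= min_support]

    # Summary: keep only patterns with no frequent proper superset.
    maximal = [p for p in frequent if not any(p < q for q in frequent)]

    print(len(frequent), "frequent ->", len(maximal), "maximal:", maximal)

On this toy dataset, 10 frequent patterns collapse to 6 maximal ones; on realistic graph datasets the gap between the full output and a summary is far more dramatic.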


Author(s): Tad Hogg

Phase transitions have long been studied empirically in various combinatorial searches and theoretically in simplified models [91, 264, 301, 490]. The analogy with statistical physics [397], explored throughout this volume, shows how the many local choices made during search relate to global properties such as the resulting search cost. These studies have led to a better understanding of typical search behaviors [514] and improved search methods [195, 247, 261, 432, 433]. Among the current research questions in this field are the range of algorithms exhibiting the transition behavior and the algorithm-independent problem properties associated with the difficult instances concentrated near the transition.

Towards this end, the present chapter examines quantum computing algorithms [123, 126, 158, 486] for nondeterministic polynomial (NP) combinatorial search problems [191]. As with many conventional methods, they exhibit the easy-hard-easy pattern of computational cost as the degree of constraint in the problems varies. We describe how properties of the search space affect the algorithms and identify an additional structural property, the energy gap, motivated by one quantum algorithm but applicable to a variety of techniques, both quantum and classical. Thus, the study of quantum search algorithms not only extends the range of algorithms exhibiting phase transitions, but also helps identify underlying structural properties.

Specifically, the next two sections describe a class of hard search problems and the form of quantum search algorithms proposed to date. The remainder of the chapter presents algorithm behaviors, relevant problem structure, and an approximate asymptotic analysis of their cost scaling. The final section discusses various open issues in designing and evaluating quantum algorithms, and relating their behavior to problem structure.

The k-satisfiability (k-SAT) problem, as discussed earlier in this volume, consists of n Boolean variables and m clauses. A clause is a logical OR of k variables, each of which may be negated. A solution is an assignment, that is, a value (TRUE or FALSE) for each variable, satisfying all the clauses. An assignment is said to conflict with any clause it does not satisfy.
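The conflict notion in this definition is easy to make concrete. The following sketch (the signed-integer literal encoding is an assumption) counts the clauses an assignment fails to satisfy:

    def conflicts(assignment, clauses):
        """Count clauses not satisfied by the assignment.
        assignment: dict variable -> bool; clauses: lists of signed ints,
        where literal +i means variable i and -i means its negation."""
        def satisfied(clause):
            return any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        return sum(not satisfied(c) for c in clauses)

    # A 3-SAT instance with n = 3 variables and m = 3 clauses:
    # (x1 OR x2 OR -x3) AND (-x1 OR x2 OR x3) AND (x1 OR -x2 OR x3)
    clauses = [[1, 2, -3], [-1, 2, 3], [1, -2, 3]]

    print(conflicts({1: True, 2: True, 3: True}, clauses))     # 0: a solution
    print(conflicts({1: False, 2: False, 3: True}, clauses))   # 1 conflict

Assignments with zero conflicts are exactly the solutions; the number of conflicts is the "energy" that the quantum and classical searches discussed here try to drive to zero.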


2018 ◽ Vol 18 (3-4) ◽ pp. 571-588
Author(s): Tobias Kaminski ◽ Thomas Eiter ◽ Katsumi Inoue

Meta-Interpretive Learning (MIL) learns logic programs from examples by instantiating meta-rules, as implemented by the Metagol system on top of Prolog. Viewed as combinatorial search problems, MIL problems can alternatively be solved with Answer Set Programming (ASP), which may yield performance gains thanks to efficient conflict propagation. However, a straightforward ASP encoding of MIL results in a huge search space due to the lack of procedural bias and the need for grounding. To address these challenges, we encode MIL in the HEX formalism, an extension of ASP that allows us to outsource the background knowledge, and we restrict the search space to compensate for the lack of procedural bias in ASP. This way, the import of constants from the background knowledge can, for a given type of meta-rules, be limited to relevant ones. Moreover, by abstracting from term manipulations in the encoding and by exploiting the HEX interface mechanism, the import of such constants can be avoided entirely, mitigating the grounding bottleneck. An experimental evaluation shows promising results.
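As a toy illustration of MIL as combinatorial search (a Python sketch over assumed data; Metagol and the HEX encoding work quite differently, in Prolog and ASP respectively), the following code instantiates the chain meta-rule P(X,Y) :- Q(X,Z), R(Z,Y) with background predicates and keeps the instantiations that cover the positive examples:

    from itertools import product

    background = {
        "parent": {("ann", "bob"), ("bob", "carol")},
        "friend": {("ann", "dee")},
    }
    positives = {("ann", "carol")}    # target: grandparent(ann, carol)

    def chain(q, r):
        """All (X, Y) derivable by P(X,Y) :- q(X,Z), r(Z,Y)."""
        return {(x, y)
                for (x, z1) in background[q]
                for (z2, y) in background[r]
                if z1 == z2}

    # Search all instantiations of the meta-rule's second-order variables.
    for q, r in product(background, repeat=2):
        if positives <= chain(q, r):
            print(f"p(X,Y) :- {q}(X,Z), {r}(Z,Y)")
    # -> p(X,Y) :- parent(X,Z), parent(Z,Y)

Even this toy search grows with the square of the number of background predicates per chain rule, which is why restricting the constants and predicates imported from the background knowledge matters for the ASP encoding.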

