Breadth First Search and Inference Methods Integration to Satisfy Table Constraints

2021, Vol 11 (4), pp. 521-532. Author(s): A.A. Zuenko

Within Constraint Programming technology, so-called table constraints, such as typical tables, compressed tables, smart tables, segmented tables, etc., are widely used. They can represent any other type of constraint, and table constraint propagation algorithms (logical inference on constraints) can eliminate many "redundant" values from the domains of variables while having low computational complexity. In previous studies, the author proposed dividing smart tables into structures of C- and D-types. The generally accepted methodology for solving constraint satisfaction problems is the combined application of constraint propagation methods and backtracking depth-first search. In this study, it is proposed to integrate breadth-first search methods with the author's method of table constraint propagation. D-type smart tables are represented as a join of several orthogonalized C-type smart tables. Each search step selects a pair of C-type smart tables to join and then propagates the constraints. To determine the order in which the orthogonalized smart tables are joined at each step of the search, a specialized heuristic is used that reduces the search space while taking further calculations into account. During constraint propagation, the computation is accelerated by applying reduction rules developed for the case of C-type smart tables. The developed hybrid method finds all solutions of constraint satisfaction problems modeled with one or several D-type smart tables, without decomposing the table constraints into elementary tuples.
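
As a rough illustration of the search step described above (pick a pair of tables to join, then propagate), the following Python sketch joins ordinary extensional table constraints using a crude size-product heuristic for the join order and projects the result back onto the variable domains. It is a minimal sketch under simplifying assumptions, not the author's C-/D-type smart table method; all data structures and names are illustrative.

```python
def join(t1, t2):
    """Natural join of two extensional table constraints (variables, set of rows)."""
    (v1, rows1), (v2, rows2) = t1, t2
    shared = [v for v in v1 if v in v2]
    out_vars = list(v1) + [v for v in v2 if v not in v1]
    idx1 = {v: i for i, v in enumerate(v1)}
    idx2 = {v: i for i, v in enumerate(v2)}
    rows = set()
    for r1 in rows1:
        for r2 in rows2:
            if all(r1[idx1[v]] == r2[idx2[v]] for v in shared):
                rows.add(tuple(r1) + tuple(r2[idx2[v]] for v in v2 if v not in v1))
    return (tuple(out_vars), rows)

def propagate(table, domains):
    """Shrink each variable's domain to the values supported by the table."""
    vars_, rows = table
    for i, v in enumerate(vars_):
        domains[v] &= {r[i] for r in rows}
    rows = {r for r in rows if all(r[i] in domains[v] for i, v in enumerate(vars_))}
    return (vars_, rows)

def solve(tables, domains):
    """Greedily join the pair of tables whose result is estimated smallest, then propagate."""
    tables = list(tables)
    while len(tables) > 1:
        i, j = min(((a, b) for a in range(len(tables)) for b in range(a + 1, len(tables))),
                   key=lambda ab: len(tables[ab[0]][1]) * len(tables[ab[1]][1]))
        joined = propagate(join(tables[i], tables[j]), domains)
        tables = [t for k, t in enumerate(tables) if k not in (i, j)] + [joined]
    return propagate(tables[0], domains)

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
t_xy = (("x", "y"), {(1, 2), (2, 3), (3, 1)})
t_yz = (("y", "z"), {(2, 2), (3, 1)})
print(solve([t_xy, t_yz], doms), doms)
```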

2018, Vol 27 (04), pp. 1860002. Author(s): Minas Dasygenis, Kostas Stergiou

Constraint programming (CP) is a powerful paradigm for solving various types of hard combinatorial problems. Constraint propagation techniques, such as arc consistency (AC), are used within solvers to prune inconsistent values from the domains of the variables and narrow down the search space. Local consistencies stronger than AC have the potential to prune the search space even further, but they are not widely used because they incur a high run-time penalty when they are unsuccessful. Also, constraint propagation techniques are sequential by nature and thus do not scale up to modern multicore machines, which is why research on parallelizing constraint propagation is very limited. Contributing towards this direction, we exploit the parallelization possibilities of modern CPUs in tandem with strong local propagation methods in a novel way. Instead of trying to parallelize constraint propagation algorithms, we propose two search algorithms that apply different propagation methods in parallel. Both algorithms consist of a master search process, which is a typical CP solver, and a number of slave processes, each implementing a strong propagation method. The first algorithm runs the different propagators synchronously at each node of the search tree explored by the master process, while the second can run them asynchronously at different nodes of the search tree. Preliminary experimental results on well-established benchmarks display the promise of our research: in the worst case our algorithms match the execution times of serial solvers, and in most cases they are faster.
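
A minimal sketch of the master/slave idea, assuming each propagator is an independent function from domains to pruned domains: the master dispatches the same search node to several propagators in parallel and keeps the intersection of their results. The toy propagators below are placeholders, not strong local consistencies, and the code is not the authors' solver.

```python
from concurrent.futures import ProcessPoolExecutor

def prune_even(domains):          # toy propagator: x must be even
    d = {v: set(vals) for v, vals in domains.items()}
    d["x"] = {a for a in d["x"] if a % 2 == 0}
    return d

def prune_small(domains):         # toy propagator: y must be below 6
    d = {v: set(vals) for v, vals in domains.items()}
    d["y"] = {a for a in d["y"] if a < 6}
    return d

def parallel_propagate(domains, propagators):
    """Run each propagator in its own process; keep only values every one retained."""
    with ProcessPoolExecutor(max_workers=len(propagators)) as pool:
        pruned = [f.result() for f in [pool.submit(p, domains) for p in propagators]]
    return {v: set.intersection(*(d[v] for d in pruned)) for v in domains}

if __name__ == "__main__":
    doms = {"x": set(range(10)), "y": set(range(10))}
    print(parallel_propagate(doms, [prune_even, prune_small]))
```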


2020, Vol 11. Author(s): Shuhei Kimura, Ryo Fukutomi, Masato Tokuhisa, Mariko Okada

Several researchers have focused on random-forest-based inference methods because of their excellent performance. Some of these inference methods can also analyze both time-series and static gene expression data. However, they only rank all of the candidate regulations by assigning them confidence values; none can detect the regulations that actually affect a gene of interest. In this study, we propose a method that removes unpromising candidate regulations by combining a random-forest-based inference method with a series of feature selection methods. In addition to detecting unpromising regulations, the proposed method uses the outputs of the feature selection methods to adjust the confidence values of all candidate regulations computed by the random-forest-based inference method. Numerical experiments showed that the combined application of the feature selection methods improved the performance of the random-forest-based inference method on 99 of the 100 trials performed on artificial problems. The improvement tends to be small, however, since the combined method removed at most 19% of the candidate regulations, and the feature selection methods also increase the computational cost. While a bigger improvement at a lower computational cost would be ideal, we see no impediment to our investigation, given that our aim is to extract as much useful information as possible from a limited amount of gene expression data.
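
A hedged illustration of the general pipeline (not the authors' exact method) using scikit-learn: a random forest ranks candidate regulators of a synthetic target gene, and a feature selection step removes unpromising candidates and zeroes their confidence values. The data, threshold, and parameters are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                 # expression of 20 candidate regulators
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.1, size=200)  # target gene

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
confidence = forest.feature_importances_       # confidence value per candidate regulation

# feature selection step: keep only regulators whose importance exceeds the mean
selector = SelectFromModel(RandomForestRegressor(n_estimators=300, random_state=0),
                           threshold="mean").fit(X, y)
kept = selector.get_support()

adjusted = np.where(kept, confidence, 0.0)     # unpromising regulations are zeroed out
print("top candidates:", np.argsort(adjusted)[::-1][:5])
```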


2020. Author(s): Fulei Ji, Wentao Zhang, Tianyou Ding

Automatic search methods have been widely used for cryptanalysis of block ciphers, especially for the two most classic cryptanalysis methods, differential and linear cryptanalysis. However, automatic search methods, whether based on MILP, SMT/SAT or CP techniques, can be inefficient when the search space is too large. In this paper, we propose three new methods to improve Matsui's branch-and-bound search algorithm, which is known as the first generic algorithm for finding the best differential and linear trails. The three methods, namely reconstructing the DDT and LAT according to weight, executing linear layer operations at minimal cost, and merging two 4-bit S-boxes into one 8-bit S-box, speed up the search by reducing the search space as much as possible and by reducing the cost of executing linear layer operations. We apply the improved algorithm to DESL and GIFT, which remain hard instances for automatic search methods. As a result, we find the best differential trails for DESL (up to 14 rounds) and GIFT-128 (up to 19 rounds), as well as the best linear trails for DESL (up to 16 rounds), GIFT-128 (up to 10 rounds) and GIFT-64 (up to 15 rounds). To the best of our knowledge, these security bounds for DESL and GIFT in the single-key scenario are given for the first time, and they are the longest exploitable (differential or linear) trails reported for DESL and GIFT. Furthermore, benefiting from the efficiency of the improved algorithm, our experiments demonstrate that the clustering effect of differential trails is weak for both 13-round DES and DESL.
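
Of the three methods, reconstructing the DDT according to weight is easy to illustrate. The sketch below builds the difference distribution table of a 4-bit S-box (PRESENT's S-box is used purely as an example, since the paper's targets are DESL and GIFT) and stores each row sorted by descending count, so a Matsui-style branch-and-bound can try the most probable output differences first. It illustrates the idea only, not the paper's implementation.

```python
# PRESENT's 4-bit S-box, used here only as example data.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference distribution table: ddt[dx][dy] counts x with S(x) ^ S(x ^ dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

def ddt_by_weight(sbox):
    """For each input difference, list (output difference, count) sorted by descending count."""
    return [sorted(((dy, c) for dy, c in enumerate(row) if c > 0), key=lambda p: -p[1])
            for row in ddt(sbox)]

rows = ddt_by_weight(SBOX)
print(rows[0x1][:3])   # the most probable output differences for input difference 0x1
```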


2008, Vol 17 (02), pp. 303-320. Author(s): Wei Song, Bingru Yang, Zhangyan Xu

Because of the inherent computational complexity, mining the complete set of frequent itemsets in dense datasets remains a challenging task. Mining maximal frequent itemsets (MFIs) is an alternative way to address the problem. The Set-Enumeration Tree (SET) is a common data structure used in several MFI mining algorithms, for which the process of mining MFIs can be viewed as a search over the set-enumeration tree. To reduce the search space, this paper proposes a new MFI mining algorithm, Index-MaxMiner, which employs a hybrid search strategy blending breadth-first and depth-first search. First, the index array is introduced, and a bitmap-based algorithm for computing it is presented. By attaching a subsume index to frequent items, Index-MaxMiner discovers the candidate MFIs in a single breadth-first pass, which avoids first-level nodes that cannot participate in the answer set and drastically reduces the number of candidate itemsets. Then a depth-first search over the candidates generates all MFIs. This implements a jumping search over the SET and greatly reduces the search space. Experimental results show that the proposed algorithm is efficient, especially for dense datasets.
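
For readers unfamiliar with maximal frequent itemsets, the toy sketch below shows the notion itself: a level-wise (breadth-first) Apriori pass followed by a maximality filter. It is not Index-MaxMiner and omits the index array, the subsume indices, and the depth-first phase.

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Level-wise (breadth-first) Apriori pass over itemset sizes."""
    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items]
    frequent = []
    while level:
        survivors = [c for c in level
                     if sum(c <= t for t in transactions) >= minsup]
        frequent += survivors
        level = list({a | b for a, b in combinations(survivors, 2)
                      if len(a | b) == len(a) + 1})
    return frequent

def maximal(itemsets):
    """Keep only itemsets not contained in any other frequent itemset."""
    return [s for s in itemsets if not any(s < t for t in itemsets)]

transactions = [frozenset(t) for t in
                [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c", "d"}]]
print(maximal(frequent_itemsets(transactions, minsup=2)))
# e.g. [frozenset({'a', 'b'}), frozenset({'a', 'c'}), frozenset({'b', 'c'})]
```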


2013, Vol 300-301, pp. 645-648. Author(s): Yung Chien Lin

Evolutionary algorithms (EAs) are population-based global search methods. Memetic algorithms (MAs) are hybrid EAs that combine genetic operators with local search methods. By coupling global exploration with local exploitation of the search space, MAs are capable of obtaining higher-quality solutions. Mixed-integer hybrid differential evolution (MIHDE), an EA-based search algorithm, has been successfully applied to many mixed-integer optimization problems. In this paper, a mixed-integer memetic algorithm based on MIHDE is developed for solving mixed-integer constrained optimization problems. The proposed algorithm is implemented and applied to the optimal design of batch processes. Experimental results show that it finds better solutions than several other search algorithms.
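
A minimal sketch of the memetic idea (global evolutionary search plus local refinement), not MIHDE itself: a tiny real-coded GA whose offspring are polished by a hill-climbing step before elitist replacement. The objective, operators, and parameters are illustrative assumptions.

```python
import random

def sphere(x):                             # toy objective to minimise
    return sum(v * v for v in x)

def local_search(x, f, step=0.1, iters=20):
    """Simple hill climbing around x (the 'memetic' refinement)."""
    best, fbest = list(x), f(x)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in best]
        if f(cand) < fbest:
            best, fbest = cand, f(cand)
    return best

def memetic(f, dim=5, pop=20, gens=50):
    P = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a, b = random.sample(P, 2)                     # recombination + mutation
            child = [(ai + bi) / 2 + random.gauss(0, 0.3) for ai, bi in zip(a, b)]
            children.append(local_search(child, f))        # memetic step
        P = sorted(P + children, key=f)[:pop]              # elitist replacement
    return P[0]

print(sphere(memetic(sphere)))
```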


2015, Vol 23 (1), pp. 101-129. Author(s): Antonios Liapis, Georgios N. Yannakakis, Julian Togelius

Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, and each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation. Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed offspring boosting mechanism is shown to improve performance in all cases of two-population novelty search.
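
A minimal sketch in the spirit of two-population constrained novelty search, under heavy simplifying assumptions (a toy feasibility constraint, a single merged offspring pool, and arbitrary parameters) rather than the paper's level-generation setup: feasible individuals are selected for novelty, infeasible ones for closeness to feasibility.

```python
import random

DIM, K = 8, 5

def feasible(x):                          # toy constraint: coordinates sum to at most 2
    return sum(x) <= 2.0

def violation(x):                         # distance to feasibility for infeasible individuals
    return max(0.0, sum(x) - 2.0)

def novelty(x, others):                   # mean distance to the K nearest neighbours
    d = sorted(sum((a - b) ** 2 for a, b in zip(x, o)) ** 0.5 for o in others)
    return sum(d[:K]) / K

def mutate(x):
    return [v + random.gauss(0, 0.2) for v in x]

pop = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(40)]
archive = []
for gen in range(30):
    feas = [x for x in pop if feasible(x)]
    infeas = [x for x in pop if not feasible(x)]
    archive += random.sample(feas, min(2, len(feas)))      # grow the novelty archive
    ref = feas + archive
    feas.sort(key=lambda x: -novelty(x, ref))              # feasible: select for novelty
    infeas.sort(key=violation)                             # infeasible: select for low violation
    parents = feas[:10] + infeas[:10]
    pop = [mutate(random.choice(parents)) for _ in range(40)]
print("feasible individuals in final population:", sum(feasible(x) for x in pop))
```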


Author(s): Samira Sadaoui, Malek Mouhoub, Bo Chen

Simulation of complex Lotos specifications is not always efficient because of the state space explosion of their corresponding transition systems. To overcome this difficulty in practice, we present in this paper a novel approach that integrates constraint propagation techniques into Lotos specifications. These solving techniques reduce the size of the search space before and during the search for a solution to a given combinatorial problem under constraints. To do so, we first tackle the challenging task of describing combinatorial problems in Lotos using the Constraint Satisfaction Problem (CSP) framework, providing two generic Lotos templates for describing CSPs and temporal CSPs (CSPs involving temporal constraints). To evaluate the time performance of the proposed framework, we conducted several experiments on instances of the N-Queens problem, machine scheduling, and randomly generated CSPs. The results are promising and demonstrate the efficiency of Lotos simulation when CSP techniques are integrated.
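
As a plain-Python stand-in for the kind of propagation being embedded into Lotos (not the paper's templates), the sketch below runs AC-3 arc consistency on a small N-Queens instance; with one queen fixed, propagation alone collapses the remaining domains.

```python
from collections import deque

def revise(domains, xi, xj, constraint):
    """Remove values of xi that have no support in xj under the constraint."""
    removed = False
    for a in set(domains[xi]):
        if not any(constraint(a, b) for b in domains[xj]):
            domains[xi].discard(a)
            removed = True
    return removed

def ac3(domains, constraints):
    """constraints maps an ordered pair (xi, xj) to a predicate on their values."""
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraints[(xi, xj)]):
            queue.extend((xk, xi) for (xk, xl) in constraints if xl == xi and xk != xj)
    return domains

# 4-Queens as a CSP: variable i is the row of the queen placed in column i.
n = 4
doms = {i: set(range(n)) for i in range(n)}
doms[0] = {1}                               # fix one queen so propagation has something to prune
cons = {(i, j): (lambda i, j: (lambda a, b: a != b and abs(a - b) != abs(i - j)))(i, j)
        for i in range(n) for j in range(n) if i != j}
print(ac3(doms, cons))
```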


2013, Vol 11 (06), pp. 1343007. Author(s): Yang Zhao, Morihiro Hayashida, Jira Jindalertudomdee, Hiroshi Nagamochi, Tatsuya Akutsu

Molecular enumeration plays a basic role in drug design and has been studied by mathematicians, computer scientists, and chemists for a long time. Although many researchers have developed enumeration algorithms specific to drug design systems, molecular enumeration remains a hard problem because its search space grows exponentially with the number of atoms. To alleviate this difficulty, we propose efficient algorithms, BfsSimEnum and BfsMulEnum, to enumerate tree-like molecules without and with multiple bonds, respectively, where chemical compounds are represented as molecular graphs. To reduce the large search space, we adapt several important concepts, such as left-heavy, center-rooted, and normal form, to molecular tree graphs. Unlike many existing approaches, BfsSimEnum and BfsMulEnum enumerate tree-like compounds in breadth-first search order. Computational experiments comparing them with several existing methods suggest that the proposed methods are exact and more efficient.
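
A toy analogue of breadth-first structure enumeration, far simpler than BfsSimEnum (no valences, atom labels, or the left-heavy/center-rooted machinery): unlabeled rooted trees are enumerated level by level and deduplicated with an AHU-style canonical string so each shape appears exactly once.

```python
def canon(tree):
    """AHU-style canonical string of a rooted tree given as nested child lists."""
    return "(" + "".join(sorted(canon(c) for c in tree)) + ")"

def grow(tree):
    """All trees obtained by attaching one new leaf node somewhere in `tree`."""
    yield tree + [[]]                       # attach the leaf to this subtree's root
    for i, child in enumerate(tree):
        for grown in grow(child):
            yield tree[:i] + [grown] + tree[i + 1:]

def bfs_enumerate(max_nodes):
    layer = {canon([]): []}                 # the single 1-node tree
    seen = {}
    for n in range(1, max_nodes + 1):
        seen[n] = list(layer.values())
        next_layer = {}
        for tree in layer.values():
            for bigger in grow(tree):
                next_layer.setdefault(canon(bigger), bigger)   # deduplicate shapes
        layer = next_layer
    return seen

counts = {n: len(trees) for n, trees in bfs_enumerate(6).items()}
print(counts)   # {1: 1, 2: 1, 3: 2, 4: 4, 5: 9, 6: 20}
```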


Mathematics, 2020, Vol 8 (5), pp. 833. Author(s): Veera Boonjing, Pisit Chanvarasuth

This paper formulates the problem of determining all reducts of an information system as a graph search problem. The search space is represented in the form of a rooted graph. The proposed algorithm uses a breadth-first search strategy to search for all reducts starting from the graph root. It expands nodes in breadth-first order and uses a pruning rule to decrease the search space. It is mathematically shown that the proposed algorithm is both time and space efficient.
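
A hedged sketch of the idea, not the paper's rooted-graph formulation: attribute subsets are searched in breadth-first order by size, every minimal subset that still determines the decision is reported as a reduct, and supersets of reducts already found are pruned. The decision table is a made-up example.

```python
from itertools import combinations

def determines(rows, decisions, attrs):
    """True if no two objects agree on `attrs` but have different decisions."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, d) != d:
            return False
    return True

def all_reducts(rows, decisions):
    n_attrs = len(rows[0])
    reducts = []
    for size in range(1, n_attrs + 1):                  # breadth-first by subset size
        for attrs in combinations(range(n_attrs), size):
            if any(set(r) <= set(attrs) for r in reducts):
                continue                                # pruning rule: superset of a found reduct
            if determines(rows, decisions, attrs):
                reducts.append(attrs)
    return reducts

# toy decision table: 3 condition attributes, 1 decision
rows = [(1, 0, 0), (1, 1, 0), (0, 1, 1), (0, 0, 1)]
decisions = [0, 0, 1, 1]
print(all_reducts(rows, decisions))   # attribute 0 alone and attribute 2 alone are reducts
```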


2001, Vol 16 (1), pp. 69-84. Author(s): Stephen J. Westfold, Douglas R. Smith

In this paper we describe the framework we have developed in KIDS (Kestrel Interactive Development System) for generating efficient constraint satisfaction programs. We have used KIDS to synthesise global search scheduling programs that have proved dramatically faster than other programs running on the same data. We focus on the underlying ideas that lead to this efficiency. The key is reducing the size of the search space through an effective representation of sets of possible solutions (solution spaces) that allows efficient constraint propagation and pruning at the level of solution spaces. Moving to a solution space representation involves a problem reformulation; having found a solution to the reformulated problem, an extraction phase recovers solutions to the original problem. We show how constraints from the original problem can be automatically reformulated and specialised to derive efficient propagation code. Our solution methods exploit the semi-lattice structure of our solution spaces.
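
As a loose illustration of pruning at the level of solution spaces (not KIDS itself), the sketch below represents a set of candidate schedules by one start-time window per task, propagates precedence constraints to tighten the windows, and extracts a concrete schedule at the end. Tasks, durations, and the horizon are assumptions made for the example.

```python
durations = {"a": 3, "b": 2, "c": 4}
precedes = [("a", "b"), ("a", "c")]           # a must finish before b and c start
horizon = 10                                   # all tasks must finish by time 10

# solution space: one [earliest, latest] start window per task
window = {t: [0, horizon - d] for t, d in durations.items()}

changed = True
while changed:                                 # propagate precedence constraints to a fixpoint
    changed = False
    for x, y in precedes:
        lo = window[x][0] + durations[x]       # y cannot start before x finishes
        if lo > window[y][0]:
            window[y][0], changed = lo, True
        hi = window[y][1] - durations[x]       # x must leave room for y
        if hi < window[x][1]:
            window[x][1], changed = hi, True

if any(lo > hi for lo, hi in window.values()):
    print("no schedule exists")                # the whole solution space was pruned
else:
    schedule = {t: w[0] for t, w in window.items()}   # extraction: take earliest starts
    print(window, schedule)
```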

