Exact Model Counting of Query Expressions

2017 ◽  
Vol 42 (1) ◽  
pp. 1-46 ◽  
Author(s):  
Paul Beame ◽  
Jerry Li ◽  
Sudeepa Roy ◽  
Dan Suciu

Author(s):  
Shubham Sharma ◽  
Subhajit Roy ◽  
Mate Soos ◽  
Kuldeep S. Meel

Given a Boolean formula F, the problem of model counting, also referred to as #SAT, seeks to compute the number of solutions of F. Model counting is a fundamental problem with a wide variety of applications, ranging from planning and quantified information flow to probabilistic reasoning. Modern #SAT solvers tend to be based on static decomposition, dynamic decomposition, or a hybrid of the two. Although dynamic decomposition-based #SAT solvers share much of their architecture with SAT solvers, their core design and heuristics have remained largely unchanged for over a decade. In this paper, we revisit the architecture of the state-of-the-art dynamic decomposition-based #SAT tool, sharpSAT, and demonstrate that introducing a new notion of probabilistic component caching, using universal hashing for exact model counting, and developing several new heuristics can lead to significant performance improvements over state-of-the-art model counters. In particular, we develop GANAK, a new scalable probabilistic exact model counter that outperforms the state-of-the-art exact and approximate model counters sharpSAT and ApproxMC3, respectively, both in terms of PAR-2 score and the number of instances solved. Furthermore, in our experiments, the model count returned by GANAK was equal to the exact model count for all benchmarks. Finally, we observe that recently proposed preprocessing techniques for model counting benefit exact model counters while hurting the performance of approximate model counters.
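To make the dynamic-decomposition idea concrete, the following is a minimal, illustrative Python sketch of a DPLL-style exact model counter: it branches on a variable, splits the residual formula into variable-disjoint components, counts each component independently, multiplies the results, and memoizes component counts. This is a toy under stated assumptions, not GANAK or sharpSAT: the cache is exact memoization rather than GANAK's probabilistic component caching with universal hashing, the branching heuristic is naive, and all function names are ours.

    # Toy DPLL-style exact model counter with dynamic decomposition and
    # component caching.  Clauses are tuples of non-zero DIMACS-style literals;
    # variables are assumed to be numbered 1..num_vars.  The cache is exact
    # memoization, a simplification of GANAK's probabilistic component cache.
    from functools import lru_cache
    from math import prod

    def simplify(clauses, lit):
        """Set literal `lit` true: drop satisfied clauses, shrink the rest."""
        out = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied, drop it
            reduced = tuple(l for l in c if l != -lit)
            if not reduced:
                return None                   # empty clause => conflict
            out.append(reduced)
        return tuple(sorted(set(out)))

    def split_components(clauses):
        """Group clauses into connected components that share no variables."""
        comps, remaining = [], list(clauses)
        while remaining:
            comp = [remaining.pop()]
            comp_vars = {abs(l) for l in comp[0]}
            grew = True
            while grew:
                grew = False
                for c in remaining[:]:
                    if any(abs(l) in comp_vars for l in c):
                        remaining.remove(c)
                        comp.append(c)
                        comp_vars |= {abs(l) for l in c}
                        grew = True
            comps.append(tuple(sorted(comp)))
        return comps

    @lru_cache(maxsize=None)                  # component cache (exact memoization)
    def count_component(clauses):
        """Count assignments to the variables of `clauses` satisfying all of them."""
        if not clauses:
            return 1
        var = abs(clauses[0][0])              # naive branching heuristic
        total = 0
        for lit in (var, -var):
            residual = simplify(clauses, lit)
            if residual is None:
                continue                      # this branch falsifies a clause
            # Variables whose clauses all got satisfied are now unconstrained.
            freed = ({abs(l) for c in clauses for l in c}
                     - {abs(l) for c in residual for l in c} - {var})
            total += (2 ** len(freed)) * prod(
                count_component(comp) for comp in split_components(residual))
        return total

    def model_count(clauses, num_vars):
        """#SAT over variables 1..num_vars."""
        cnf = tuple(sorted({tuple(sorted(set(c))) for c in clauses}))
        mentioned = {abs(l) for c in cnf for l in c}
        free = num_vars - len(mentioned)      # variables in no clause are free
        return (2 ** free) * prod(
            count_component(comp) for comp in split_components(cnf))

    # (x1 or x2) and (x3 or not x4) over 4 variables: 3 * 3 = 9 models.
    print(model_count([(1, 2), (3, -4)], 4))  # -> 9

In the example the two clauses share no variables, so the formula splits into two components whose counts (3 each) are multiplied; caching component counts means identical subformulas reached along different branches are counted only once, which is the property GANAK's probabilistic cache preserves with bounded error while using far less memory per entry.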



Author(s):  
Tian Sang ◽  
Paul Beame ◽  
Henry Kautz



1997 ◽  
Vol 68 (1) ◽  
pp. 107-124 ◽  
Author(s):  
Osamu Yamanaka ◽  
Hiromitsu Ohmori ◽  
Akira Sano


Author(s):  
Paul Beame ◽  
Guy Van den Broeck ◽  
Eric Gribkoff ◽  
Dan Suciu


2010 ◽  
Vol 18 (3) ◽  
pp. 451-489 ◽  
Author(s):  
Tatsuya Motoki

As practitioners, we are interested in the likelihood that the population contains a copy of the optimum. The dynamic systems approach, however, does not help us calculate that quantity. Markov chain analysis can, in principle, be used to calculate it, but the associated transition matrices are enormous even for modest problems, so in practice these calculations are usually computationally infeasible. Some improvement on this situation is therefore desirable. In this paper, we present a method for modeling the behavior of finite-population evolutionary algorithms (EAs) and show that if the population size is greater than 1 and much less than the cardinality of the search space, the resulting exact model requires considerably less memory to theoretically run the stochastic search process of the original EA than the Nix and Vose-style Markov chain model does. We also present some approximate models that use still less memory than the exact model. Furthermore, based on our models, we examine the selection pressure exerted by fitness-proportionate selection and observe that, on average over all population trajectories, the bias toward selecting higher-fitness individuals is not as strong as the fitness landscape suggests.
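To see why the Nix and Vose-style chain becomes infeasible even for modest problems, recall that its states are whole populations viewed as multisets of individuals, so a population of size N over a search space of cardinality r gives C(N + r - 1, N) states, and the transition matrix has the square of that many entries. The short Python sketch below is our own back-of-the-envelope illustration of that growth, not part of the paper's model.

    # Size of the Nix and Vose-style state space: populations of size N drawn
    # with repetition from r distinct individuals, i.e. multisets of size N.
    from math import comb

    def num_populations(N, r):
        """Number of distinct populations (multisets of size N over r individuals)."""
        return comb(N + r - 1, N)

    r = 2 ** 10                               # e.g. all 10-bit strings
    for N in (5, 10, 20):
        states = num_populations(N, r)
        print(f"N={N:2d}: {states:.3e} states, ~{states ** 2:.3e} matrix entries")

Even for 10-bit strings and a population of 20, the state space is far too large to store a transition matrix explicitly, which is the gap the paper's more memory-frugal exact and approximate models are meant to address.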


