A New Probabilistic Algorithm for Approximate Model Counting

Author(s): Cunjing Ge, Feifei Ma, Tian Liu, Jian Zhang, Xutong Ma


Author(s): Shubham Sharma, Subhajit Roy, Mate Soos, Kuldeep S. Meel

Given a Boolean formula F, the problem of model counting, also referred to as #SAT, seeks to compute the number of solutions of F. Model counting is a fundamental problem with a wide variety of applications ranging from planning and quantified information flow to probabilistic reasoning. Modern #SAT solvers tend to be based on static decomposition, dynamic decomposition, or a hybrid of the two. Although dynamic decomposition-based #SAT solvers share much of their architecture with SAT solvers, their core design and heuristics have remained largely unchanged for over a decade. In this paper, we revisit the architecture of the state-of-the-art dynamic decomposition-based #SAT tool, sharpSAT, and demonstrate that introducing a new notion of probabilistic component caching, using universal hashing for exact model counting, and developing several new heuristics can lead to significant performance improvements over state-of-the-art model counters. In particular, we develop GANAK, a new scalable probabilistic exact model counter that outperforms the state-of-the-art exact and approximate model counters sharpSAT and ApproxMC3, respectively, both in terms of PAR-2 score and the number of instances solved. Furthermore, in our experiments, the model count returned by GANAK was equal to the exact model count for all the benchmarks. Finally, we observe that recently proposed preprocessing techniques for model counting benefit exact model counters while hurting the performance of approximate model counters.
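The abstract does not detail the caching scheme, but the general idea behind probabilistic component caching is to key the cache on a short fingerprint of a residual component rather than on its full canonical encoding, accepting a small, bounded collision probability in exchange for memory. A minimal illustrative sketch in Python follows; it shows the general idea only, not GANAK's actual data structures, and the class, encoding, and fingerprint width are hypothetical.

```python
# Minimal sketch of probabilistic component caching (general idea only,
# not GANAK's implementation): key the cache on a short fingerprint of
# a residual component instead of its full canonical encoding.

import hashlib

class ProbabilisticComponentCache:
    def __init__(self, fingerprint_bits=64):
        self.fingerprint_bits = fingerprint_bits
        self.table = {}  # fingerprint -> cached model count

    def _fingerprint(self, component):
        # 'component' is assumed to be a canonical, order-independent
        # encoding of the residual sub-formula, e.g. a tuple of clauses
        # restricted to the unassigned variables.
        data = repr(sorted(component)).encode()
        digest = hashlib.blake2b(data, digest_size=self.fingerprint_bits // 8)
        return digest.hexdigest()

    def lookup(self, component):
        return self.table.get(self._fingerprint(component))

    def store(self, component, count):
        self.table[self._fingerprint(component)] = count

# Usage inside a sharpSAT-style counter: before recursing on a
# component, consult lookup(); after counting it, call store().
cache = ProbabilisticComponentCache()
cache.store(((-1, 2), (1, 3)), 5)
print(cache.lookup(((-1, 2), (1, 3))))  # -> 5
```

The trade-off is that a fingerprint collision can silently return a wrong sub-count; the point of the probabilistic analysis is to bound that error probability while keeping cache entries small.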


2020, Vol. 34 (04), pp. 3097-3104
Author(s): Ralph Abboud, Ismail Ceylan, Thomas Lukasiewicz

Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case, where approximations with probabilistic guarantees can be obtained in time O(nm), where n denotes the number of variables and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when the width is bounded. We conduct experiments to validate our method, and show that our model learns and generalizes very well to large-scale #DNF instances.
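The O(nm)-time guarantee referenced above is the classical Monte Carlo scheme of Karp, Luby, and Madras. For context, here is a minimal Python sketch of the unweighted variant of that baseline; it is not the neural approach proposed in the paper, and the clause encoding is chosen only for illustration.

```python
# Minimal sketch of Karp-Luby-Madras Monte Carlo DNF counting.
# A DNF formula is a list of clauses; each clause is a dict mapping
# variable index -> required Boolean value.

import random

def karp_luby_count(clauses, n_vars, samples=100_000):
    # Weight of clause i = number of assignments satisfying it alone.
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Sample a clause proportionally to its weight, then a uniform
        # assignment satisfying that clause.
        i = random.choices(range(len(clauses)), weights=weights)[0]
        assignment = {v: random.random() < 0.5 for v in range(n_vars)}
        assignment.update(clauses[i])
        # Count the sample only if i is the first clause it satisfies,
        # so each model is attributed to exactly one clause.
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == b for v, b in c.items()))
        hits += (first == i)
    return total * hits / samples

# Example: (x0 AND x1) OR (NOT x2) over 3 variables has 5 models.
clauses = [{0: True, 1: True}, {2: False}]
print(round(karp_luby_count(clauses, 3)))  # approximately 5
```

Each sample costs O(nm) work in the worst case, which is where the stated complexity comes from, and also why the approach stops scaling on very large instances.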


Author(s): Timothy van Bremen, Ondrej Kuzelka

We study the symmetric weighted first-order model counting task and present ApproxWFOMC, a novel anytime method for efficiently bounding the weighted first-order model count of a sentence given an unweighted first-order model counting oracle. The algorithm has applications to inference in a variety of first-order probabilistic representations, such as Markov logic networks and probabilistic logic programs. Crucially for many applications, no assumptions are made on the form of the input sentence. Instead, the algorithm makes use of the symmetry inherent in the problem by imposing cardinality constraints on the number of possible true groundings of a sentence's literals. Realising the first-order model counting oracle in practice using the approximate hashing-based model counter ApproxMC3, we show that our algorithm is competitive with existing approximate and exact techniques for inference in first-order probabilistic models. We additionally provide PAC guarantees on the accuracy of the bounds generated.
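To make the cardinality-constraint idea concrete, consider the simplest symmetric case of a single weighted predicate with g groundings: the weighted count decomposes into a sum over how many groundings are true, with each term obtained from one cardinality-constrained call to an unweighted counting oracle. The Python sketch below illustrates only this decomposition; count_models_with_k_true is a hypothetical stand-in for the first-order model counting oracle, and ApproxWFOMC's anytime bounding and handling of multiple literals are omitted.

```python
# Sketch of the cardinality decomposition behind weighted first-order
# model counting with a single weighted predicate P having g groundings:
#   WFOMC = sum_k N_k * w_true**k * w_false**(g - k),
# where N_k is the number of models with exactly k true groundings of P.

def symmetric_wfomc(count_models_with_k_true, g, w_true, w_false):
    total = 0.0
    for k in range(g + 1):
        n_k = count_models_with_k_true(k)  # unweighted oracle call (hypothetical)
        total += n_k * (w_true ** k) * (w_false ** (g - k))
    return total

# Toy check: an unconstrained unary predicate over a domain of size g
# has C(g, k) models with exactly k true groundings, so the weighted
# count should equal (w_true + w_false) ** g.
from math import comb
g = 4
print(symmetric_wfomc(lambda k: comb(g, k), g, 2.0, 3.0))  # 625.0 = 5**4
```

When the per-cardinality counts come from an approximate oracle such as ApproxMC3 with (epsilon, delta) guarantees, the per-term error bounds translate into upper and lower bounds on the weighted sum, which is what the anytime algorithm refines.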


Author(s): Supratik Chakraborty, Kuldeep S. Meel, Moshe Y. Vardi

Model counting, or counting the solutions of a set of constraints, is a fundamental problem in Computer Science with diverse applications. Since exact counting is computationally hard (#P-complete), approximate counting techniques have received much attention over the past few decades. In this chapter, we focus on counting models of propositional formulas, and discuss in detail universal-hashing-based approximate counting, which has emerged as the predominant paradigm for state-of-the-art approximate model counters. These counters are randomized algorithms that exploit properties of universal hash functions to provide rigorous approximation guarantees, while piggybacking on impressive advances in propositional satisfiability solving to scale up to problem instances with a million variables. We elaborate on the various choices in designing such approximate counters and the implications of these choices. We also discuss variants of approximate model counting, such as DNF counting and weighted counting.
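A toy sketch of the hashing paradigm described above: random XOR (parity) constraints split the solution space into roughly equal cells, the solutions in one cell are counted, and the estimate is that count scaled by the number of cells. Real counters such as ApproxMC delegate the cell counting to a SAT solver and repeat the experiment to obtain (epsilon, delta) guarantees; the brute-force Python sketch below, with illustrative helper names, only demonstrates the scaling argument.

```python
# Toy sketch of universal-hashing-based approximate model counting:
# count the solutions landing in one random XOR-defined cell, then
# scale by the number of cells (2**m for m XOR constraints).

import itertools
import random

def solutions(formula, n_vars):
    # formula: assignment (tuple of bools) -> bool
    return [a for a in itertools.product([False, True], repeat=n_vars)
            if formula(a)]

def hashed_estimate(formula, n_vars, m, threshold=8):
    # One XOR constraint = a random subset of variables plus a random
    # parity bit; an assignment is in the cell if every parity matches.
    xors = [([random.random() < 0.5 for _ in range(n_vars)],
             random.random() < 0.5) for _ in range(m)]
    def in_cell(a):
        return all(sum(av for av, used in zip(a, coeffs) if used) % 2 == bit
                   for coeffs, bit in xors)
    cell = [a for a in solutions(formula, n_vars) if in_cell(a)]
    if 0 < len(cell) <= threshold:
        return len(cell) * (2 ** m)  # scale the small cell back up
    return None  # cell empty or too large: retry with a different m

# Toy formula over 6 variables: "at least two variables are True".
f = lambda a: sum(a) >= 2
print(len(solutions(f, 6)))        # exact count: 57
print(hashed_estimate(f, 6, m=3))  # approximate count, or None on a bad draw
```

The 2-universality of the hash family is what guarantees the cells are balanced in expectation, which in turn makes the scaled cell count an unbiased and concentrated estimator.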


2021
Author(s): Mate Soos, Kuldeep S. Meel

Given a set of constraints F and a weight function W over the assignments, the problem of MaxSAT is to compute a maximum-weight solution of F. MaxSAT is a fundamental problem with applications in numerous areas. The success of MaxSAT solvers has prompted researchers in the AI and formal methods communities to develop algorithms that can use a MaxSAT solver as an oracle. One such problem that stands to benefit from advances in MaxSAT solving is discrete integration. Recently, Ermon et al. achieved a significant breakthrough by reducing the problem of integration to polynomially many queries to an optimization oracle in which F is conjoined with randomly chosen XOR constraints. Unlike approximate model counting, where hashing-based approaches have been able to achieve scalability as well as rigorous formal guarantees, practical evaluations of Ermon et al.'s approach, called WISH, often sacrifice theoretical guarantees, largely due to the lack of MaxSAT solvers with native XOR support. The primary contribution of this paper is a new MaxSAT solver, GaussMaxHS, with built-in XOR support. The architecture of GaussMaxHS is inspired by CryptoMiniSAT, which has been the workhorse of hashing-based approximate model counting techniques. The resulting solver outperforms MaxHS over 9628 benchmarks arising from spin glass models and network reliability domains. In particular, with a timeout of 5000 seconds, MaxHS could solve only 5473 benchmarks while GaussMaxHS could solve 6120.
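For context, the WISH estimator reduces the discrete integral sum_x W(x) to n + 1 optimization queries, each maximizing W subject to i random XOR constraints, and combines the maxima as M_0 + sum_{i>=1} M_i * 2**(i-1). The toy Python sketch below uses brute force in place of the MaxSAT-with-XOR solver and omits the median-of-repetitions step needed for the formal guarantees; the function names are illustrative, not part of any of the tools mentioned.

```python
# Toy sketch of the WISH estimator for discrete integration: each
# query maximizes a nonnegative weight function subject to i random
# XOR constraints; in practice each query is a MaxSAT call, which is
# why native XOR support in the solver matters.

import itertools
import random

def constrained_max(w, n_vars, num_xors):
    xors = [([random.random() < 0.5 for _ in range(n_vars)],
             random.random() < 0.5) for _ in range(num_xors)]
    best = 0.0
    for a in itertools.product([0, 1], repeat=n_vars):
        if all(sum(av for av, used in zip(a, coeffs) if used) % 2 == bit
               for coeffs, bit in xors):
            best = max(best, w(a))
    return best

def wish_estimate(w, n_vars):
    maxima = [constrained_max(w, n_vars, i) for i in range(n_vars + 1)]
    return maxima[0] + sum(m * 2 ** (i - 1)
                           for i, m in enumerate(maxima) if i >= 1)

# Toy weight function over 6 variables; compare against the exact sum.
w = lambda a: 2.0 ** sum(a)
exact = sum(w(a) for a in itertools.product([0, 1], repeat=6))
print(exact)                # 729.0 = 3**6
print(wish_estimate(w, 6))  # constant-factor approximation of 729
```

Each added XOR constraint halves the feasible set in expectation, so the sequence of constrained maxima traces out the tail of the weight distribution, and the weighted sum of those maxima approximates the integral up to a constant factor with high probability.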

