Approximate Model Counting via Extension Rule

Author(s):  
Jinyan Wang ◽  
Minghao Yin ◽  
Jingli Wu

Author(s):  
Cunjing Ge ◽  
Feifei Ma ◽  
Tian Liu ◽  
Jian Zhang ◽  
Xutong Ma

Author(s):  
Shubham Sharma ◽  
Subhajit Roy ◽  
Mate Soos ◽  
Kuldeep S. Meel

Given a Boolean formula F, the problem of model counting, also referred to as #SAT, seeks to compute the number of solutions of F. Model counting is a fundamental problem with a wide variety of applications, ranging from planning and quantified information flow to probabilistic reasoning. Modern #SAT solvers tend to be based on static decomposition, dynamic decomposition, or a hybrid of the two. Although dynamic decomposition-based #SAT solvers share much of their architecture with SAT solvers, their core design and heuristics have remained essentially unchanged for over a decade. In this paper, we revisit the architecture of the state-of-the-art dynamic decomposition-based #SAT tool, sharpSAT, and demonstrate that introducing a new notion of probabilistic component caching, using universal hashing for exact model counting, and developing several new heuristics can lead to significant performance improvements over state-of-the-art model counters. In particular, we develop GANAK, a new scalable probabilistic exact model counter that outperforms the state-of-the-art exact and approximate model counters sharpSAT and ApproxMC3, respectively, both in terms of PAR-2 score and the number of instances solved. Furthermore, in our experiments, the model count returned by GANAK was equal to the exact model count for all benchmarks. Finally, we observe that recently proposed preprocessing techniques for model counting benefit exact model counters while hurting the performance of approximate model counters.
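The dynamic decomposition with component caching that sharpSAT and GANAK build on can be illustrated with a minimal sketch (this is an illustration under our own simplifications, not the authors' implementation: clauses are DIMACS-style tuples of signed ints, the branching heuristic is naive, and the cache key is an exact clause set rather than GANAK's probabilistic hash):

```python
def components(clauses):
    """Partition clauses into variable-disjoint connected components."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for cl in clauses:
        for lit in cl:
            parent.setdefault(abs(lit), abs(lit))
        for lit in cl[1:]:
            parent[find(abs(cl[0]))] = find(abs(lit))
    groups = {}
    for cl in clauses:
        groups.setdefault(find(abs(cl[0])), []).append(cl)
    return list(groups.values())

def assign(clauses, var, value):
    """Set var=value; return the residual clauses, or None on conflict."""
    residual = []
    for cl in clauses:
        if (var if value else -var) in cl:
            continue                      # clause satisfied, drop it
        reduced = tuple(l for l in cl if abs(l) != var)
        if not reduced:
            return None                   # clause falsified
        residual.append(reduced)
    return residual

cache = {}

def count(clauses):
    """Number of models over the variables occurring in `clauses`."""
    if not clauses:
        return 1
    key = frozenset(clauses)
    if key in cache:
        return cache[key]
    comps = components(clauses)
    if len(comps) > 1:                    # dynamic decomposition:
        result = 1                        # components are independent,
        for comp in comps:                # so their counts multiply
            result *= count(comp)
    else:
        var = abs(clauses[0][0])          # naive branching heuristic
        result = 0
        before = {abs(l) for cl in clauses for l in cl}
        for value in (False, True):
            residual = assign(clauses, var, value)
            if residual is None:
                continue
            after = {abs(l) for cl in residual for l in cl}
            # variables that dropped out of every clause are free
            result += count(residual) * 2 ** len(before - after - {var})
    cache[key] = result
    return result
```

For example, (x1 ∨ x2) ∧ (¬x3 ∨ x4) splits into two components with 3 models each, so the count is 9. The caching is what makes repeated subproblems cheap; GANAK's contribution includes replacing the exact key above with a hashed one, trading a small error probability for memory.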


2010 ◽  
Vol 108-111 ◽  
pp. 268-273 ◽  
Author(s):  
Jun Ping Zhou ◽  
Chun Guang Zhou ◽  
Ming Hao Yin ◽  
Hui Yang

The extension rule is a new method for computing the number of models of a given propositional formula; in some sense, it is the inverse of propositional resolution. To improve counting performance, we introduce reasoning rules into extension-rule-based model counting and present a new algorithm, RCER, which combines the extension rule with these reasoning rules. The experimental results show that the algorithm not only uses less space but also solves model counting instances more efficiently.
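The counting idea behind the extension rule can be sketched as follows (a minimal illustration of the underlying principle, not the RCER algorithm itself): extending a clause C over the variables it lacks yields 2^(n−|C|) "maximum terms", each corresponding to exactly one falsifying assignment, so the model count is 2^n minus the size of the union of these sets, computed by inclusion-exclusion; a set of clauses shares a maximum term only if it contains no complementary literals.

```python
from itertools import combinations

def extension_rule_count(clauses, n):
    """Model count of a CNF over variables 1..n via inclusion-exclusion
    on the maximum terms extending each clause (signed-int literals)."""
    falsifying = 0
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            lits = set().union(*subset)
            # complementary literals: these clauses share no maximum term
            if any(-l in lits for l in lits):
                continue
            shared = 2 ** (n - len({abs(l) for l in lits}))
            falsifying += shared if k % 2 == 1 else -shared
    return 2 ** n - falsifying
```

For (x1 ∨ x2) ∧ (¬x1 ∨ x3) over three variables, the two clauses each contribute 2 falsifying maximum terms, their pair is blocked by the complementary pair x1/¬x1, and the count is 8 − 4 = 4. The exponential number of subsets is exactly why pruning and reasoning rules matter in practice.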


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 41042-41049 ◽
Author(s):  
Naiyu Tian ◽  
Dantong Ouyang ◽  
Fengyu Jia ◽  
Meng Liu ◽  
Liming Zhang

2020 ◽  
Vol 34 (04) ◽  
pp. 3097-3104 ◽
Author(s):  
Ralph Abboud ◽  
Ismail Ceylan ◽  
Thomas Lukasiewicz

Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case in which approximations with probabilistic guarantees can be obtained in O(nm) time, where n denotes the number of variables and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning and accurately approximates model counts in linear time when the clause width is bounded. We conduct experiments to validate our method and show that our model learns and generalizes very well to large-scale #DNF instances.
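The O(nm) guarantee mentioned above comes from Karp-Luby-style Monte Carlo estimation over a union of cubes; an unweighted minimal sketch follows (illustrative only, not the paper's neural model; the function name and the example formula are ours):

```python
import random

def karp_luby(dnf, n, samples=20000, seed=0):
    """Estimate the model count of a DNF (list of cubes, signed ints)
    over variables 1..n with the Karp-Luby union-of-cubes estimator."""
    rng = random.Random(seed)
    sizes = [2 ** (n - len(cube)) for cube in dnf]
    total = sum(sizes)                  # size of the multiset union
    hits = 0
    for _ in range(samples):
        # pick a cube with probability proportional to its solution count
        i = rng.choices(range(len(dnf)), weights=sizes)[0]
        # sample a satisfying assignment of cube i uniformly at random
        assignment = {abs(l): l > 0 for l in dnf[i]}
        for v in range(1, n + 1):
            assignment.setdefault(v, rng.random() < 0.5)
        # count the sample only if i is the first cube it satisfies,
        # so every solution is credited to exactly one cube
        first = next(j for j, cube in enumerate(dnf)
                     if all(assignment[abs(l)] == (l > 0) for l in cube))
        hits += (first == i)
    return total * hits / samples
```

For (x1) ∨ (x2) over two variables the true count is 3, and the estimate converges to it as the number of samples grows. The per-sample cost is O(nm), which is the source of the complexity bound quoted above.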


2010 ◽  
Vol 5 (7) ◽  
pp. 49-56
Author(s):  
Youjun Xu ◽  
Dantong Ouyang ◽  
Yuxin Ye

Author(s):  
Timothy van Bremen ◽  
Ondrej Kuzelka

We study the symmetric weighted first-order model counting task and present ApproxWFOMC, a novel anytime method for efficiently bounding the weighted first-order model count of a sentence given an unweighted first-order model counting oracle. The algorithm has applications to inference in a variety of first-order probabilistic representations, such as Markov logic networks and probabilistic logic programs. Crucially for many applications, no assumptions are made on the form of the input sentence. Instead, the algorithm makes use of the symmetry inherent in the problem by imposing cardinality constraints on the number of possible true groundings of a sentence's literals. Realising the first-order model counting oracle in practice using the approximate hashing-based model counter ApproxMC3, we show how our algorithm is competitive with existing approximate and exact techniques for inference in first-order probabilistic models. We additionally provide PAC guarantees on the accuracy of the bounds generated.
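The reduction this paragraph describes, from symmetric weighted counting to an unweighted counting oracle via cardinality constraints, can be sketched at the propositional level (a toy illustration under our own simplifications: a brute-force loop stands in for the first-order oracle, every atom gets the same weight pair, and all names are ours):

```python
from itertools import product

def oracle(formula, n, k):
    """Unweighted 'oracle': models of `formula` over n atoms with
    exactly k atoms true (brute force stands in for WFOMC here)."""
    return sum(1 for bits in product((0, 1), repeat=n)
               if sum(bits) == k and formula(bits))

def symmetric_wmc(formula, n, w_true, w_false):
    """Symmetric weighted model count: every model with k true atoms
    has weight w_true**k * w_false**(n - k), so one cardinality-
    constrained oracle call per k suffices."""
    return sum(oracle(formula, n, k) * w_true ** k * w_false ** (n - k)
               for k in range(n + 1))
```

For the sentence "at least one of three atoms is true" with weights (2, 1), the per-cardinality counts 3, 3, 1 give 3·2 + 3·4 + 1·8 = 26, which matches (2 + 1)^3 − 1 computed directly. ApproxWFOMC applies the same grouping idea with bounds rather than exact per-cardinality counts.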


Author(s):  
Supratik Chakraborty ◽  
Kuldeep S. Meel ◽  
Moshe Y. Vardi

Model counting, or counting the solutions of a set of constraints, is a fundamental problem in computer science with diverse applications. Since exact counting is computationally hard (#P-complete), approximate counting techniques have received much attention over the past few decades. In this chapter, we focus on counting models of propositional formulas and discuss in detail universal-hashing-based approximate counting, which has emerged as the predominant paradigm for state-of-the-art approximate model counters. These counters are randomized algorithms that exploit properties of universal hash functions to provide rigorous approximation guarantees, while piggybacking on impressive advances in propositional satisfiability solving to scale up to problem instances with a million variables. We elaborate on the various choices involved in designing such approximate counters and the implications of those choices. We also discuss variants of approximate model counting, such as DNF counting and weighted counting.
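The hashing idea can be sketched in a few lines: random XOR (parity) constraints partition the solution space pairwise-independently into 2^m cells, so the size of one surviving cell, scaled by 2^m, estimates the total count (a toy sketch under our own assumptions, with brute-force enumeration standing in for the CNF-XOR SAT queries a real counter issues; the example formula and parameters are ours):

```python
import random
from itertools import product
from statistics import median

def hash_count(formula, n, m, trials=25, seed=0):
    """Estimate the model count of `formula` over n variables by
    intersecting it with m random XOR constraints and scaling up."""
    rng = random.Random(seed)
    solutions = [bits for bits in product((0, 1), repeat=n) if formula(bits)]
    estimates = []
    for _ in range(trials):
        # h_i(x) = a_i . x XOR b_i: one random parity per constraint
        xors = [([rng.randint(0, 1) for _ in range(n)], rng.randint(0, 1))
                for _ in range(m)]
        cell = sum(1 for bits in solutions
                   if all(sum(a * x for a, x in zip(coeffs, bits)) % 2 == b
                          for coeffs, b in xors))
        estimates.append(cell * 2 ** m)   # scale the surviving cell
    return median(estimates)              # median of trials boosts confidence
```

In a counter like ApproxMC, the enumeration above is replaced by satisfiability queries on the formula conjoined with the XORs, and the number of constraints m is searched for adaptively so that cells are neither empty nor too large.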

