Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

2020 ◽ Vol 34 (04) ◽ pp. 3097-3104
Author(s): Ralph Abboud, Ismail Ceylan, Thomas Lukasiewicz

Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case, where approximations with probabilistic guarantees are obtained in O(nm), where n denotes the number of variables, and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when width is bounded. We conduct experiments to validate our method, and show that our model learns and generalizes very well to large-scale #DNF instances.
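The O(nm) approximation with probabilistic guarantees mentioned above is in the family of Karp-Luby-style Monte Carlo estimators for #DNF. Below is a minimal sketch of such an estimator for the unweighted case; the function and variable names are illustrative, not from the paper, and the paper's neural approach replaces this sampling loop with a learned model.

```python
import random

def approx_dnf_count(clauses, n_vars, n_samples=10000):
    """Karp-Luby-style Monte Carlo estimate of the number of models of a DNF.

    clauses: list of clauses, each a dict {var_index: required_bool_value}.
    n_vars : total number of Boolean variables, indexed 0 .. n_vars - 1.
    """
    # Each clause C_i is satisfied by 2^(n - |C_i|) assignments.
    sizes = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(sizes)  # size of the multiset union of per-clause model sets
    hits = 0
    for _ in range(n_samples):
        # Pick a clause with probability proportional to its model count,
        # then a uniform assignment satisfying that clause.
        i = random.choices(range(len(clauses)), weights=sizes)[0]
        assignment = {v: random.random() < 0.5 for v in range(n_vars)}
        assignment.update(clauses[i])
        # Count the sample only if i is the *first* clause the assignment
        # satisfies, so each model of the DNF is counted exactly once.
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == val for v, val in c.items()))
        hits += (first == i)
    return total * hits / n_samples

# (x0 AND x1) OR x2 over 3 variables has exactly 5 models.
print(approx_dnf_count([{0: True, 1: True}, {2: True}], n_vars=3))
```

The estimate is unbiased and converges to the true model count as the number of samples grows, at a cost of roughly O(nm) work per sample.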

2011 ◽ Vol 40 ◽ pp. 729-765
Author(s): W. Li, P. Poupart, P. Van Beek

Previous studies have demonstrated that encoding a Bayesian network into a SAT formula and then performing weighted model counting using a backtracking search algorithm can be an effective method for exact inference. In this paper, we present techniques for improving this approach for Bayesian networks with noisy-OR and noisy-MAX relations---two relations that are widely used in practice as they can dramatically reduce the number of probabilities one needs to specify. In particular, we present two SAT encodings for noisy-OR and two encodings for noisy-MAX that exploit the structure or semantics of the relations to improve both time and space efficiency, and we prove the correctness of the encodings. We experimentally evaluated our techniques on large-scale real and randomly generated Bayesian networks. On these benchmarks, our techniques gave speedups of up to two orders of magnitude over the best previous approaches for networks with noisy-OR/MAX relations and scaled up to larger networks. As well, our techniques extend the weighted model counting approach for exact inference to networks that were previously intractable for the approach.
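For reference, the noisy-OR relation mentioned above specifies a child's conditional distribution with one parameter per parent (plus an optional leak term) instead of a full exponential-size table, which is why it dramatically reduces the number of probabilities to specify. A minimal sketch of the standard noisy-OR semantics, with illustrative names; the paper's contribution is compact SAT encodings of this relation (and of noisy-MAX) for weighted model counting, which the sketch does not show.

```python
def noisy_or(active_parents, q, leak=0.0):
    """P(child = 1 | parent states) under the noisy-OR relation.

    active_parents: indices of parents that are 'on'.
    q[i]: probability that parent i alone activates the child.
    leak: probability that the child fires with no active parent.
    """
    p_off = 1.0 - leak
    for i in active_parents:
        p_off *= 1.0 - q[i]
    return 1.0 - p_off

# Three parents, two of them active: only 3 parameters (plus a leak)
# are needed instead of a full 2^3-row conditional probability table.
print(noisy_or(active_parents=[0, 2], q=[0.8, 0.6, 0.5], leak=0.01))
```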


2021 ◽ Vol 9 ◽ pp. 329-345
Author(s): Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins

Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
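As a concrete illustration of the dual-encoder scoring described above, and of a simple sparse-dense hybrid, here is a hedged NumPy sketch. The encoders themselves are abstracted away as precomputed vectors, and the interpolation weight is an assumed free parameter rather than the paper's model.

```python
import numpy as np

def dense_scores(query_vec, doc_vecs):
    """Dual-encoder retrieval: each document is scored by the inner product
    of its fixed-length dense encoding with the query encoding."""
    return doc_vecs @ query_vec

def hybrid_scores(query_vec, doc_vecs, sparse_scores, weight=0.5):
    """A simple sparse-dense hybrid: interpolate the dense inner-product
    score with a precomputed sparse (bag-of-words style) relevance score."""
    return dense_scores(query_vec, doc_vecs) + weight * sparse_scores

# Toy usage with random "encodings": 1000 documents, encoding dimension 128.
rng = np.random.default_rng(0)
query_vec = rng.normal(size=128)
doc_vecs = rng.normal(size=(1000, 128))
top5 = np.argsort(-dense_scores(query_vec, doc_vecs))[:5]
print(top5)
```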


2012 ◽ Vol 35 (12) ◽ pp. 2633
Author(s): Xiang-Hong LIN, Tian-Wen ZHANG, Gui-Cang ZHANG

2021 ◽ Vol 40 (3) ◽ pp. 1-13
Author(s): Lumin Yang, Jiajie Zhuang, Hongbo Fu, Xiangzhi Wei, Kun Zhou, ...

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the sampled points along the input strokes and edges encoding the stroke structure information. To predict the per-node labels, SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels: point-level, stroke-level, and sketch-level. SketchGNN significantly improves on the accuracy of state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale, challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
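The graph construction described above (nodes are points sampled along strokes, edges encode stroke structure) can be sketched as follows. This is only an illustrative reading of the abstract, not the authors' implementation, and it omits the graph-convolution and static-dynamic branching layers.

```python
import numpy as np

def sketch_to_graph(strokes):
    """Build a point graph from a stroke-based sketch.

    strokes: list of (k_i, 2) arrays of (x, y) points sampled along each stroke.
    Returns stacked node coordinates and an edge list linking consecutive
    points within each stroke, i.e. the stroke structure information.
    """
    nodes, edges, offset = [], [], 0
    for pts in strokes:
        nodes.append(pts)
        for j in range(len(pts) - 1):
            edges.append((offset + j, offset + j + 1))
            edges.append((offset + j + 1, offset + j))  # both directions
        offset += len(pts)
    return np.vstack(nodes), np.asarray(edges)

# Two strokes with 3 and 2 sampled points.
nodes, edges = sketch_to_graph([np.zeros((3, 2)), np.ones((2, 2))])
print(nodes.shape, edges.shape)  # (5, 2) (6, 2)
```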


2019 ◽ Vol 41 (13) ◽ pp. 3612-3625
Author(s): Wang Qian, Wang Qiangde, Wei Chunling, Zhang Zhengqiang

This paper solves the problem of decentralized adaptive state-feedback neural tracking control for a class of stochastic nonlinear high-order interconnected systems, under the assumption that the inverse dynamics of the subsystems are stochastic input-to-state stable (SISS). For the controller design, radial basis function (RBF) neural networks (NNs) are used to cope with the packaged unknown system dynamics and stochastic uncertainties. In addition, appropriate Lyapunov-Krasovskii functionals and parameters are constructed for a class of large-scale high-order stochastic nonlinear strongly interconnected systems with inverse dynamics. It is proved that the controller can be designed so as to guarantee that all signals in the closed-loop systems remain semi-globally uniformly ultimately bounded and that the tracking errors eventually converge to a small neighborhood of the origin. A simulation example is given to show the effectiveness of the results.
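For context, an RBF neural network of the kind used here to approximate the packaged unknown dynamics is a weighted sum of Gaussian basis functions. A minimal sketch with illustrative names; the adaptive laws that update the weights online via the Lyapunov-based analysis are not shown.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian basis functions phi_i(x) = exp(-||x - c_i||^2 / (2 * width^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_nn(x, weights, centers, width):
    """RBF neural network output W^T phi(x): the universal-approximator form
    used to model unknown dynamics; in an adaptive design the weights would
    be updated online by Lyapunov-based laws (not shown here)."""
    return weights @ rbf_features(x, centers, width)

# 5 basis functions over a 2-dimensional state.
centers = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
weights = np.zeros(5)
print(rbf_nn(np.array([0.3, 0.7]), weights, centers, width=0.5))
```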

