Probabilistic Circuits
Recently Published Documents

TOTAL DOCUMENTS: 19 (five years: 14)
H-INDEX: 4 (five years: 1)

2022
Author(s): Federico Cerutti, Lance M. Kaplan, Angelika Kimmig, Murat Şensoy

Author(s): Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, ...

Author(s): Eric Wang, Pasha Khosravi, Guy Van den Broeck

Understanding the behavior of learned classifiers is an important task, and various black-box explanations, logical reasoning approaches, and model-specific methods have been proposed. In this paper, we introduce probabilistic sufficient explanations, which formulate explaining an instance of classification as choosing the "simplest" subset of features such that observing only those features is "sufficient" to explain the classification; that is, sufficient to give strong probabilistic guarantees that the model will behave similarly when all features are observed under the data distribution. In addition, we leverage tractable probabilistic reasoning tools such as probabilistic circuits and expected predictions to design a scalable algorithm for finding the desired explanations while keeping the guarantees intact. Our experiments demonstrate the effectiveness of our algorithm in finding sufficient explanations, and showcase its advantages over Anchors and logical explanations.
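The sufficiency criterion above can be made concrete with a toy sketch. Everything here is an illustrative assumption, not the paper's algorithm: three independent binary features with made-up marginals, a majority-vote classifier, exact enumeration in place of circuit inference, and a naive greedy subset search.

```python
import itertools

# Illustrative setup: three independent binary features and a
# classifier that predicts 1 iff at least two features are 1.
P = [0.7, 0.6, 0.2]          # assumed marginals P(x_i = 1)

def classify(x):
    return int(sum(x) >= 2)

def agreement_prob(observed, target):
    """P(classifier output == target) when only the features in
    `observed` (a dict i -> value) are fixed and the rest follow the
    data distribution. Here computed by brute-force enumeration; a
    probabilistic circuit is what makes this tractable at scale."""
    total = 0.0
    free = [i for i in range(3) if i not in observed]
    for vals in itertools.product([0, 1], repeat=len(free)):
        x = dict(observed)
        p = 1.0
        for i, v in zip(free, vals):
            x[i] = v
            p *= P[i] if v == 1 else 1 - P[i]
        if classify([x[i] for i in range(3)]) == target:
            total += p
    return total

def sufficient_explanation(x, delta=0.9):
    """Greedily grow the observed subset until the probabilistic
    sufficiency guarantee P(same prediction) >= delta holds."""
    target = classify(x)
    chosen, remaining = {}, set(range(3))
    while agreement_prob(chosen, target) < delta:
        best = max(remaining,
                   key=lambda i: agreement_prob({**chosen, i: x[i]}, target))
        chosen[best] = x[best]
        remaining.remove(best)
    return chosen

x = [1, 1, 0]
expl = sufficient_explanation(x, delta=0.9)
print(expl)  # the two positive features already guarantee the prediction
```

With no features observed, the classifier agrees with the prediction for [1, 1, 0] only about 51% of the time; fixing the two positive features pushes the guarantee to 1.0, so the third feature is never needed in the explanation.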


2021, Vol. 15
Author(s): Rafatul Faria, Jan Kaiser, Kerem Y. Camsari, Supriyo Datta

Directed acyclic graphs, or Bayesian networks, which are popular in many AI-related sectors for probabilistic inference and causal reasoning, can be mapped to probabilistic circuits built out of probabilistic bits (p-bits), analogous to the binary stochastic neurons of stochastic artificial neural networks. To satisfy standard statistical results, individual p-bits need to be updated not only sequentially but also in order from parent to child nodes, necessitating the use of sequencers in software implementations. In this article, we first use SPICE simulations to show that an autonomous hardware Bayesian network can operate correctly without any clocks or sequencers, but only if the individual p-bits are appropriately designed. We then present a simple behavioral model of the autonomous hardware illustrating the essential characteristics needed for correct sequencer-free operation. This model is also benchmarked against SPICE simulations and can be used to simulate large-scale networks. Our results could be useful in the design of hardware accelerators that use energy-efficient building blocks suited for low-level implementations of Bayesian networks. The autonomous, massively parallel operation of our proposed stochastic hardware has biological relevance, since neural dynamics in the brain are also stochastic and autonomous by nature.
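The sequential, parent-before-child p-bit scheme that the article contrasts with can be sketched in a few lines. This assumes the p-bit behavioral equation commonly used in this literature, m = sgn(tanh(I) − u) with u drawn uniformly from (−1, 1), so that P(m = +1) = (1 + tanh(I)) / 2; the two-node network and its probabilities are invented for illustration.

```python
import math
import random

random.seed(0)

def p_bit(I):
    """Behavioral p-bit: stochastic binary output under bias input I.
    P(m = +1) = (1 + tanh(I)) / 2."""
    return 1 if math.tanh(I) > random.uniform(-1.0, 1.0) else -1

def bias(p):
    """Map a target probability p to the p-bit bias: I = atanh(2p - 1),
    so the p-bit fires +1 with probability exactly p."""
    return math.atanh(2.0 * p - 1.0)

# Illustrative two-node Bayesian network A -> B with
#   P(A=1) = 0.8,  P(B=1 | A=1) = 0.9,  P(B=1 | A=0) = 0.3.
def sample_network():
    # Parent is updated before the child, as the sequential
    # (sequencer-based) scheme requires.
    a = p_bit(bias(0.8))
    pa = 0.9 if a == 1 else 0.3
    b = p_bit(bias(pa))
    return (a + 1) // 2, (b + 1) // 2   # map {-1, +1} -> {0, 1}

N = 20000
counts = {}
for _ in range(N):
    s = sample_network()
    counts[s] = counts.get(s, 0) + 1

print({k: v / N for k, v in sorted(counts.items())})
```

The empirical frequencies should match the target joint, e.g. P(B=1) = 0.8 · 0.9 + 0.2 · 0.3 = 0.78. The article's point is that carefully designed hardware p-bits can reproduce such statistics without this explicit parent-to-child ordering.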


Author(s): Laura Isabel Galindez Olascoaga, Wannes Meert, Marian Verhelst

2020
Author(s): Renato Geh, Denis Mauá, Alessandro Antonucci

Probabilistic circuits are deep probabilistic models with neural-network-like semantics, capable of accurately and efficiently answering probabilistic queries without sacrificing expressiveness. Probabilistic Sentential Decision Diagrams (PSDDs) are a subclass of probabilistic circuits able to embed logical constraints into the circuit's structure. In doing so, they gain extra expressiveness with empirically optimal performance. Despite PSDDs achieving performance competitive with other state-of-the-art models, there have been very few attempts at learning them from a combination of both data and knowledge in the form of logical formulae. Our work investigates sampling random PSDDs consistent with domain knowledge and evaluating them against state-of-the-art probabilistic models. We propose a sampling method that retains the structural constraints on the circuit's graph that guarantee query tractability. Finally, we show that these samples achieve competitive performance even on larger domains.
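The query tractability these structural constraints buy can be illustrated with a minimal probabilistic-circuit sketch (not the paper's PSDD sampler). The circuit, its weights, and the two-variable domain are invented for illustration: leaves are univariate distributions, product nodes combine disjoint scopes (decomposability), and weighted sums mix children over the same scope (smoothness), so marginalizing a variable only requires replacing its leaves' values with 1.

```python
# Leaf over binary variable `var` with P(X_var = 1) = p1; returns 1
# when the variable is absent from the evidence (i.e. marginalized).
def leaf(var, p1):
    def f(evidence):
        if var not in evidence:
            return 1.0
        return p1 if evidence[var] == 1 else 1.0 - p1
    return f

# Product node: children must have disjoint scopes (decomposability).
def product(*children):
    def f(evidence):
        r = 1.0
        for c in children:
            r *= c(evidence)
        return r
    return f

# Sum node: convex mixture of children over the same scope (smoothness).
def weighted_sum(weights, children):
    def f(evidence):
        return sum(w * c(evidence) for w, c in zip(weights, children))
    return f

# Illustrative circuit: a mixture of two product components over
# binary variables 0 and 1.
circuit = weighted_sum(
    [0.4, 0.6],
    [product(leaf(0, 0.9), leaf(1, 0.2)),
     product(leaf(0, 0.1), leaf(1, 0.7))],
)

print(circuit({0: 1, 1: 0}))   # joint query P(X0=1, X1=0)
print(circuit({0: 1}))         # marginal P(X0=1): variable 1 summed out
```

Both queries cost one bottom-up pass, linear in circuit size; a random PSDD sampled under the same structural constraints inherits exactly this property, which is why the paper insists on preserving them.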


Author(s): Ming-Ting Lee, Chen-Hung Wu, Shi-Tang Liu, Cheng-Yun Hsieh, James Chien-Mo Li
