Provable Efficient Skeleton Learning of Encodable Discrete Bayes Nets in Poly-Time and Sample Complexity

Author(s): Adarsh Barik ◽ Jean Honorio

2021 ◽ Vol 20 (8)
Author(s): Wooyeong Song ◽ Marcin Wieśniak ◽ Nana Liu ◽ Marcin Pawłowski ◽ Jinhyoung Lee ◽ ...

2020 ◽ Vol 415 ◽ pp. 286-294
Author(s): Hassan Hafez-Kolahi ◽ Shohreh Kasaei ◽ Mahdiyeh Soleymani-Baghshah

2012 ◽ Vol 91 (1) ◽ pp. 1-42
Author(s): Lena Chekina ◽ Dan Gutfreund ◽ Aryeh Kontorovich ◽ Lior Rokach ◽ Bracha Shapira

2008 ◽ Vol 8 (3&4) ◽ pp. 345-358
Author(s): M. Hayashi ◽ A. Kawachi ◽ H. Kobayashi

One of the central issues in the hidden subgroup problem is to bound the sample complexity, i.e., the number of identical copies of coset states necessary and sufficient to solve the problem. In this paper, we present general bounds on the sample complexity of the identification and decision versions of the hidden subgroup problem. As a consequence of these bounds, we show that the sample complexity of both the decision and identification versions is $\Theta(\log|\mathcal{H}|/\log p)$ for a candidate set $\mathcal{H}$ of hidden subgroups in the case where the candidate nontrivial subgroups have the same prime order $p$, which implies that the decision version is at least as hard as the identification version in this case. In particular, this holds for important cases such as the dihedral and symmetric hidden subgroup problems. Moreover, the upper bound for identification is attained by a variant of the pretty good measurement, which shows that the pretty good measurement is a useful tool for identifying hidden subgroups over an arbitrary group with optimal sample complexity.
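To make the bound concrete, the display below specializes it to the dihedral case; this instantiation is mine, not taken from the paper, and assumes the candidate set consists of the $N$ order-2 subgroups generated by reflections in $D_N$.

```latex
% Illustrative instantiation (assumption: candidate set = the N reflection
% subgroups of the dihedral group D_N, each of prime order p = 2).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
With $|\mathcal{H}| = N$ and $p = 2$, the general bound specializes to
\[
  \Theta\!\left(\frac{\log |\mathcal{H}|}{\log p}\right)
  = \Theta\!\left(\frac{\log N}{\log 2}\right)
  = \Theta(\log N)
\]
coset-state copies for both the identification and decision versions.
\end{document}
```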


Author(s): Mike Oaksford ◽ Nick Chater

There are deep intuitions that the meaning of conditional statements relates to probabilistic, law-like dependencies. In this chapter it is argued that these intuitions can be captured by representing conditionals in causal Bayes nets (CBNs), and that this conjecture is theoretically productive. The proposal is borne out in a variety of results. First, causal considerations can provide a unified account of abstract and causal conditional reasoning. Second, a recent model (Fernbach & Erb, 2013) can be extended to the explicit causal conditional reasoning paradigm (Byrne, 1989), yielding some novel predictions along the way. Third, when embedded in the broader cognitive system involved in reasoning, causal model theory can provide a novel explanation for apparent violations of the Markov condition in causal conditional reasoning (Ali et al., 2011). Alternative explanations of this evidence are also considered (see Rehder, 2014a). While further work is required, the chapter concludes that the conjecture that conditional reasoning is underpinned by representations and processes similar to CBNs is indeed a productive line of research.
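As a minimal sketch of the probabilistic reading described above (my illustration, not from the chapter; the event names and all probabilities are assumptions), the snippet below represents the conditional "if the key is turned, the car starts" as a tiny causal Bayes net with a latent disabler, and evaluates the conditional as P(effect | cause):

```python
# Minimal sketch (illustrative, not from the chapter): the conditional
# "if the key is turned (K), the car starts (S)" as a tiny causal Bayes
# net with a latent disabler D (e.g., a dead battery). Numbers are assumed.

P_D = 0.1          # prior probability that a disabler is present
P_S = {            # P(S=1 | K, D): starts when key turned and no disabler
    (1, 0): 0.99, (1, 1): 0.05,
    (0, 0): 0.00, (0, 1): 0.00,
}

def p_s_given_k(k: int) -> float:
    """P(S=1 | K=k), marginalizing over the disabler D."""
    return sum((P_D if d else 1.0 - P_D) * P_S[(k, d)] for d in (0, 1))

# On the CBN reading, "if K then S" is evaluated as P(S | K).
print(f"P(S=1 | K=1) = {p_s_given_k(1):.3f}")  # ~0.90: conditional accepted
print(f"P(S=1 | K=0) = {p_s_given_k(0):.3f}")  # 0: no alternative causes here
```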


2011 ◽ Vol 20 (8) ◽ pp. 909
Author(s): T. D. Penman ◽ O. Price ◽ R. A. Bradstock

Wildfire can result in significant economic costs, and the inquiries that follow such events often recommend increased management effort to reduce the risk of future losses. Currently, there is no objective framework in which to assess the relative merits of management actions or the synergistic way in which various combinations of them may act. We examine the value of Bayes nets as a method for assessing the risk reduction delivered by fire management practices, using a case study from a forested landscape. Specifically, we consider the relative reduction in wildfire risk from investing in prescribed burning, initial (rapid) attack, and suppression. The Bayes net was developed using existing datasets, a process model, and expert opinion. We compared the model's results with recorded fire data for an 11-year period beginning in 1997, and the model successfully reproduced these data. Initial attack and suppression effort had the greatest effect on the distribution of fire sizes within a season. Bayes nets provide a holistic model for considering the effect of multiple fire management methods on wildfire risk. The approach could be advanced further by including the costs of management and conducting a formal decision analysis.
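A minimal sketch of the kind of query such a network supports (the structure and all conditional probabilities here are hypothetical, not the paper's fitted network): two management nodes shift the probability of a large seasonal fire, and intervening on them quantifies the risk reduction.

```python
# Hypothetical discrete Bayes net (illustrative CPTs, not the paper's model):
# initial attack (A) and suppression effort (E) shift the probability that
# the seasonal fire size F is "large".
from itertools import product

P_A = {1: 0.7, 0: 0.3}            # P(initial attack succeeds)
P_E = {1: 0.6, 0: 0.4}            # P(high suppression effort)
P_F_LARGE = {                     # P(F=large | A, E), assumed numbers
    (1, 1): 0.02, (1, 0): 0.05,
    (0, 1): 0.15, (0, 0): 0.40,
}

def p_large(do_a=None, do_e=None) -> float:
    """P(F=large), optionally intervening on attack and/or effort."""
    total = 0.0
    for a, e in product((0, 1), repeat=2):
        if (do_a is not None and a != do_a) or (do_e is not None and e != do_e):
            continue
        w_a = 1.0 if do_a is not None else P_A[a]
        w_e = 1.0 if do_e is not None else P_E[e]
        total += w_a * w_e * P_F_LARGE[(a, e)]
    return total

print(f"baseline P(large fire)    = {p_large():.3f}")
print(f"guaranteed initial attack = {p_large(do_a=1):.3f}")
print(f"attack + high suppression = {p_large(do_a=1, do_e=1):.3f}")
```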


2018 ◽ Vol 7 (3) ◽ pp. 581-604
Author(s): Armin Eftekhari ◽ Michael B Wakin ◽ Rachel A Ward

Leverage scores, loosely speaking, reflect the importance of the rows and columns of a matrix. Ideally, given the leverage scores of a rank-$r$ matrix $M \in \mathbb{R}^{n\times n}$, that matrix can be reliably completed from just $O(rn\log^2 n)$ samples if the samples are chosen randomly from a non-uniform distribution induced by the leverage scores. In practice, however, the leverage scores are often unknown a priori. As such, the sample complexity in uniform matrix completion (using uniform random sampling) increases to $O(\eta(M)\cdot rn\log^2 n)$, where $\eta(M)$ is the largest leverage score of $M$. In this paper, we propose a two-phase algorithm called MC2 for matrix completion: in the first phase, the leverage scores are estimated from uniform random samples; in the second phase, the matrix is resampled non-uniformly based on the estimated leverage scores and then completed. For well-conditioned matrices, the total sample complexity of MC2 is no worse than that of uniform matrix completion, and for certain classes of well-conditioned matrices, namely reasonably coherent matrices whose leverage scores exhibit mild decay, MC2 requires substantially fewer samples. Numerical simulations suggest that the algorithm outperforms uniform matrix completion on a broad class of matrices and, in particular, is much less sensitive to the condition number than our theory currently requires.
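The two-phase idea can be sketched in a few lines of NumPy (my sketch under stated assumptions: a crude rescaled zero-fill stands in for a real completion solver in phase one, and the constant `c` is illustrative rather than the paper's). Leverage scores of a rank-$r$ matrix $M = U\Sigma V^T$ are the squared row norms of $U$ and $V$, scaled by $n/r$:

```python
# Sketch of the two-phase MC2 idea (illustrative; a rescaled zero-filled
# estimate stands in for a proper phase-1 completion solver).
import numpy as np

rng = np.random.default_rng(0)
n, r, c = 200, 5, 0.1

# Ground-truth rank-r matrix (unknown to the algorithm).
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

def leverage_scores(A, r):
    """Row/column leverage scores of A's best rank-r approximation."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    mu = (A.shape[0] / r) * np.sum(U[:, :r] ** 2, axis=1)   # rows
    nu = (A.shape[1] / r) * np.sum(Vt[:r, :] ** 2, axis=0)  # columns
    return mu, nu

# Phase 1: uniform samples, then a crude unbiased estimate of M.
m1 = int(2 * r * n * np.log(n))
mask1 = rng.random((n, n)) < m1 / n**2
M_est = np.where(mask1, M, 0.0) / (m1 / n**2)
mu, nu = leverage_scores(M_est, r)

# Phase 2: resample entry (i, j) with probability ~ (mu_i + nu_j),
# i.e., proportional to the estimated importance of its row and column.
p2 = np.minimum(1.0, c * (mu[:, None] + nu[None, :]) * r * np.log(n)**2 / n)
mask2 = rng.random((n, n)) < p2
print(f"phase-1 samples: {mask1.sum()}, phase-2 samples: {mask2.sum()}")
# mask1 | mask2 would then be handed to a matrix-completion solver.
```

A nuclear-norm or alternating-minimization solver would replace the zero-fill step in any serious use; the point of the sketch is only the estimate-then-resample control flow.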

