2015 ◽  
Author(s):  
Alejandro Corvalan ◽  
Emerson Melo ◽  
Robert P Sherman ◽  
Matthew Shum

Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 312
Author(s):  
Ilze A. Auzina ◽  
Jakub M. Tomczak

Many real-life processes are black-box problems, i.e., their internal workings are inaccessible or a closed-form mathematical expression of the likelihood function cannot be defined. For continuous random variables, likelihood-free inference problems can be solved via Approximate Bayesian Computation (ABC). However, a comparable alternative for discrete random variables has yet to be formulated. Here, we aim to fill this research gap. We propose an adjusted population-based MCMC ABC method that redefines the standard ABC parameters for discrete random variables and introduces a novel Markov kernel inspired by differential evolution. We first assess the proposed Markov kernel on a likelihood-based inference problem, namely recovering the underlying diseases in a QMR-DT network, and subsequently evaluate the entire method on three likelihood-free inference problems: (i) the QMR-DT network with an unknown likelihood function, (ii) learning a binary neural network, and (iii) neural architecture search. The results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.
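
A minimal sketch of the general idea in Python (not the authors' implementation): a population of binary parameter chains in which the differential-evolution-style proposal is formed by XOR-ing two other population members into the current one, and acceptance uses the usual ABC distance threshold. The toy simulator, the distance, and all numeric settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # Toy black-box simulator over binary parameters (an assumption for
    # illustration): a Poisson count whose rate grows with the active bits.
    return rng.poisson(1 + 3 * theta.sum())

def distance(x, x_obs):
    return abs(x - x_obs)

def abc_population_mcmc(x_obs, dim=10, pop_size=20, iters=2000, eps=2.0):
    # Population of binary parameter vectors, one per chain.
    pop = rng.integers(0, 2, size=(pop_size, dim))
    for _ in range(iters):
        i = rng.integers(pop_size)
        r1, r2 = rng.choice([k for k in range(pop_size) if k != i],
                            size=2, replace=False)
        diff = pop[r1] ^ pop[r2]                     # discrete analogue of the DE difference vector
        flips = (rng.random(dim) < 0.1).astype(int)  # occasional extra random flips
        proposal = pop[i] ^ diff ^ flips
        # With a uniform prior and a symmetric proposal, the Metropolis ratio
        # reduces to the ABC distance check.
        if distance(simulator(proposal), x_obs) <= eps:
            pop[i] = proposal
    return pop

samples = abc_population_mcmc(x_obs=25)
```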


1991 ◽  
Vol 15 (2) ◽  
pp. 123-138
Author(s):  
Joachim Biskup ◽  
Bernhard Convent

In this paper the relationship between dependency theory and first-order logic is explored in order to show how relational chase procedures (i.e., algorithms that decide inference problems for dependencies) can be interpreted as clever implementations of well-known refutation procedures of first-order logic based on resolution and paramodulation. On the one hand, this alternative interpretation provides deeper insight into the theoretical foundations of chase procedures; on the other hand, it makes an already well-established theory, with a wealth of known results and techniques, available for further investigation of the inference problem for dependencies. Our presentation is a detailed and careful elaboration of an idea originally outlined by Grant and Jacobs which, up to now, seems to have been disregarded by the database community, although it clearly deserves more attention.
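
As a small, concrete illustration of what a chase step computes (a hypothetical example, not the paper's general formalism): for functional dependencies the chase specializes to the familiar attribute-closure test, repeatedly "equating symbols" until either the target dependency is implied or no rule applies. The helper names below are ours.

```python
def closure(attrs, fds):
    """Attribute closure under a set of functional dependencies.
    `fds` is a list of (lhs, rhs) pairs of attribute sets; the loop is the
    chase specialised to functional dependencies."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs        # apply the dependency ("equate symbols")
                changed = True
    return result

def implies(fds, lhs, rhs):
    """Decide whether `fds` logically implies lhs -> rhs, i.e., whether the
    chase (here: a closure computation) succeeds."""
    return set(rhs) <= closure(lhs, fds)

# Example: {A -> B, B -> C} implies A -> C (transitivity).
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(implies(fds, {"A"}, {"C"}))    # True
```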


2018 ◽  
Vol 30 (11) ◽  
pp. 3072-3094 ◽  
Author(s):  
Hongqiao Wang ◽  
Jinglai Li

We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP)–based method to approximate the joint distribution of the unknown parameters and the data, built on recent work (Kandasamy, Schneider, & Póczos, 2015). In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. We then provide an adaptive algorithm to construct such an approximation, where an active learning method is used to choose the design points. With numerical examples, we illustrate that the proposed method has competitive performance against existing approaches for Bayesian computation.
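
A minimal sketch of the surrogate idea, assuming scikit-learn and a toy one-dimensional likelihood (neither is taken from the paper): fit a GP to log-likelihood evaluations at a fixed design set and run a plain Metropolis sampler on the GP posterior mean; the adaptive, active-learning selection of design points described above is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_log_likelihood(theta):
    # Stand-in for a costly, simulator-based log-likelihood (assumption).
    return -0.5 * ((theta - 1.5) / 0.3) ** 2

# A small, fixed design set of likelihood evaluations.
design = np.linspace(-1.0, 4.0, 15).reshape(-1, 1)
logl = np.array([expensive_log_likelihood(t[0]) for t in design])

# GP surrogate for the log-likelihood surface.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(design, logl)

def surrogate_log_post(theta):
    # Cheap approximation: GP posterior mean of the log-likelihood (flat prior).
    return gp.predict(np.atleast_2d(theta))[0]

# Plain Metropolis sampling on the surrogate instead of the expensive target.
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.random()) < surrogate_log_post(prop) - surrogate_log_post(theta):
        theta = prop
    chain.append(theta)
```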


PLoS ONE ◽  
2018 ◽  
Vol 13 (12) ◽  
pp. e0208499 ◽  
Author(s):  
Rodrigo Carvajal ◽  
Rafael Orellana ◽  
Dimitrios Katselis ◽  
Pedro Escárate ◽  
Juan Carlos Agüero

Author(s):  
Eugene Poh ◽  
Naser Al-Fawakari ◽  
Rachel Tam ◽  
Jordan A. Taylor ◽  
Samuel D. McDougle

To generate adaptive movements, we must generalize what we have previously learned to novel situations. The generalization of learned movements has typically been framed as a consequence of neural tuning functions that overlap for similar movement kinematics. However, as is true in many domains of human behavior, situations that require generalization can also be framed as inference problems. Here, we attempt to broaden the scope of theories about motor generalization, hypothesizing that part of the typical motor generalization function can be characterized as a consequence of top-down decisions about different movement contexts. We tested this proposal by having participants make explicit similarity ratings over traditional contextual dimensions (movement directions) and abstract contextual dimensions (target shape), and then perform a visuomotor adaptation generalization task in which trials varied along those dimensions. We found support for our predictions across five experiments, which revealed a tight link between subjective similarity and motor generalization. Our findings suggest that the generalization of learned motor behaviors is influenced by both low-level kinematic features and high-level inferences.
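
One way to make the hypothesized decomposition concrete (a purely illustrative sketch, not the authors' model): express generalization at a probe as a weighted mixture of a kinematic tuning term, Gaussian in the angular distance between movement directions, and a contextual-inference term driven by the judged similarity of the contexts. The function names and parameter values are assumptions.

```python
import numpy as np

def generalization(angular_dist_deg, similarity, w=0.5, sigma=30.0):
    """Hypothetical two-part account of motor generalization: a kinematic
    tuning component (Gaussian over the angular distance between movement
    directions) mixed with a contextual-inference component proportional to
    the judged similarity of the probe and training contexts (in [0, 1])."""
    kinematic = np.exp(-0.5 * (angular_dist_deg / sigma) ** 2)
    contextual = similarity
    return w * kinematic + (1 - w) * contextual

# Example: a probe 45 degrees from the trained direction, rated moderately similar.
print(generalization(angular_dist_deg=45.0, similarity=0.6))
```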

