Nonparametric Bayesian Statistics
Recently Published Documents


TOTAL DOCUMENTS: 6 (five years: 5)

H-INDEX: 2 (five years: 2)

Author(s): Joshua D. Karslake, Eric D. Donarski, Sarah A. Shelby, Lucas M. Demey, Victor J. DiRita, ...

2019
Author(s): Danielle Navarro, Tom Griffiths

One of the central problems in cognitive science is determining the mental representations that underlie human inferences. Solutions to this problem often rely on the analysis of subjective similarity judgments, on the assumption that recognizing likenesses between people, objects, and events is crucial to everyday inference. One such solution is provided by the additive clustering model, which is widely used to infer the features of a set of stimuli from their similarities, on the assumption that similarity is a weighted linear function of common features. Existing approaches for implementing additive clustering often lack a complete framework for statistical inference, particularly with respect to choosing the number of features. To address these problems, this article develops a fully Bayesian formulation of the additive clustering model, using methods from nonparametric Bayesian statistics to allow the number of features to vary. We use this to explore several approaches to parameter estimation, showing that the nonparametric Bayesian approach provides a straightforward way to obtain estimates of both the number of features and their importance.
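The model in this abstract predicts the similarity of stimuli i and j as a weighted sum over their shared features, s_ij ≈ sum_k w_k * f_ik * f_jk. A minimal numpy sketch of that prediction step is below (the function name and toy data are illustrative, not the authors' code); the nonparametric Bayesian prior is what allows the number of columns of F, i.e. the number of features, to vary during inference.

```python
import numpy as np

def predicted_similarity(F, w):
    """Additive clustering prediction: s_hat[i, j] = sum_k w[k] * F[i, k] * F[j, k].

    F : (n_stimuli, n_features) binary matrix saying which stimuli have which features
    w : (n_features,) non-negative feature weights
    """
    return (F * w) @ F.T

# Toy example: 4 stimuli described by 2 hypothesised features
F = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]])
w = np.array([0.6, 0.3])
print(predicted_similarity(F, w))
```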


2019
Author(s): Danielle Navarro, Tom Griffiths, Mark Steyvers, Michael David Lee

We introduce a Bayesian framework for modeling individual differences, in which subjects are assumed to belong to one of a potentially infinite number of groups. In this model, the groups observed in any particular data set are not viewed as a fixed set that fully explains the variation between individuals, but rather as representatives of a latent, arbitrarily rich structure. As more people are seen, and more details about the individual differences are revealed, the number of inferred groups is allowed to grow. We use the Dirichlet process—a distribution widely used in nonparametric Bayesian statistics—to define a prior for the model, allowing us to learn flexible parameter distributions without overfitting the data, or requiring the complex computations typically required for determining the dimensionality of a model. As an initial demonstration of the approach, we present three applications that analyze the individual differences in category learning, choice of publication outlets, and web browsing behavior.
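The Dirichlet process prior mentioned here can be simulated with the Chinese restaurant process: each new person joins an existing group with probability proportional to that group's current size, or starts a new group with probability proportional to a concentration parameter. The sketch below (names and values are illustrative, not the authors' code) shows how the number of inferred groups grows as more people are observed.

```python
import numpy as np

def crp_assignments(n_people, alpha, rng=None):
    """Sample group assignments from a Chinese restaurant process.

    Person i joins existing group g with probability |g| / (i + alpha),
    or starts a new group with probability alpha / (i + alpha).
    """
    rng = np.random.default_rng(rng)
    assignments = []
    counts = []  # number of people in each group so far
    for i in range(n_people):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        g = rng.choice(len(probs), p=probs)
        if g == len(counts):       # last option = open a new group
            counts.append(1)
        else:
            counts[g] += 1
        assignments.append(g)
    return assignments, len(counts)

# The number of groups keeps growing (slowly) as more people are seen
for n in (10, 100, 1000):
    _, k = crp_assignments(n, alpha=1.0, rng=0)
    print(f"{n} people -> {k} groups")
```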


2019
Author(s): Adam N. Sanborn, Tom Griffiths, Danielle Navarro

Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
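As a rough illustration of the single-particle case described above, the sketch below sequentially assigns binary-feature stimuli to categories under a Chinese-restaurant-process prior with beta-Bernoulli feature likelihoods, which is approximately the structure of Anderson's rational model. This is not the authors' implementation: with one particle, the filter simply samples one assignment per incoming stimulus and never revises it, and all names and parameter values here are illustrative.

```python
import numpy as np

def single_particle_filter(stimuli, alpha=1.0, a=1.0, b=1.0, rng=None):
    """Sequentially assign binary-feature stimuli to clusters with one particle.

    Prior over assignments: Chinese restaurant process with concentration alpha.
    Likelihood per feature: Bernoulli with a Beta(a, b) prior, so the posterior
    predictive for a feature in cluster k is (successes + a) / (members + a + b).
    The single particle keeps exactly one assignment history and never revises it.
    """
    rng = np.random.default_rng(rng)
    clusters = []      # each cluster: {"n": member count, "sums": per-feature counts}
    assignments = []
    for x in np.asarray(stimuli, dtype=float):
        weights = []
        # existing clusters: CRP prior weight times posterior-predictive likelihood
        for c in clusters:
            pred = (c["sums"] + a) / (c["n"] + a + b)
            lik = np.prod(np.where(x == 1, pred, 1 - pred))
            weights.append(c["n"] * lik)
        # brand-new cluster: prior-predictive likelihood under Beta(a, b)
        pred_new = a / (a + b)
        lik_new = np.prod(np.where(x == 1, pred_new, 1 - pred_new))
        weights.append(alpha * lik_new)
        probs = np.array(weights) / np.sum(weights)
        k = rng.choice(len(probs), p=probs)
        if k == len(clusters):
            clusters.append({"n": 0, "sums": np.zeros_like(x)})
        clusters[k]["n"] += 1
        clusters[k]["sums"] += x
        assignments.append(int(k))
    return assignments

# Toy stimuli with two obvious groups of binary features
data = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0]]
print(single_particle_filter(data, rng=0))
```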


2019
Author(s): J.D. Karslake, E.D. Donarski, S.A. Shelby, L.M. Demey, V.J. DiRita, ...

Abstract

Single-molecule fluorescence microscopy probes nanoscale, subcellular biology in real time. Existing methods for analyzing single-particle tracking data provide dynamical information, but can suffer from supervisory biases and high uncertainties. Here, we introduce a new approach to analyzing single-molecule trajectories: the Single-Molecule Analysis by Unsupervised Gibbs sampling (SMAUG) algorithm, which uses nonparametric Bayesian statistics to uncover the whole range of information contained within a single-particle trajectory (SPT) dataset. Even in complex systems where multiple biological states lead to a number of observed mobility states, SMAUG provides the number of mobility states, the average diffusion coefficient of single molecules in that state, the fraction of single molecules in that state, the localization noise, and the probability of transitioning between two different states. In this paper, we provide the theoretical background for the SMAUG analysis and then we validate the method using realistic simulations of SPT datasets as well as experiments on a controlled in vitro system. Finally, we demonstrate SMAUG on real experimental systems in both prokaryotes and eukaryotes to measure the motions of the regulatory protein TcpP in Vibrio cholerae and the dynamics of the B-cell receptor antigen response pathway in lymphocytes. Overall, SMAUG provides a mathematically rigorous approach to measuring the real-time dynamics of molecular interactions in living cells.

Statement of Significance

Super-resolution microscopy allows researchers access to the motions of individual molecules inside living cells. However, due to experimental constraints and unknown interactions between molecules, rigorous conclusions cannot always be made from the resulting datasets when model fitting is used. SMAUG (Single-Molecule Analysis by Unsupervised Gibbs sampling) is an algorithm that uses Bayesian statistical methods to uncover the underlying behavior masked by noisy datasets. This paper outlines the theory behind the SMAUG approach, discusses its implementation, and then uses simulated data and simple experimental systems to show the efficacy of the SMAUG algorithm. Finally, this paper applies the SMAUG method to two model living cellular systems, one bacterial and one mammalian, and reports the dynamics of important membrane proteins to demonstrate the usefulness of SMAUG to a variety of systems.
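For intuition about the Gibbs sampling step at the heart of this kind of analysis, the sketch below reassigns 2D displacement steps to mobility states and re-estimates each state's diffusion coefficient, assuming each displacement in a state with coefficient D is Gaussian with variance 2*D*dt per axis. This is not the SMAUG implementation: it ignores localization noise and state-to-state transition probabilities, uses a fixed finite number of states as a stand-in for the nonparametric prior, and all names are illustrative.

```python
import numpy as np

def gibbs_sweep(steps, labels, Ds, dt, alpha=1.0, rng=None):
    """One simplified Gibbs sweep over 2D displacement vectors.

    steps  : (n, 2) array of frame-to-frame displacements
    labels : current mobility-state assignment for each step
    Ds     : current diffusion coefficient for each state
    A displacement in state k is modelled as Gaussian with variance 2*Ds[k]*dt
    per axis (localization noise and state transitions are ignored here).
    """
    rng = np.random.default_rng(rng)
    labels = np.array(labels)
    for i, d in enumerate(steps):
        counts = np.bincount(np.delete(labels, i), minlength=len(Ds))
        counts = counts + alpha / len(Ds)   # finite-mixture stand-in for the DP prior
        var = 2.0 * np.asarray(Ds) * dt
        loglik = -np.sum(d ** 2) / (2.0 * var) - np.log(2.0 * np.pi * var)
        logp = np.log(counts) + loglik
        p = np.exp(logp - logp.max())
        labels[i] = rng.choice(len(Ds), p=p / p.sum())
    # re-estimate each state's diffusion coefficient from its assigned steps
    new_Ds = [np.mean(steps[labels == k] ** 2) / (2.0 * dt) if np.any(labels == k) else Ds[k]
              for k in range(len(Ds))]
    return labels, new_Ds

# Toy data: a slow (0.05) and a fast (1.0) population of steps, dt = 40 ms
rng = np.random.default_rng(0)
steps = np.vstack([rng.normal(0, np.sqrt(2 * 0.05 * 0.04), (200, 2)),
                   rng.normal(0, np.sqrt(2 * 1.0 * 0.04), (200, 2))])
labels = rng.integers(0, 2, len(steps))
Ds = [0.1, 0.5]
for _ in range(20):
    labels, Ds = gibbs_sweep(steps, labels, Ds, dt=0.04, rng=rng)
print(Ds)   # should roughly recover the two simulated coefficients, up to label order
```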


2008
Vol. 20 (11), pp. 2597-2628
Author(s): Daniel J. Navarro, Thomas L. Griffiths

One of the central problems in cognitive science is determining the mental representations that underlie human inferences. Solutions to this problem often rely on the analysis of subjective similarity judgments, on the assumption that recognizing likenesses between people, objects, and events is crucial to everyday inference. One such solution is provided by the additive clustering model, which is widely used to infer the features of a set of stimuli from their similarities, on the assumption that similarity is a weighted linear function of common features. Existing approaches for implementing additive clustering often lack a complete framework for statistical inference, particularly with respect to choosing the number of features. To address these problems, this article develops a fully Bayesian formulation of the additive clustering model, using methods from nonparametric Bayesian statistics to allow the number of features to vary. We use this to explore several approaches to parameter estimation, showing that the nonparametric Bayesian approach provides a straightforward way to obtain estimates of both the number of features and their importance.
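The journal version of this work relies on a prior that lets the number of latent features grow with the data; the Indian Buffet Process is the standard nonparametric Bayesian prior over binary feature matrices for this purpose, and the sketch below draws a feature matrix from it (assuming the IBP is the prior in question; the function name and parameters are illustrative, not the authors' code).

```python
import numpy as np

def sample_ibp(n_objects, alpha, rng=None):
    """Draw a binary feature-ownership matrix Z from the Indian Buffet Process.

    Object i (1-indexed) takes each existing feature k with probability m_k / i,
    where m_k is the number of earlier objects owning that feature, and then adds
    Poisson(alpha / i) entirely new features of its own.
    """
    rng = np.random.default_rng(rng)
    columns = []                               # one ownership list per feature
    for i in range(1, n_objects + 1):
        for col in columns:                    # revisit existing features
            col.append(int(rng.random() < sum(col) / i))
        for _ in range(rng.poisson(alpha / i)):   # introduce brand-new features
            columns.append([0] * (i - 1) + [1])
    if not columns:
        return np.zeros((n_objects, 0), dtype=int)
    return np.array(columns).T                 # shape (n_objects, n_features)

Z = sample_ibp(n_objects=8, alpha=2.0, rng=1)
print(Z.shape)   # the number of features varies from draw to draw
print(Z)
```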

