Bayesian Inference: Understanding Experimental Data With Informative Hypotheses

2020 ◽  
Vol 22 (11) ◽  
pp. 2118-2121
Author(s):  
Sabeeh A Baig
2020 ◽  
Vol 60 ◽  
pp. 103025 ◽  
Author(s):  
Chiara Pepi ◽  
Massimiliano Gioffrè ◽  
Mircea Grigoriu

2019 ◽  
Author(s):  
C. Vaghi ◽  
A. Rodallec ◽  
R. Fanciullino ◽  
J. Ciccolini ◽  
J. Mochel ◽  
...  

Abstract
Tumor growth curves are classically modeled by ordinary differential equations. In analyzing the Gompertz model, several studies have reported a striking correlation between the two parameters of the model. We analyzed tumor growth kinetics within the statistical framework of nonlinear mixed-effects (population approach). This allowed for the simultaneous modeling of tumor dynamics and inter-animal variability. Experimental data comprised three animal models of breast and lung cancers, with 843 measurements in 94 animals. Candidate models of tumor growth included the Exponential, Logistic and Gompertz. The Exponential and, more notably, the Logistic models failed to describe the experimental data, whereas the Gompertz model generated very good fits. The population-level correlation between the Gompertz parameters was further confirmed in our analysis (R² > 0.96 in all groups). Combining this structural correlation with rigorous population parameter estimation, we propose a novel reduced Gompertz function consisting of a single individual parameter. Leveraging the population approach using Bayesian inference, we estimated the time of tumor initiation using three late measurement timepoints. The reduced Gompertz model was found to exhibit the best results, with drastic improvements when using Bayesian inference as compared to likelihood maximization alone, for both accuracy and precision. Specifically, mean accuracy was 12.1% versus 74.1% and mean precision was 15.2 days versus 186 days, for the breast cancer cell line. These results offer promising clinical perspectives for the personalized prediction of tumor age from limited data at diagnosis. In turn, such predictions could be helpful for assessing the extent of invisible metastasis at the time of diagnosis.

Author summary
Mathematical models for tumor growth kinetics have been widely used for several decades but mostly fitted to individual or average growth curves. Here we compared three classical models (Exponential, Logistic and Gompertz) using a population approach, which accounts for inter-animal variability. The Exponential and the Logistic models failed to fit the experimental data, while the Gompertz model showed excellent descriptive power. Moreover, the strong correlation between the two parameters of the Gompertz equation motivated a simplification of the model, the reduced Gompertz model, with a single individual parameter and equal descriptive power. Combining the mixed-effects approach with Bayesian inference, we predicted the age of individual tumors with only a few late measurements. Thanks to its simplicity, the reduced Gompertz model showed superior predictive power. Although our method remains to be extended to clinical data, these results are promising for the personalized estimation of the age of a tumor from limited measurements at diagnosis. Such predictions could contribute to the development of computational models for metastasis.
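The Gompertz comparison above can be illustrated with a minimal fit of the Gompertz curve V(t) = V0·exp((α/β)(1 − e^(−βt))) to a single synthetic growth curve. All parameter values here are illustrative, and this per-curve least-squares fit is only a sketch; the paper itself uses a nonlinear mixed-effects (population) model.

```python
import numpy as np
from scipy.optimize import curve_fit

# log of the Gompertz curve with V0 = 1; fitting in log space tames the
# large dynamic range of tumor volumes. alpha is the initial proliferation
# rate, beta the exponential decay rate of that rate (illustrative names).
def log_gompertz(t, alpha, beta):
    return (alpha / beta) * (1.0 - np.exp(-beta * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 15)                       # synthetic timepoints
log_data = log_gompertz(t, 0.8, 0.07) + rng.normal(scale=0.05, size=t.size)

# recover (alpha, beta) by nonlinear least squares on the log-volumes
(alpha_hat, beta_hat), _ = curve_fit(log_gompertz, t, log_data, p0=[0.5, 0.05])
```

With low noise the two parameters are recovered close to their true values; the strong α–β correlation reported in the paper emerges only when many such curves are fitted jointly.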


2020 ◽  
Author(s):  
Colin D. Kinz-Thompson ◽  
Korak Kumar Ray ◽  
Ruben L. Gonzalez

Abstract
Biophysics experiments performed at single-molecule resolution contain exceptional insight into the structural details and dynamic behavior of biological systems. However, extracting this information from the corresponding experimental data unequivocally requires applying a biophysical model. Here, we discuss how to use probability theory to apply these models to single-molecule data. Many current single-molecule data analysis methods apply parts of probability theory, sometimes unknowingly, and thus miss out on the full set of benefits provided by this self-consistent framework. The full application of probability theory involves a process called Bayesian inference that fully accounts for the uncertainties inherent to single-molecule experiments. Additionally, using Bayesian inference provides a scientifically rigorous manner to incorporate information from multiple experiments into a single analysis and to find the best biophysical model for an experiment without the risk of overfitting the data. These benefits make the Bayesian approach ideal for analyzing any type of single-molecule experiment.
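A minimal example of the kind of Bayesian update the abstract describes: inferring the probability p that a molecule occupies one of two states from a discretized trace, with the uncertainty carried by the full posterior rather than a point estimate. The frame counts below are synthetic, not from any real experiment.

```python
import numpy as np
from scipy import stats

# Synthetic two-state trace: 200 frames, 130 assigned to state 1.
n_frames, n_state1 = 200, 130
a0, b0 = 1.0, 1.0                       # uniform Beta(1, 1) prior on p

# Conjugate Beta-Binomial update gives the posterior over p in closed form.
posterior = stats.beta(a0 + n_state1, b0 + n_frames - n_state1)

p_mode = (a0 + n_state1 - 1) / (a0 + b0 + n_frames - 2)   # posterior mode
ci_lo, ci_hi = posterior.ppf([0.025, 0.975])              # 95% credible interval
```

The credible interval is the "full accounting of uncertainty" the abstract refers to: with only 200 frames, p is known to roughly ±0.07, and pooling further traces simply adds counts to the same posterior.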


2018 ◽  
Vol 15 ◽  
pp. 41-45
Author(s):  
Eliška Janouchová ◽  
Anna Kučerová

Modelling of heterogeneous materials based on randomness of model input parameters involves parameter identification, which is focused on solving a stochastic inversion problem. It can be formulated as a search for a probabilistic description of model parameters providing the distribution of the model response corresponding to the distribution of the observed data.

In this contribution, a numerical model of kinematic and isotropic hardening for a viscoplastic material is calibrated on the basis of experimental data from a cyclic loading test at a high temperature. Five material model parameters are identified in a probabilistic setting. The core of the identification method is the Bayesian inference of uncertain statistical moments of a prescribed joint lognormal distribution of the parameters. At first, synthetic experimental data are used to verify the identification procedure; then the real experimental data are processed to calibrate the material model of a copper alloy.
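The core idea of inferring an uncertain statistical moment can be sketched in one dimension: a grid posterior over the log-mean μ of a single lognormal parameter, given synthetic observations. The paper identifies the moments of a joint lognormal over five parameters; everything below is an illustrative toy with made-up values.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.lognormal(mean=1.0, sigma=0.2, size=20)   # synthetic parameter samples
log_obs = np.log(obs)

mu_grid = np.linspace(0.0, 2.0, 401)                # candidate log-means
sigma = 0.2                                         # spread assumed known here

# Gaussian log-likelihood of the log-data for each candidate mu, flat prior.
log_like = np.array([
    np.sum(-0.5 * ((log_obs - mu) / sigma) ** 2) for mu in mu_grid
])
post = np.exp(log_like - log_like.max())
post /= post.sum()                                  # normalized grid posterior
mu_hat = mu_grid[np.argmax(post)]                   # posterior mode
```

With a flat prior the posterior mode coincides with the sample mean of the log-data; the full verification-then-calibration loop of the paper replaces the toy likelihood with the hardening model's response.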


2015 ◽  
Author(s):  
D. Sam Schwarzkopf

The problems with classical frequentist statistics have recently received much attention, yet the enthusiasm of researchers to adopt alternatives like Bayesian inference remains modest. Here I present the bootstrapped evidence test, an objective resampling procedure that takes the precision with which both the experimental and null hypothesis can be estimated into account. Simulations and reanalysis of actual experimental data demonstrate that this test minimizes false positives while maintaining sensitivity. It is equally applicable to a wide range of situations and thus minimizes problems arising from analytical flexibility. Critically, it does not dichotomize the results based on an arbitrary significance level but instead quantifies how well the data support either the alternative or the null hypothesis. It is thus particularly useful in situations with considerable uncertainty about the expected effect size. Because it is non-parametric, it is also robust to severe violations of assumptions made by classical statistics.
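The resampling machinery underlying such a procedure can be sketched with a plain bootstrap of a sample mean. Note this is only the generic bootstrap, not Schwarzkopf's bootstrapped evidence test, which additionally weighs the evidence for the null against the alternative; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.4, scale=1.0, size=50)    # synthetic observations

# Resample the data with replacement many times and recompute the statistic,
# building an empirical distribution of the sample mean.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])  # 95% bootstrap interval
```

Because the interval is built from the data alone, the approach inherits the non-parametric robustness the abstract highlights: no distributional assumptions beyond exchangeability of the observations.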


2021 ◽  
Author(s):  
Fabian Jirasek ◽  
Robert Bamler ◽  
Stephan Mandt

We present a generic way to hybridize physical and data-driven methods for predicting physicochemical properties. The approach ‘distills’ the physical method's predictions into a prior model and combines it with sparse experimental data using Bayesian inference. We apply the new approach to predict activity coefficients at infinite dilution and obtain significant improvements compared to the physical and data-driven baselines and established ensemble methods from the machine learning literature.
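The prior-plus-sparse-data combination can be illustrated with the simplest conjugate case: treat the physical method's prediction as a Gaussian prior and fuse it with a handful of measurements. The variances and values below are illustrative stand-ins, not numbers from the paper, and the paper's prior is a learned model rather than a single Gaussian.

```python
import numpy as np

prior_mean, prior_var = 2.0, 0.5 ** 2   # 'distilled' physical prediction
obs = np.array([2.6, 2.4])              # sparse experimental data
obs_var = 0.3 ** 2                      # assumed measurement noise

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_prec = 1.0 / prior_var + obs.size / obs_var
post_mean = (prior_mean / prior_var + obs.sum() / obs_var) / post_prec
post_var = 1.0 / post_prec
```

The posterior mean lands between the physical prediction and the data mean, weighted by their precisions, and the posterior variance is smaller than either source alone, which is the mechanism behind the reported improvement over both baselines.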


2021 ◽  
Vol 12 ◽  
Author(s):  
Alexander Schreiber ◽  
Edgar Onea

In successful communication, the literal meaning of linguistic utterances is often enriched by pragmatic inferences. Part of the pragmatic reasoning underlying such inferences has been successfully modeled as Bayesian goal recognition in the Rational Speech Act (RSA) framework. In this paper, we try to model the interpretation of question-answer sequences with narrow focus in the answer in the RSA framework, thereby exploring the effects of domain size and prior probabilities on interpretation. If narrow-focus exhaustivity inferences were actually based on Bayesian inference involving prior probabilities of states, RSA models would predict a dependency of exhaustivity on these factors. We present experimental data suggesting that interlocutors do not act according to the predictions of the RSA model and that exhaustivity is in fact approximately constant across different domain sizes and priors. The results constitute a conceptual challenge for Bayesian accounts of the underlying pragmatic inferences.
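The prior-sensitivity the RSA framework predicts starts at its base layer, the literal listener, which scores states by literal truth weighted by the prior: L0(state | utterance) ∝ ⟦utterance⟧(state) · P(state). The states, utterances, and priors below are a made-up toy, not the paper's experimental items.

```python
import numpy as np

states = ["A", "B", "AB"]                    # who came: A only, B only, or both
prior = np.array([0.4, 0.4, 0.2])            # illustrative prior over states
meaning = {
    "Ann came": np.array([1, 0, 1]),         # literally true in A and AB
    "Bob came": np.array([0, 1, 1]),         # literally true in B and AB
}

def literal_listener(utt):
    # truth-conditional filter times prior, renormalized to a distribution
    scores = meaning[utt] * prior
    return scores / scores.sum()

l0 = literal_listener("Ann came")            # P(state | "Ann came")
```

Because the prior enters multiplicatively, shifting it shifts L0 (and hence exhaustivity strength) in the model; the paper's finding is that human interpreters do not show this predicted shift.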

