Bayesian Solutions for Handling Uncertainty in Survival Extrapolation

2016 ◽  
Vol 37 (4) ◽  
pp. 367-376 ◽  
Author(s):  
Miguel A. Negrín ◽  
Julian Nam ◽  
Andrew H. Briggs

Objective. Survival extrapolation using a single, best-fit model ignores 2 sources of model uncertainty: uncertainty in the true underlying distribution and uncertainty about the stability of the model parameters over time. Bayesian model averaging (BMA) has been used to account for the former, but it can also account for the latter. We investigated BMA in a published comparison of the Charnley and Spectron hip prostheses, using the original 8-year follow-up registry data. Methods. A wide variety of alternative distributions were fitted. Two additional distributions were used to address uncertainty about parameter stability: optimistic and skeptical. The optimistic (skeptical) model was the distribution with the highest (lowest) estimated survival probabilities, reestimated using, as prior information, the most optimistic (skeptical) parameter estimates from intermediate follow-up periods. Distributions were then averaged assuming the same posterior probabilities for the optimistic, skeptical, and noninformative models. Cost-effectiveness was compared using both the original 8-year and extended 16-year follow-up data. Results. We found that all models yielded similar revision-free years during the observed period. In contrast, there was variability over the decision time horizon. Over the observed period, we detected considerable uncertainty in the shape parameter for Spectron. After BMA, Spectron was cost-effective at a threshold of £20,000 with 93% probability, whereas under the single best-fit model the probability was 100%; with the extended 16-year follow-up data, it was 0%. Conclusions. This case study casts doubt on the ability of the single best-fit model selected by information criteria to adequately capture model uncertainty. Under this scenario, BMA weighted by posterior probabilities better addressed model uncertainty. However, there is still value in regularly updating health economic models, even where decision uncertainty is low.
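As an illustration of the averaging step, the following minimal Python sketch weights candidate parametric survival curves by approximate posterior model probabilities. The Weibull parameters and BIC values are hypothetical placeholders rather than the registry estimates, and the BIC-based weighting shown is one common approximation; the paper itself assigns the optimistic, skeptical, and noninformative models equal posterior probability.

```python
import numpy as np

# Hypothetical fitted models: each entry is (name, survival function, BIC).
# The Weibull parameters and BIC values are placeholders, not the
# registry estimates from the paper.
def weibull_sf(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return np.exp(-((t / scale) ** shape))

candidates = [
    ("optimistic",     lambda t: weibull_sf(t, 1.1, 40.0), 2101.3),
    ("skeptical",      lambda t: weibull_sf(t, 1.4, 22.0), 2103.8),
    ("noninformative", lambda t: weibull_sf(t, 1.2, 30.0), 2102.0),
]

# One common choice of posterior model probabilities: BIC weights,
# w_k proportional to exp(-delta_BIC_k / 2), normalized to sum to one.
# (The paper instead assigns the three models equal posterior probability.)
bics = np.array([bic for _, _, bic in candidates])
weights = np.exp(-(bics - bics.min()) / 2.0)
weights /= weights.sum()

def averaged_survival(t):
    """Model-averaged survival curve over the decision horizon."""
    return sum(w * sf(t) for w, (_, sf, _) in zip(weights, candidates))

horizon = np.linspace(0.0, 16.0, 65)   # years, matching the 16-year follow-up
print(dict(zip((name for name, _, _ in candidates), weights.round(3))))
print(averaged_survival(horizon)[:3].round(3))
```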

2021 ◽  
Author(s):  
Sansiddh Jain ◽  
Avtansh Tiwari ◽  
Nayana Bannur ◽  
Ayush Deva ◽  
Siddhant Shingi ◽  
...  

Forecasting infection case counts and estimating accurate epidemiological parameters are critical components of managing the response to a pandemic. This paper describes a modular, extensible framework for a COVID-19 forecasting system, primarily deployed in Mumbai and Jharkhand, India. We employ a variant of the SEIR compartmental model motivated by the nature of the available data and operational constraints. We estimate best-fit parameters using Sequential Model-Based Optimization (SMBO) and describe the use of a novel, fast, and approximate Bayesian model averaging method (ABMA) for parameter uncertainty estimation that compares well with a more rigorous Markov Chain Monte Carlo (MCMC) approach in practice. We address on-the-ground deployment challenges such as spikes in the reported input data using a novel weighted smoothing method. We describe extensive empirical analyses to evaluate the accuracy of our method on ground truth as well as against other state-of-the-art approaches. Finally, we outline deployment lessons and describe how inferred model parameters were used by government partners to interpret the state of the epidemic and how model forecasts were used to estimate staffing and planning needs essential for addressing the COVID-19 hospital burden.
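The ABMA algorithm itself is only summarized above; as a generic sketch of the underlying idea, the following code simulates a discrete-time SEIR model, weights a few candidate parameter sets (standing in for the top trials of an SMBO search) by a Gaussian likelihood on log counts, and forecasts with the weighted mixture. All parameter values and the "observed" series are synthetic stand-ins, not the paper's deployment data.

```python
import numpy as np

rng = np.random.default_rng(0)

def seir_step(state, beta, sigma, gamma, n):
    """One day of a discrete-time SEIR model; state = (S, E, I, R)."""
    s, e, i, r = state
    new_exposed = beta * s * i / n
    new_infected = sigma * e
    new_recovered = gamma * i
    return (s - new_exposed,
            e + new_exposed - new_infected,
            i + new_infected - new_recovered,
            r + new_recovered)

def simulate(params, days, n=1e7, i0=100.0):
    """Daily active-infection counts for one (beta, sigma, gamma) set."""
    beta, sigma, gamma = params
    state = (n - i0, 0.0, i0, 0.0)
    active = []
    for _ in range(days):
        state = seir_step(state, beta, sigma, gamma, n)
        active.append(state[2])
    return np.array(active)

# Hypothetical candidate parameter sets, standing in for the top trials of
# an SMBO search; the "observed" series is synthetic, not reported data.
candidates = [(0.30, 0.20, 0.10), (0.28, 0.25, 0.12), (0.33, 0.18, 0.09)]
observed = simulate((0.30, 0.21, 0.10), 60) * rng.lognormal(0.0, 0.05, 60)

# Approximate model averaging: weight each candidate by a Gaussian
# log-likelihood on log counts, then forecast with the weighted mixture.
loglik = np.array([-np.sum((np.log(simulate(p, 60)) - np.log(observed)) ** 2)
                   for p in candidates])
w = np.exp(loglik - loglik.max())
w /= w.sum()
forecast = sum(wi * simulate(p, 90) for wi, p in zip(w, candidates))
print(w.round(3), float(forecast[-1]))
```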


Author(s):  
Ujjal Debnath

In this paper, we have considered the flat Friedmann–Robertson–Walker (FRW) model of the universe and reviewed the modified Chaplygin gas as the fluid source. Associated with the scalar field model, we have determined the Hubble parameter as a generating function in terms of the scalar field. Instead of a hyperbolic function, we have taken the Jacobi elliptic function and the Abel function in the generating function and obtained the modified Chaplygin–Jacobi gas (MCJG) and modified Chaplygin–Abel gas (MCAG) equations of state, respectively. Next, we have assumed that the universe is filled with dark matter, radiation, and dark energy, where the dark energy candidates are taken to be MCJG and MCAG. We have constrained the model parameters by recent observational data analysis. Using the χ² minimum test (maximum likelihood estimation), we have determined the best-fit values of the model parameters by OHD+CMB+BAO+SNIa joint data analysis. To examine the viability of the MCJG and MCAG models, we have determined the deviations of the information criteria, ΔAIC, ΔBIC, and ΔDIC. The evolutions of cosmological and cosmographical parameters (equation of state, deceleration, jerk, snap, lerk, statefinder, Om diagnostic) have been studied for our best-fit values of the model parameters. To check the classical stability of the models, we have examined whether the square speed of sound v_s^2 lies in the interval [0, 1] during the expansion of the universe.
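The χ² minimum test amounts to minimizing the weighted sum of squared residuals between the model prediction and the observations. A minimal sketch follows, assuming a standard flat-FRW Hubble parameterization with a constant dark-energy equation of state and synthetic H(z) points in place of the OHD+CMB+BAO+SNIa compilation; AIC and BIC are then read off from the minimized χ², and the Δ values quoted in such papers are differences of these criteria across competing models.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic H(z) points with uncertainties, standing in for a real
# observational Hubble data compilation (units km/s/Mpc).
z_obs = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.75])
h_obs = np.array([69.0, 81.0, 88.0, 117.0, 168.0, 202.0])
sigma = np.array([12.0, 10.0, 11.0, 23.0, 17.0, 40.0])

def hubble(z, h0, om, w):
    """Flat FRW with constant dark-energy equation of state w:
    H(z) = H0 * sqrt(Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w)))."""
    return h0 * np.sqrt(om * (1 + z) ** 3 + (1 - om) * (1 + z) ** (3 * (1 + w)))

def chi2(theta):
    h0, om, w = theta
    return np.sum(((h_obs - hubble(z_obs, h0, om, w)) / sigma) ** 2)

best = minimize(chi2, x0=[70.0, 0.3, -1.0], method="Nelder-Mead")
k, n = 3, z_obs.size
aic = best.fun + 2 * k             # AIC = chi2_min + 2k
bic = best.fun + k * np.log(n)     # BIC = chi2_min + k*ln(n)
# Delta-AIC / Delta-BIC are differences of these values across models.
print(best.x.round(3), round(float(aic), 1), round(float(bic), 1))
```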


Author(s):  
Ujjal Debnath

In this paper, we have considered the generalized cosmic Chaplygin gas (GCCG) in the background of Brans–Dicke (BD) theory and assumed that the Universe is filled with GCCG, dark matter, and radiation. To fit the model parameters, we have constrained the model using recent observations. Using the χ² minimum test, the best-fit values of the model parameters are determined by OHD+CMB+BAO+SNIa joint data analysis. We have drawn the contour figures for the 1σ, 2σ, and 3σ confidence levels. To examine the viability of the GCCG model in BD theory, we have also determined ΔAIC and ΔBIC from the information criteria (AIC and BIC). Graphically, we have analyzed the behavior of the equation of state parameter and the deceleration parameter for our best-fit values of the model parameters. Also, we have studied the square speed of sound v_s^2, which lies in the interval [0, 1] during the expansion of the Universe. So the considered model is classically stable for the best-fit values of the model parameters obtained from the data analysis.
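The quoted confidence contours follow from thresholding the χ² surface above its minimum. The sketch below illustrates this on a toy two-parameter paraboloid, using the standard two-parameter thresholds Δχ² = 2.30, 6.18, and 11.83 for 1σ, 2σ, and 3σ; the surface is illustrative, not the GCCG likelihood.

```python
import numpy as np

# Toy chi-square surface in two model parameters; a paraboloid stands in
# for the real likelihood surface.
def chi2(a, b):
    return (a - 0.3) ** 2 / 0.004 + (b + 1.0) ** 2 / 0.01

a_grid = np.linspace(0.1, 0.5, 201)
b_grid = np.linspace(-1.5, -0.5, 201)
A, B = np.meshgrid(a_grid, b_grid)
surface = chi2(A, B)

# For two jointly estimated parameters, the confidence regions are
# chi2 <= chi2_min + delta with delta = 2.30 (1σ), 6.18 (2σ), 11.83 (3σ).
chi2_min = surface.min()
for label, delta in (("1σ", 2.30), ("2σ", 6.18), ("3σ", 11.83)):
    inside = surface <= chi2_min + delta
    print(f"{label}: fraction of grid inside contour = {inside.mean():.3f}")
```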


Author(s):  
Manfred Kühleitner ◽  
Norbert Brunner ◽  
Katharina Renner-Martin

Using a classical example of technology diffusion, the mechanization of agriculture in Spain since 1951, we considered the forecasting intervals from near-optimal Bertalanffy-Pütter (BP) models. We used BP models because they considerably reduced the best fit (sum of squared errors) previously reported in the literature, and we considered near-optimal models (those whose sum of squared errors is close to the optimum) because they allowed us to quantify model uncertainty. This approach supplements traditional sensitivity analysis (variation of model parameters): for the present models and data, even slight changes in the best-fit parameters resulted in very poorly fitting model curves.
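The near-optimal-model idea can be sketched as follows: fit the Bertalanffy-Pütter class dm/dt = p*m^a - q*m^b over a grid of exponent pairs (a, b) and keep every model whose sum of squared errors lies within a small factor of the optimum. The data, exponent grid, and 5% tolerance below are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 12.0)   # observation times (synthetic, not the Spanish data)
y_obs = 100.0 / (1.0 + np.exp(-(t_obs - 6.0))) + rng.normal(0.0, 1.5, t_obs.size)

def bp_curve(t, p, q, m0, a, b):
    """Integrate dm/dt = p*m^a - q*m^b from m(t0) = m0."""
    rhs = lambda _, m: (p * np.clip(m, 1e-9, None) ** a
                        - q * np.clip(m, 1e-9, None) ** b)
    sol = solve_ivp(rhs, (t[0], t[-1]), [m0], t_eval=t, rtol=1e-6)
    return sol.y[0] if sol.success else np.array([])

def sse(theta, a, b):
    p, q, m0 = theta
    if min(theta) <= 0:
        return 1e12                       # keep the search in the feasible region
    y_fit = bp_curve(t_obs, p, q, m0, a, b)
    if y_fit.size != t_obs.size or not np.all(np.isfinite(y_fit)):
        return 1e12                       # integration failure
    return float(np.sum((y_fit - y_obs) ** 2))

# Each exponent pair (a, b) defines one BP model; fit them all.
results = []
for a in (0.5, 0.67, 0.75):
    for b in (1.0, 1.5, 2.0):
        fit = minimize(sse, x0=[2.0, 0.02, 5.0], args=(a, b),
                       method="Nelder-Mead")
        results.append(((a, b), fit.fun, fit.x))

# Near-optimal models: SSE within 5% of the best; the spread of their
# forecasts over a horizon quantifies model uncertainty.
best_sse = min(r[1] for r in results)
near_optimal = [r for r in results if r[1] <= 1.05 * best_sse]
print([r[0] for r in near_optimal])
```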


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Osman Mamun ◽  
Kirsten T. Winther ◽  
Jacob R. Boes ◽  
Thomas Bligaard

For high-throughput screening of materials for heterogeneous catalysis, scaling relations provide an efficient scheme to estimate the chemisorption energies of hydrogenated species. However, conditioning on a single descriptor ignores the model uncertainty and leads to suboptimal prediction of the chemisorption energy. In this article, we extend the single-descriptor linear scaling relation to multi-descriptor linear regression models to leverage the correlation between the adsorption energies of any pair of adsorbates. With a large dataset, we use the Bayesian Information Criterion (BIC) as the model evidence to select the best linear regression model. Furthermore, Gaussian Process Regression (GPR) based on a meaningful convolution of physical properties of the metal-adsorbate complex can be used to predict the baseline residual of the selected model. This integrated Bayesian model selection and Gaussian process regression, dubbed residual learning, can achieve performance comparable to the standard DFT error (0.1 eV) for most adsorbate systems. For sparse and small datasets, we propose an ad hoc Bayesian Model Averaging (BMA) approach to make a robust prediction. With this Bayesian framework, we significantly reduce the model uncertainty and improve the prediction accuracy. The possibilities of the framework for high-throughput catalytic materials exploration in a realistic setting are illustrated using large and small sets of both dense and sparse simulated datasets generated from a public database of bimetallic alloys available in Catalysis-Hub.org.
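The two-stage construction can be sketched compactly: exhaustively score multi-descriptor linear models by BIC, keep the winner, and regress its residuals with a GP. The descriptor matrix below is random stand-in data rather than Catalysis-Hub energies, and the BIC formula is the usual Gaussian approximation up to an additive constant.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Random stand-in descriptor matrix: 4 hypothetical reference adsorption
# energies per row; the target mimics a sparse linear dependence plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0.0, 0.1, 200)

def bic(y_true, y_pred, n_params):
    """Gaussian BIC up to an additive constant."""
    n = y_true.size
    rss = np.sum((y_true - y_pred) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Stage 1: exhaustive search over descriptor subsets, scored by BIC.
best = None
for k in range(1, X.shape[1] + 1):
    for cols in combinations(range(X.shape[1]), k):
        idx = list(cols)
        model = LinearRegression().fit(X[:, idx], y)
        score = bic(y, model.predict(X[:, idx]), k + 1)  # +1 for intercept
        if best is None or score < best[0]:
            best = (score, idx, model)

# Stage 2: a GP learns whatever structure the linear model left behind.
_, idx, model = best
residual = y - model.predict(X[:, idx])
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, residual)
corrected = model.predict(X[:, idx]) + gpr.predict(X)
rmse = np.sqrt(np.mean((corrected - y) ** 2))
print(idx, round(float(rmse), 4))
```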


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, whose mode is the best-fit parameter set. Parameter stability is investigated by adding new data stepwise to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations of specific model assumptions.
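A minimal Metropolis sampler is enough to illustrate the stepwise-calibration idea: re-estimate the posterior each time a larger portion of the record is included and watch whether the best-fit parameters stabilize. The two-parameter sinusoidal "current" model and its data below are synthetic stand-ins for the alongshore-current case study, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "alongshore current" record: amplitude and frequency are the
# two model parameters we try to recover.
t = np.linspace(0.0, 10.0, 200)
v_obs = 0.6 * np.sin(0.5 * t) + rng.normal(0.0, 0.05, t.size)

def log_post(theta, t_d, v_d):
    """Flat box prior times a Gaussian likelihood with sigma = 0.05 m/s."""
    amp, freq = theta
    if not (0.0 < amp < 5.0 and 0.0 < freq < 5.0):
        return -np.inf
    resid = v_d - amp * np.sin(freq * t_d)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

def metropolis(t_d, v_d, n_iter=5000, step=0.02):
    theta = np.array([1.0, 1.0])
    lp = log_post(theta, t_d, v_d)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, 2)
        lp_prop = log_post(prop, t_d, v_d)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain[n_iter // 2:])           # discard burn-in

# Stepwise adding data: the posterior mean should stabilize as the record grows.
for frac in (0.25, 0.5, 1.0):
    n = int(frac * t.size)
    chain = metropolis(t[:n], v_obs[:n])
    print(f"{n:3d} points: amp = {chain[:, 0].mean():.3f}, "
          f"freq = {chain[:, 1].mean():.3f}")
```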


Author(s):  
Awad Al-Zaben ◽  
Lina M.K. Al-Ebbini ◽  
Badr Qatashah

In many situations, health care professionals need to evaluate the respiration rate (RR) of home patients, and when case loads exceed health care providers' capacity, it is important to follow up cases at home. In this paper, we present a complete system that enables health care providers to follow up with patients with respiratory-related diseases at home. The aim is to evaluate the use of a mobile phone's accelerometer to capture the respiration waveform of different patients. Because measurements are performed by patients themselves at home, not by professional health care personnel, the signals captured by mobile phones are subject to many unknowns; the validity of the signals therefore has to be evaluated before any processing, after which proper signal processing algorithms can prepare the captured waveform for RR computation. Validity checks are applied at different stages using statistical measures and pathophysiological limits. In this paper, a mobile application is developed to capture the accelerometer signals and send the data to a server at the health care facility. The server keeps a database of each patient's signals, with attention to patient privacy and security of information; all validation and signal processing are performed on the server side. A patient's condition can be followed up over a few days, and an alarm system may be implemented on the server side for cases of respiratory deterioration or risk of hospitalization. The risk is determined from features extracted from the received respiration signal, including the RR and the parameters of an autoregressive moving average (ARMA) model of the signal. Results showed that the presented method can be used at a larger scale, enabling health care providers to monitor a large number of patients.
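One plausible server-side pipeline, sketched below under assumptions not spelled out in the abstract, band-passes the accelerometer trace to the physiological breathing band, applies a crude validity check, and counts peaks to obtain the RR. The band limits, the power-ratio threshold, and the synthetic test signal are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50.0                                  # assumed accelerometer rate, Hz

def respiration_rate(acc, fs=FS):
    """Estimate breaths per minute from one accelerometer axis."""
    # Adult breathing mostly falls between ~0.1 and 0.7 Hz
    # (6-42 breaths/min); band-pass to that physiological range.
    b, a = butter(2, [0.1 / (fs / 2), 0.7 / (fs / 2)], btype="band")
    resp = filtfilt(b, a, acc)

    # Crude validity check: the breathing band must hold a non-negligible
    # share of the signal power, otherwise flag the recording as invalid.
    if np.var(resp) < 0.01 * np.var(acc - acc.mean()):
        return None

    peaks, _ = find_peaks(resp, distance=fs / 0.7)   # >= 1 breath period apart
    minutes = acc.size / fs / 60.0
    return peaks.size / minutes

# Synthetic 60 s trace: 0.25 Hz breathing plus sensor noise.
rng = np.random.default_rng(2)
t = np.arange(0.0, 60.0, 1.0 / FS)
acc = 0.02 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0.0, 0.005, t.size)
print(respiration_rate(acc))               # expect roughly 15 breaths/min
```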

