Objective Inference for Climate Parameters: Bayesian, Transformation-of-Variables, and Profile Likelihood Approaches

2014 ◽  
Vol 27 (19) ◽  
pp. 7270-7284 ◽  
Author(s):  
Nicholas Lewis

Insight is provided into the use of objective-Bayesian methods for estimating climate sensitivity by considering their relationship to transformations of variables in the context of a simple case considered in a previous study, and some misunderstandings about Bayesian inference are discussed. A simple model in which climate sensitivity (S) and effective ocean heat diffusivity (Kυ) are the only parameters varied is used, with twentieth-century warming attributable to greenhouse gases (AW) and effective ocean heat capacity (HC) being the only data-based observables. Probability density functions (PDFs) for AW and HC are readily derived that represent valid independent objective-Bayesian posterior PDFs, provided the error distribution assumptions involved in their construction are justified. Using them, a standard transformation of variables provides an objective joint posterior PDF for S and Kυ; integrating out Kυ gives a marginal PDF for S. Close parametric approximations to the PDFs for AW and HC are obtained, enabling derivation of likelihood functions and related noninformative priors that give rise to the objective posterior PDFs that were computed initially. Bayes’s theorem is applied to the derived AW and HC likelihood functions, demonstrating the effect of differing prior distributions on PDFs for S. Use of the noninformative Jeffreys prior produces an identical PDF to that derived using the transformation-of-variables approach. It is shown that similar inference for S to that based on these two alternative objective-Bayesian approaches is obtained using a profile likelihood method on the derived joint likelihood function for AW and HC.
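The transformation-of-variables step the abstract describes can be sketched numerically. The forward model, its derivative, and the error distribution below are illustrative assumptions, not the paper's two-parameter climate model: a posterior PDF for an observable is mapped through a monotonic function to a PDF for the parameter via the Jacobian factor.

```python
import numpy as np

# Hypothetical monotonic forward model: an "attributable warming"-like
# observable as a function of a sensitivity-like parameter S.
def forward(S):
    return 2.0 * np.log(S)          # AW = f(S)

def forward_deriv(S):
    return 2.0 / S                  # f'(S), the Jacobian factor

# Assumed posterior PDF for the observable (normal, illustrative values).
def pdf_aw(aw, mu=1.5, sigma=0.3):
    return np.exp(-0.5 * ((aw - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Transformation of variables: p_S(s) = p_AW(f(s)) * |f'(s)|
s = np.linspace(0.5, 10.0, 20000)
pdf_s = pdf_aw(forward(s)) * np.abs(forward_deriv(s))

# The transformed density should still integrate to ~1 over the support.
mass = np.sum(0.5 * (pdf_s[1:] + pdf_s[:-1]) * np.diff(s))
```

The same Jacobian factor is what a Jeffreys prior supplies automatically in the Bayesian route, which is why the two approaches coincide in the paper's simple case.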

2018 ◽  
Vol 10 (10) ◽  
pp. 3671
Author(s):  
Jongseon Jeon ◽  
Suneung Ahn

This work proposed a reliability demonstration test (RDT) process that can be employed to determine whether a finite population should be accepted or rejected. Bayesian and non-Bayesian approaches were compared in the proposed RDT process, as were lot and sequential sampling. One-shot devices, such as bullets, fire extinguishers, and grenades, were used as test targets, with their functioning state expressible as a binary model. A hypergeometric distribution was adopted as the likelihood function for a finite population consisting of binary items. It was demonstrated that a beta-binomial distribution is the conjugate prior of the hypergeometric likelihood function. According to the Bayesian approach, the posterior beta-binomial distribution is used to decide on the acceptance or rejection of the population in the RDT. The proposed method could be used to select item providers in a supply chain who guarantee a predetermined reliability target and confidence level. Numerical examples show that a Bayesian approach with sequential sampling has the advantage of requiring only a small sample size to determine the acceptance of a finite population.
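The conjugate update at the heart of this RDT can be sketched as follows. The population size, sample size, prior, and acceptance criterion are illustrative assumptions, not values from the study: after observing d failures in n draws without replacement from a population of N items whose defect count has a beta-binomial(N, a, b) prior, the number of defectives among the N − n untested items is beta-binomial(N − n, a + d, b + n − d).

```python
from scipy.stats import betabinom

# Hypothetical RDT setting (all numbers are illustrative):
N = 100       # finite population size
n = 20        # items tested (destructively, for one-shot devices)
d = 0         # failures observed in the sample
a, b = 1, 1   # uniform beta-binomial prior on the number of defectives

# Conjugacy: posterior for defectives among the N - n untested items.
posterior = betabinom(N - n, a + d, b + n - d)

# Accept the lot if, with 90% credibility, at most 10 of the untested
# items are defective (target and confidence level are assumptions).
prob_ok = posterior.cdf(10)
accept = prob_ok >= 0.90
```

In a sequential version, the same update would be reapplied after each tested item, stopping as soon as the acceptance (or rejection) criterion is met.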


2016 ◽  
Vol 25 (6) ◽  
pp. 2488-2505 ◽  
Author(s):  
Il Do Ha ◽  
Nicholas J Christian ◽  
Jong-Hyeon Jeong ◽  
Junwoo Park ◽  
Youngjo Lee

Competing risks data often exist within a center in multi-center randomized clinical trials where the treatment effects or baseline risks may vary among centers. In this paper, we propose a subdistribution hazard regression model with multivariate frailty to investigate heterogeneity in treatment effects among centers from multi-center clinical trials. For inference, we develop a hierarchical likelihood (or h-likelihood) method, which obviates the need for an intractable integration over the frailty terms. We show that the profile likelihood function derived from the h-likelihood is identical to the partial likelihood, and hence it can be extended to the weighted partial likelihood for the subdistribution hazard frailty models. The proposed method is illustrated with a dataset from a multi-center clinical trial on breast cancer as well as with a simulation study. We also demonstrate how to present heterogeneity in treatment effects among centers by using a confidence interval for the frailty for each individual center and how to perform a statistical test for such heterogeneity using a restricted h-likelihood.
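The profiling idea used above, maximizing out nuisance parameters rather than integrating over them, can be illustrated on a much simpler model; the normal sample below is a generic stand-in, not the paper's frailty model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=200)

# Profile log-likelihood for the mean mu of a normal sample: for each
# fixed mu, the nuisance parameter sigma^2 is maximized out analytically.
def profile_loglik(mu):
    sigma2_hat = np.mean((y - mu) ** 2)     # MLE of sigma^2 given mu
    n = len(y)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

# Maximizing the profile recovers the overall MLE, the sample mean.
res = minimize_scalar(lambda m: -profile_loglik(m),
                      bounds=(0.0, 4.0), method="bounded")
mu_hat = res.x
```

The h-likelihood construction in the paper plays an analogous role: the frailty terms are handled through the hierarchical likelihood itself, so no intractable integral over them is needed.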


2001 ◽  
Vol 17 (1) ◽  
pp. 114-122 ◽  
Author(s):  
Steven H. Sheingold

Decision making in health care has become increasingly reliant on information technology, evidence-based processes, and performance measurement. It is therefore a time at which it is of critical importance to make data and analyses more relevant to decision makers. Those who support Bayesian approaches contend that their analyses provide more relevant information for decision making than do classical or “frequentist” methods, and that a paradigm shift to the former is long overdue. While formal Bayesian analyses may eventually play an important role in decision making, there are several obstacles to overcome if these methods are to gain acceptance in an environment dominated by frequentist approaches. Supporters of Bayesian statistics must find more accommodating approaches to making their case, especially in finding ways to make these methods more transparent and accessible. Moreover, they must better understand the decision-making environment they hope to influence. This paper discusses these issues and provides some suggestions for overcoming some of these barriers to greater acceptance.


2013 ◽  
Vol 56 (12) ◽  
pp. 3151-3160 ◽  
Author(s):  
Fan Lu ◽  
Hao Wang ◽  
DengHua Yan ◽  
DongDong Zhang ◽  
WeiHua Xiao

2007 ◽  
Vol 50 (1) ◽  
pp. 195-202 ◽  
Author(s):  
José A. Montoya ◽  
Eloísa Díaz-Francés ◽  
David A. Sprott

2018 ◽  
Vol 40 ◽  
pp. 06029
Author(s):  
Luiz Henrique Maldonado ◽  
Daniel Firmo Kazay ◽  
Elio Emanuel Romero Lopez

The estimation of the uncertainty associated with stage-discharge relations is a challenge for hydrologists. Bayesian inference with a likelihood estimator is a promising approach. The choice of the likelihood function has an important impact on the capability of the model to represent the residuals. This paper aims to evaluate two likelihood functions, used with the DREAM algorithm, for estimating specific non-unique stage-discharge rating curves: the normal likelihood function and the Laplace likelihood function. The result of BaRatin is also discussed. The MCMC of DREAM and the BaRatin algorithm have been compared, and their results seem consistent for the studied case. The Laplace likelihood function produced results for the residuals as good as those of the normal likelihood function. Other gauging stations should be evaluated to reach more general conclusions.
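A minimal sketch of the likelihood comparison, assuming a power-law rating curve Q = a(h − h0)^b and synthetic data; DREAM and BaRatin themselves are not reproduced here, and plain maximum likelihood stands in for the Bayesian sampler:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stage-discharge data (parameter values are illustrative).
rng = np.random.default_rng(1)
h = np.linspace(1.0, 5.0, 60)
q_obs = 3.0 * (h - 0.5) ** 1.6 + rng.normal(0.0, 1.0, size=h.size)

def residuals(theta):
    a, h0, b = theta
    return q_obs - a * (h - h0) ** b

# Negative log-likelihoods for the two residual models compared in the
# paper: Gaussian (least squares) and Laplace (least absolute deviations),
# each with its scale parameter profiled out.
def nll_normal(theta):
    if theta[1] >= h.min():          # keep h - h0 positive
        return 1e12
    r = residuals(theta)
    s = np.std(r) + 1e-9
    return h.size * np.log(s) + 0.5 * np.sum((r / s) ** 2)

def nll_laplace(theta):
    if theta[1] >= h.min():
        return 1e12
    r = residuals(theta)
    bs = np.mean(np.abs(r)) + 1e-9
    return h.size * np.log(2.0 * bs) + np.sum(np.abs(r)) / bs

x0 = np.array([2.0, 0.3, 1.5])
fit_n = minimize(nll_normal, x0, method="Nelder-Mead")
fit_l = minimize(nll_laplace, x0, method="Nelder-Mead")
```

In the Bayesian setting, either negative log-likelihood would be dropped into the MCMC sampler unchanged; the Laplace form gives heavier-tailed residuals and so is less sensitive to outlying gaugings.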


2019 ◽  
pp. 1-9
Author(s):  
Ciara Nugent ◽  
Wentian Guo ◽  
Peter Müller ◽  
Yuan Ji

We review Bayesian and Bayesian decision theoretic approaches to subgroup analysis and applications to subgroup-based adaptive clinical trial designs. Subgroup analysis refers to inference about subpopulations with significantly distinct treatment effects. The discussion mainly focuses on inference for a benefiting subpopulation, that is, a characterization of a group of patients who benefit from the treatment under consideration more than the overall population. We introduce alternative approaches and demonstrate them with a small simulation study. Then, we turn to clinical trial designs. When the selection of the interesting subpopulation is carried out as the trial proceeds, the design becomes an adaptive clinical trial design, using subgroup analysis to inform the randomization and assignment of treatments to patients. We briefly review some related designs. There are a variety of approaches to Bayesian subgroup analysis. Practitioners should consider the type of subpopulations in which they are interested and choose their methods accordingly. We demonstrate how subgroup analysis can be carried out by different Bayesian methods and discuss how they identify slightly different subpopulations.
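A toy version of subgroup inference can be sketched with conjugate beta posteriors; the trial counts below are invented for illustration and are not from the review's simulation study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-arm binary-outcome trial split by a biomarker:
# (responders, patients) per cell, all counts illustrative.
cells = {
    "pos_trt": (18, 30), "pos_ctl": (9, 30),
    "neg_trt": (12, 30), "neg_ctl": (11, 30),
}

# Independent Beta(1, 1) priors give Beta(1 + r, 1 + n - r) posteriors;
# Monte Carlo draws make subgroup comparisons straightforward.
draws = {k: rng.beta(1 + r, 1 + n - r, size=20000)
         for k, (r, n) in cells.items()}

# Posterior probability that the biomarker-positive subgroup benefits,
# and that it benefits more than the biomarker-negative subgroup.
eff_pos = draws["pos_trt"] - draws["pos_ctl"]
eff_neg = draws["neg_trt"] - draws["neg_ctl"]
p_benefit_pos = np.mean(eff_pos > 0)
p_pos_gt_neg = np.mean(eff_pos > eff_neg)
```

In an adaptive design, quantities like `p_pos_gt_neg`, recomputed at interim analyses, are what would inform enrichment of randomization toward the benefiting subpopulation.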


2011 ◽  
Vol 19 (2) ◽  
pp. 188-204 ◽  
Author(s):  
Jong Hee Park

In this paper, I introduce changepoint models for binary and ordered time series data based on Chib's hidden Markov model. The extension of the changepoint model to a binary probit model is straightforward in a Bayesian setting. However, detecting parameter breaks from ordered regression models is difficult because ordered time series data often have clustering along the break points. To address this issue, I propose an estimation method that uses the linear regression likelihood function for the sampling of hidden states of the ordinal probit changepoint model. The marginal likelihood method is used to detect the number of hidden regimes. I evaluate the performance of the introduced methods using simulated data and apply the ordinal probit changepoint model to the study of Eichengreen, Watson, and Grossman on violations of the “rules of the game” of the gold standard by the Bank of England during the interwar period.
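The marginal-likelihood comparison used to select the number of regimes can be illustrated on a simple Bernoulli changepoint problem, a stand-in for the ordinal probit model, with the rate integrated out analytically under a Beta(1, 1) prior:

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(3)
# Synthetic binary series with one break: the success probability shifts
# from 0.2 to 0.7 at t = 60 (all settings are illustrative).
y = np.concatenate([rng.binomial(1, 0.2, 60), rng.binomial(1, 0.7, 60)])

def log_marglik_segment(seg):
    # Beta(1, 1) prior on the Bernoulli rate, integrated out in closed form.
    s, n = seg.sum(), seg.size
    return betaln(1 + s, 1 + n - s) - betaln(1, 1)

# Marginal likelihood under no change vs. one change (uniform prior over
# break locations) -- the comparison that picks the number of regimes.
lm0 = log_marglik_segment(y)
per_break = [log_marglik_segment(y[:t]) + log_marglik_segment(y[t:])
             for t in range(1, y.size)]
lm1 = np.logaddexp.reduce(per_break) - np.log(len(per_break))
log_bayes_factor = lm1 - lm0   # > 0 favours the changepoint model
```

The hidden Markov formulation in the paper generalizes this: instead of enumerating break locations, the regime sequence is sampled, and the marginal likelihoods of models with different regime counts are compared in the same way.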


2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Fan Yang ◽  
Hu Ren ◽  
Zhili Hu

Maximum likelihood estimation is a widely used approach to parameter estimation. However, conventional algorithms make the estimation procedure for the three-parameter Weibull distribution difficult. Therefore, this paper proposes an evolutionary strategy to explore good solutions based on the maximum likelihood method. The maximization of the likelihood function is converted into an optimization problem, and the evolutionary algorithm is employed to obtain the optimal parameters of the likelihood function. Examples are presented to demonstrate the proposed method. The results show that the proposed method is suitable for parameter estimation of the three-parameter Weibull distribution.
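A sketch of this approach, using `scipy.optimize.differential_evolution` as the evolutionary algorithm; the paper's specific evolutionary strategy and data are not reproduced, and the true parameter values below are illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
# Synthetic sample from a three-parameter Weibull: shape k, scale lam,
# location gamma (values are illustrative assumptions).
k_true, lam_true, gamma_true = 2.0, 3.0, 1.0
x = gamma_true + lam_true * rng.weibull(k_true, size=500)

def neg_loglik(theta):
    k, lam, gamma = theta
    z = x - gamma
    if k <= 0 or lam <= 0 or np.any(z <= 0):
        return 1e12          # infeasible: location must sit below the data
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z / lam)
                   - (z / lam) ** k)

# Evolutionary global search over the likelihood surface; the location
# bound keeps gamma strictly below the smallest observation.
bounds = [(0.1, 10.0), (0.1, 10.0), (0.0, np.min(x) - 1e-6)]
result = differential_evolution(neg_loglik, bounds, seed=4, tol=1e-8)
k_hat, lam_hat, gamma_hat = result.x
```

The population-based search sidesteps the main difficulty the abstract alludes to: the likelihood surface is awkward near the location boundary, where gradient-based routines often fail.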


2020 ◽  
Vol 29 (10) ◽  
pp. 2919-2931
Author(s):  
Xinyi Ge ◽  
Yingwei Peng ◽  
Dongsheng Tu

Identification of a subset of patients who may be sensitive to a specific treatment is an important problem in clinical trials. In this paper, we consider the case where the treatment effect is measured by longitudinal outcomes, such as quality of life scores assessed over the duration of a clinical trial, and the subset is determined by a continuous baseline covariate, such as age or the expression level of a biomarker. A threshold linear mixed model is introduced, and a smoothing maximum likelihood method is proposed to estimate the parameters in the model. The Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is employed to maximize the proposed smoothed likelihood function. The proposed procedure is evaluated through simulation studies and an application to the analysis of data from a randomized clinical trial on patients with advanced colorectal cancer.
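The smoothing idea can be sketched on a stripped-down threshold regression without random effects; the sigmoid bandwidth and data-generating values below are illustrative assumptions, not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
# Toy data: a treatment benefit b2 only for subjects with biomarker w
# above an unknown threshold c.
n = 400
w = rng.uniform(0.0, 1.0, n)
trt = rng.integers(0, 2, n)
c_true = 0.6
y = 1.0 + 2.0 * trt * (w > c_true) + rng.normal(0.0, 0.5, n)

def smoothed_nll(theta, bw=0.05):
    b0, b2, c = theta
    # Smoothing: the step 1{w > c} is replaced by a sigmoid so the
    # objective becomes differentiable and BFGS can be applied.
    step = 1.0 / (1.0 + np.exp(-(w - c) / bw))
    resid = y - (b0 + b2 * trt * step)
    return 0.5 * np.sum(resid ** 2)

fit = minimize(smoothed_nll, x0=np.array([0.0, 1.0, 0.5]), method="BFGS")
b0_hat, b2_hat, c_hat = fit.x
```

The point of the smoothing is visible in `smoothed_nll`: with the hard indicator, the likelihood is a step function of c and has no usable gradient; the sigmoid restores differentiability at the cost of a small, bandwidth-controlled bias.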

