Confidence intervals in optimal fingerprinting

2018 ◽  
Vol 52 (7-8) ◽  
pp. 4111-4126 ◽  
Author(s):  
Timothy DelSole ◽  
Laurie Trenary ◽  
Xiaoqin Yan ◽  
Michael K. Tippett

Integrated Optimal Fingerprinting: Method Description and Illustration

2016 ◽  
Vol 29 (6) ◽  
pp. 1977-1998 ◽  
Author(s):  
Alexis Hannart

Abstract The present paper introduces and illustrates methodological developments intended for so-called optimal fingerprinting methods, which are in frequent use in detection and attribution studies. These methods have conventionally involved three independent steps: a preliminary reduction of the dimension of the data, estimation of the covariance associated with internal climate variability, and, finally, linear regression inference with an associated uncertainty assessment. It is argued that such a compartmentalized treatment presents several issues; an integrated method is thus introduced to address them. The suggested approach is based on a single statistical model that represents both the linear regression and the control runs. The unknown covariance is treated as a nuisance parameter that is eliminated by integration, which allows regularization assumptions to be introduced. Point estimates and confidence intervals then follow from the integrated likelihood. Further, it is shown that preliminary dimension reduction is not required for implementability and that the computational issues associated with using the raw, high-dimensional, spatiotemporal data can be resolved quite easily. Results on simulated data show improved performance compared to existing methods with respect to both estimation error and the accuracy of confidence intervals, while also highlighting the need for further improvement regarding the latter. The method is illustrated on twentieth-century precipitation and surface temperature, suggesting a potentially high informational benefit of using the raw, non-dimension-reduced data in detection and attribution (D&A), provided model error is appropriately built into the inference.
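As a point of reference for the integrated approach described above, the conventional optimal-fingerprinting step it replaces is a generalized-least-squares regression of observations on a forced-response pattern, using a covariance estimated from control runs. The sketch below illustrates that baseline on synthetic data; the response pattern, sample sizes, and shrinkage weight `gamma` are illustrative assumptions, not values or methods from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: n spatial points, forced-response pattern x, true scaling beta = 1.
n = 50
x = rng.normal(size=n)                       # hypothetical forced-response pattern
C_true = 0.5 * np.eye(n)                     # internal-variability covariance (diagonal here)
y = 1.0 * x + rng.multivariate_normal(np.zeros(n), C_true)

# Estimate C from m "control run" samples, with shrinkage toward the identity
# (a simple stand-in for the regularization the integrated method motivates).
m = 100
controls = rng.multivariate_normal(np.zeros(n), C_true, size=m)
C_hat = np.cov(controls, rowvar=False)
gamma = 0.2                                  # shrinkage weight (assumed for illustration)
C_reg = (1 - gamma) * C_hat + gamma * (np.trace(C_hat) / n) * np.eye(n)

# Generalized least squares: beta_hat = (x' C^-1 x)^-1 x' C^-1 y,
# with an approximate standard error from (x' C^-1 x)^-1.
Ci_x = np.linalg.solve(C_reg, x)
beta_hat = (Ci_x @ y) / (Ci_x @ x)
se = np.sqrt(1.0 / (Ci_x @ x))
ci = (beta_hat - 1.96 * se, beta_hat + 1.96 * se)
print(beta_hat, ci)
```

The confidence interval here treats the estimated covariance as known, which is exactly the compartmentalization the abstract argues against; the integrated likelihood instead propagates the covariance uncertainty into the interval.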


1995 ◽  
Vol 50 (12) ◽  
pp. 1102-1103 ◽  
Author(s):  
Robert W. Frick

Marketing ZFP ◽  
2019 ◽  
Vol 41 (4) ◽  
pp. 33-42
Author(s):  
Thomas Otter

Empirical research in marketing is often, at least in part, exploratory. The goal of exploratory research, by definition, extends beyond the empirical calibration of parameters in well-established models and includes the empirical assessment of different model specifications. In this context, researchers often rely on statistical information about the parameters of a given model to learn about likely model structures. An example is the search for the 'true' set of covariates in a regression model based on confidence intervals for the regression coefficients. The purpose of this paper is to illustrate and compare different measures of statistical information about model parameters in the context of a generalized linear model: classical confidence intervals, bootstrapped confidence intervals, and Bayesian posterior credible intervals from a model that adapts its dimensionality as a function of the information in the data. I find that inference from the adaptive Bayesian model dominates inference based on classical and bootstrapped intervals in a given model.


2016 ◽  
Vol 136 (5) ◽  
pp. 484-496 ◽  
Author(s):  
Yusuke Udagawa ◽  
Kazuhiko Ogimoto ◽  
Takashi Oozeki ◽  
Hideaki Ohtake ◽  
Takashi Ikegami ◽  
...  

2015 ◽  
Vol 39 (2) ◽  
pp. 199-202
Author(s):  
Wojciech Batko ◽  
Renata Bal

Abstract The paper addresses the assessment of the uncertainty of measurement results, an essential problem in environmental acoustic investigations. Attention is drawn to the usually omitted problem of verifying the assumptions behind the classical methods of confidence-interval estimation for the measured quantity. In particular, the paper highlights the need to verify the assumption that the set of measured values is normally distributed, which underlies the existing, binding procedures for assessing uncertainty in acoustic measurements. The problem concerns the legal and standard acts binding for acoustic measurements, as recommended in the 'Guide to the Expression of Uncertainty in Measurement' (GUM) (OIML 1993), developed under the aegis of the International Bureau of Weights and Measures (BIPM). The legitimacy of the hypothesis of normal distribution in acoustic measurements is discussed, and its plausibility is tested on environmental acoustic results. The Jarque-Bera test, based on the skewness and kurtosis of the distribution, was used to verify the assumption; this test simultaneously detects deviations from normality caused by both skewness and kurtosis. The experiments analyzed the distributions of the sound levels LD, LE, LN, and LDWN, the basic noise indicators in assessments of environmental acoustic hazards.
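The Jarque-Bera statistic used in the paper combines sample skewness S and kurtosis K as JB = n/6 * (S^2 + (K - 3)^2 / 4), which is approximately chi-squared with 2 degrees of freedom under normality. The sketch below computes it on synthetic level data; the simulated samples stand in for measured noise indicators and are not the paper's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def jarque_bera(sample):
    """Jarque-Bera statistic and p-value from sample skewness S and kurtosis K:
    JB = n/6 * (S^2 + (K - 3)^2 / 4), ~ chi^2 with 2 df under normality."""
    n = sample.size
    z = sample - sample.mean()
    s2 = (z ** 2).mean()
    S = (z ** 3).mean() / s2 ** 1.5
    K = (z ** 4).mean() / s2 ** 2
    jb = n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)
    return jb, stats.chi2.sf(jb, df=2)

# Synthetic "sound level" samples in dB: one normal, one right-skewed.
normal_levels = rng.normal(loc=55.0, scale=3.0, size=500)
skewed_levels = 50.0 + rng.exponential(scale=3.0, size=500)

jb_n, p_n = jarque_bera(normal_levels)
jb_s, p_s = jarque_bera(skewed_levels)
print(jb_n, p_n)
print(jb_s, p_s)
```

A small p-value rejects normality, which is the situation the paper warns about: the standard GUM-style interval procedures then rest on a violated assumption.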

