Sampling Uncertainty
Recently Published Documents

TOTAL DOCUMENTS: 102 (five years: 21)
H-INDEX: 17 (five years: 3)

2021 ◽  
Author(s):  
Anthony Hammond

Abstract The UK standard for estimating flood frequencies is outlined in the Flood Estimation Handbook (FEH) and its associated updates. Estimates inevitably come with uncertainty arising from sampling error as well as model and measurement error. Using resampling approaches adapted to the FEH methods, this paper quantifies sampling uncertainty for single-site, pooled (ungauged), and enhanced single-site (gauged pooling) estimates, and across catchment types. The study builds on previous progress towards easily applicable quantification of FEH-based uncertainty (Kjeldsen 2015, 2021; Dixon 2017). Whereas those studies provided simple analytical expressions for quantifying uncertainty in single-site and ungauged design flow estimates, this study provides an easy-to-use method for quantifying uncertainty in enhanced single-site estimates.
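The resampling idea behind this kind of sampling-uncertainty estimate can be illustrated with a minimal bootstrap sketch. This is not the FEH procedure itself: the flow record, the method-of-moments Gumbel fit, and all numbers below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 40-year annual-maximum flow record (m^3/s), standing in
# for a gauged series at a single site.
amax = rng.gumbel(loc=100.0, scale=30.0, size=40)

def q100_gumbel(sample):
    """Method-of-moments Gumbel fit, then the 100-year flood quantile."""
    scale = np.sqrt(6.0) * np.std(sample, ddof=1) / np.pi
    loc = np.mean(sample) - 0.5772 * scale
    # Gumbel quantile for annual exceedance probability 1/100.
    return loc - scale * np.log(-np.log(1.0 - 1.0 / 100.0))

# Bootstrap: resample the record with replacement and refit each time.
boot = np.array([q100_gumbel(rng.choice(amax, size=amax.size, replace=True))
                 for _ in range(2000)])

estimate = q100_gumbel(amax)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Q100 estimate: {estimate:.1f} m^3/s, 95% interval: [{lo:.1f}, {hi:.1f}]")
```

The spread of the bootstrap distribution is a direct, if crude, measure of sampling uncertainty; the paper's contribution is adapting such resampling to the pooled and enhanced single-site FEH estimators.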


Author(s):  
Ruya Bulut ◽  
Perihan Yolci Omeroglu ◽  
Busra Acoglu ◽  
Elif Koc Alibasoglu

2021 ◽  
Author(s):  
Antje Weisheimer ◽  
Damien Decremer ◽  
David MacLeod ◽  
Chris O'Reilly ◽  
Tim Stockdale ◽  
...  

Predictions of the winter NAO and its small signal-to-noise ratio have been a matter of much discussion recently. Here we look at the problem from the perspective of 110-year-long historical hindcasts over the period 1901–2010 performed with ECMWF's coupled model. Seasonal forecast skill for the NAO can undergo pronounced multidecadal variations: while skill drops in the middle of the century, the performance of the reforecasts recovers in the early twentieth century, suggesting that the mid-century drop in skill is not due to a lack of good observational data. We hypothesize instead that these changes in model predictability are linked to intrinsic changes in the coupled climate system.

The confidence of these predictions, and thus the signal-to-noise behaviour, also depends strongly on the specific hindcast period. Correlation-based measures such as the Ratio of Predictable Components are shown to be highly sensitive to the strength of the predictable signal, implying that disentangling physical deficiencies in the models from the effects of sampling uncertainty is difficult. These findings demonstrate that relatively short hindcasts are not sufficiently representative of longer-term behaviour and can lead to skill estimates that may not prove robust in the future.

See also: Weisheimer, A., D. Decremer, D. MacLeod, C. O'Reilly, T. Stockdale, S. Johnson and T. N. Palmer (2019). How confident are predictability estimates of the winter North Atlantic Oscillation? Q. J. R. Meteorol. Soc., 145, 140–159, doi:10.1002/qj.3446.
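A minimal sketch of the Ratio of Predictable Components (RPC) on a fully synthetic ensemble hindcast (simulated data, not ECMWF output) shows how the measure is computed and why it is sensitive to the sub-period used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hindcast: a "true" predictable signal, observations with noise,
# and an ensemble whose members share the signal plus independent noise.
n_years, n_members = 110, 25
signal = rng.normal(size=n_years)
obs = signal + rng.normal(scale=1.5, size=n_years)
ens = signal[:, None] + rng.normal(scale=1.5, size=(n_years, n_members))

ens_mean = ens.mean(axis=1)
r = np.corrcoef(obs, ens_mean)[0, 1]          # skill of the ensemble mean

# Predictable fraction of model variance: ensemble-mean variance over
# total member variance; RPC compares skill against it.
sig_frac = ens_mean.var(ddof=1) / ens.var(ddof=1)
rpc = r / np.sqrt(sig_frac)
print(f"full period: r = {r:.2f}, RPC = {rpc:.2f}")

# The same measures on 55-year halves illustrate sampling sensitivity.
for name, sl in [("first half", slice(0, 55)), ("second half", slice(55, None))]:
    r_sub = np.corrcoef(obs[sl], ens_mean[sl])[0, 1]
    print(f"{name}: r = {r_sub:.2f}")
```

Because the correlation in each half is estimated from only 55 years, the sub-period values scatter noticeably around the full-period skill, echoing the abstract's point that short hindcasts can give unrepresentative skill and RPC estimates.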


2021 ◽  
Vol 2020 (010r1) ◽  
pp. 1-62
Author(s):  
Edward P. Herbst ◽  
Benjamin K. Johannsen

Local projections (LPs) are a popular tool in macroeconomic research. We show that LPs are often used with very small samples in the time dimension. Consequently, LP point estimates can be severely biased. We derive simple expressions for this bias and propose a way to bias-correct LPs. Small-sample bias can also lead autocorrelation-robust standard errors to dramatically understate sampling uncertainty. We argue that they should be avoided in LPs like the ones we study. Using identified monetary policy shocks, we demonstrate that the bias in point estimates can be economically meaningful and that the bias in standard errors can affect inference.
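A bare-bones local projection is just a sequence of horizon-by-horizon OLS regressions of the outcome on a shock. The toy AR(1) data-generating process below is hypothetical and does not reproduce the paper's bias correction; it only shows the estimator whose small-sample behaviour the paper studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a short sample, as is common in LP applications.
T, rho = 80, 0.7
shock = rng.normal(size=T)            # observed, serially uncorrelated shock
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + shock[t]  # AR(1) outcome driven by the shock

def lp_coef(y, x, h):
    """Local projection at horizon h: OLS of y_{t+h} on x_t plus a constant."""
    yy, xx = y[h:], x[: len(x) - h]
    X = np.column_stack([np.ones_like(xx), xx])
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return beta[1]

horizons = range(9)
irf = [lp_coef(y, shock, h) for h in horizons]   # true impulse response: rho**h
for h, b in zip(horizons, irf):
    print(f"h={h}: LP estimate {b:+.2f}, true {rho**h:.2f}")
```

With T = 80 and effectively T - h usable observations per horizon, the estimates scatter visibly around the true impulse response, which is the small-sample setting in which the paper's bias expressions matter.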


2020 ◽  
Vol 142 (11) ◽  
Author(s):  
Sangjune Bae ◽  
Chanyoung Park ◽  
Nam H. Kim

Abstract An approach is proposed to quantify the uncertainty in the probability of failure using a Gaussian process (GP) and to estimate the change in that uncertainty before actually adding samples to the GP. The approach estimates the coefficient of variation (CV) of the failure probability due to the prediction variance of the GP. The CV is estimated using single-loop Monte Carlo simulation (MCS), which integrates the probabilistic classification function while replacing expensive multi-loop MCS. The methodology ensures a conservative estimate of the CV, in order to compensate for sampling uncertainty in the MCS. The uncertainty change is estimated by adding a virtual sample from the current GP and calculating the change in the CV, which is called the expected uncertainty change (EUC). The proposed method can help adaptive sampling schemes decide when to stop before adding a sample. In numerical examples, the proposed method is used in conjunction with efficient local reliability analysis to calculate the reliability of an analytical function as well as of a battery drop-test simulation. It is shown that the EUC converges to the true uncertainty change as the model becomes accurate.
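The probabilistic classification function at the core of the single-loop MCS can be sketched with analytic stand-ins for the GP posterior mean and standard deviation. The limit state, the constant predictive variance, and the CV shown (the MCS sampling CV of the estimate) are all illustrative; the paper's EUC procedure additionally tracks how the CV responds to virtual GP samples.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Stand-ins for a trained GP surrogate of a limit-state function g(x),
# with failure defined as g(x) < 0. Both functions are hypothetical.
def gp_mean(x):
    return 2.0 - x                       # posterior mean: failure when x > 2

def gp_std(x):
    return 0.3 * np.ones_like(x)         # constant predictive std, for illustration

# Single loop of MCS input samples; GP uncertainty enters through the
# probabilistic classification function Phi(-m(x)/s(x)).
x = rng.normal(loc=0.0, scale=1.0, size=100_000)
p_point = (gp_mean(x) < 0).mean()                       # plug-in failure probability
soft = norm.cdf(-gp_mean(x) / gp_std(x))                # soft classifier per sample
p_soft = soft.mean()                                    # GP-aware failure probability

# CV of the MCS estimate of p_soft (std error of the mean over its value).
cv = soft.std(ddof=1) / (p_soft * np.sqrt(x.size))
print(f"plug-in p_f = {p_point:.4f}, GP-aware p_f = {p_soft:.4f}, MCS CV = {cv:.4f}")
```

Replacing the hard indicator with the soft classifier is what lets a single set of MCS samples carry both the input randomness and the GP's prediction uncertainty, avoiding a nested (multi-loop) simulation.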


2020 ◽  
Author(s):  
Simon Klau ◽  
Felix D. Schönbrodt ◽  
Chirag Patel ◽  
John Ioannidis ◽  
Anne-Laure Boulesteix ◽  
...  

Researchers have great flexibility in the analysis of observational data. Combined with selective reporting and pressure to publish, this flexibility can have devastating consequences for the validity of research findings. We extend the recently proposed vibration of effects approach into a framework comparing three main sources of uncertainty that lead to instability in observational associations, namely data pre-processing, model, and sampling uncertainty. We analyze their behavior over varying sample sizes for two associations in personality psychology. While all types of vibration decrease with increasing sample size, data pre-processing and model vibration remain non-negligible, even for a sample of over 80,000 participants. The increasing availability of large data sets that were not initially recorded for research purposes can make data pre-processing and model choices very influential. We therefore recommend the framework as a tool for the transparent reporting of the stability of research findings.
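The vibration-of-effects idea, re-estimating the same association under different pre-processing and model choices, can be sketched on simulated data. All variables, the confounder structure, and the analysis choices below are hypothetical; the point is only the spread of estimates across specifications.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observational data: exposure x, outcome y, confounder z.
n = 5000
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
y = 0.3 * x + 0.4 * z + rng.normal(size=n)

def slope_on_x(yy, X):
    """OLS; return the coefficient on x (second column of X)."""
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return beta[1]

keep = np.abs(x) < 3.0       # one pre-processing choice: trim extreme exposures
ones = np.ones(n)

# A small "multiverse" of analysis choices for the same association.
specs = {
    "crude": slope_on_x(y, np.column_stack([ones, x])),
    "adjusted for z": slope_on_x(y, np.column_stack([ones, x, z])),
    "crude, trimmed": slope_on_x(y[keep], np.column_stack([ones[keep], x[keep]])),
    "adjusted, trimmed": slope_on_x(y[keep], np.column_stack([ones[keep], x[keep], z[keep]])),
}
for label, b in specs.items():
    print(f"{label:>18}: beta_x = {b:+.3f}")
print(f"vibration (max - min): {max(specs.values()) - min(specs.values()):.3f}")
```

The gap between the crude and adjusted estimates here is model vibration induced by the confounder; repeating the exercise on bootstrap resamples of varying size would add the sampling-uncertainty axis the paper compares it against.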

