estimate uncertainty
Recently Published Documents

TOTAL DOCUMENTS: 43 (five years: 9)
H-INDEX: 9 (five years: 1)

2021, Vol 12 (2), pp. 377-398
Author(s): Van Dan Dang, Hoang Chung Nguyen

The paper explores the impact of uncertainty on bank liquidity hoarding, with particular attention to how the impact varies with bank-level heterogeneity. We use the cross-sectional dispersion of shocks to key bank variables to estimate uncertainty in the banking sector, and we include all banking items to construct a comprehensive measure of bank liquidity hoarding. Using a sample of Vietnamese banks during 2007–2019, we document that banks tend to increase total liquidity hoarding in response to higher uncertainty; this pattern also holds for on- and off-balance-sheet liquidity hoarding. Further analysis of bank-level heterogeneity indicates that the impact of banking uncertainty on liquidity hoarding is significantly stronger for weaker banks, i.e., banks that are smaller, more poorly capitalized, and riskier. Testing the "search for yield" hypothesis as an explanation for the link between uncertainty and bank liquidity hoarding, we find no supporting evidence. Our findings remain robust across multiple robustness tests.
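The dispersion-based uncertainty proxy described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact estimator: the variable names, the shock values, and the choice of sample standard deviation as the dispersion measure are all assumptions.

```python
import numpy as np

def banking_uncertainty(shocks_by_period):
    """Cross-sectional dispersion (sample std dev) of bank-level shocks,
    computed period by period, as a proxy for banking-sector uncertainty.

    shocks_by_period: dict mapping period -> iterable of shocks, one per bank.
    Returns a dict mapping period -> dispersion.
    """
    return {t: float(np.std(np.asarray(s, dtype=float), ddof=1))
            for t, s in shocks_by_period.items()}

# Hypothetical shocks to banks' return on assets in two years
shocks = {2018: [0.01, -0.02, 0.005, 0.03],
          2019: [0.05, -0.06, 0.04, -0.03]}
u = banking_uncertainty(shocks)
# Dispersion is higher in 2019, signalling greater banking uncertainty
```

In a regression setting, this period-level dispersion would then enter as the uncertainty regressor alongside bank-level controls.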


Author(s): Rafael de Rezende, Katharina Egert, Ignacio Marin, Guilherme Thompson

2021, Vol 9 (2), pp. 174
Author(s): Laughlin D. L. Barker, Louis L. Whitcomb

This paper addresses the problem of underwater robotic vehicle navigation relative to moving or stationary contiguous sea ice. A review of previously reported under-ice navigation methods is given, along with motivation for the use of under-ice robotic vehicles with precision navigation capabilities. We then describe our proposed approach, which employs two or more satellite navigation beacons atop the sea ice, along with other precision vehicle- and ship-mounted navigation sensors, to estimate vehicle, ice, and ship states by means of an Extended Kalman Filter. A performance sensitivity analysis for a simulated 7.7 km under-ice survey is reported. The number and location of ice-deployed satellite beacons, the rotational and translational ice velocity, and the separation of ship-based acoustic range sensors are varied, and their effects on estimate error and uncertainty are examined. Results suggest that increasing the number and/or separation of ice-deployed satellite beacons reduces estimate uncertainty, whereas increasing the separation of ship-based acoustic range sensors has little impact. Decreasing ice velocity is also correlated with reduced estimate uncertainty. Our analysis suggests that the proposed method is feasible and can offer scientifically useful navigation accuracy over a range of operating conditions.
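The core of the approach, fusing a range measurement to an ice-deployed beacon into a state estimate, can be illustrated with a single EKF measurement update. This is a deliberately reduced sketch, not the authors' full vehicle/ice/ship filter: the 2-D position-only state, the beacon location, and the noise values are hypothetical.

```python
import numpy as np

def ekf_range_update(x, P, beacon, r_meas, R):
    """One EKF measurement update using a range to a beacon at a known position.

    x: state estimate (2,), P: state covariance (2, 2),
    beacon: beacon position (2,), r_meas: measured range, R: range variance.
    """
    d = x - beacon
    r_pred = np.linalg.norm(d)          # predicted range h(x)
    H = (d / r_pred).reshape(1, 2)      # Jacobian of h at current estimate
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T / S                     # Kalman gain
    x_new = x + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x = np.array([10.0, 0.0])               # prior vehicle position estimate
P = np.eye(2) * 4.0                     # prior covariance
x, P = ekf_range_update(x, P, beacon=np.zeros(2), r_meas=11.0, R=1.0)
# The update pulls the estimate toward the measured range and shrinks
# uncertainty along the beacon line-of-sight direction.
```

Additional beacons and sensors would each contribute their own update of this form, which is why more (and more widely separated) beacons reduce estimate uncertainty.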


2020, Vol 58 (11), pp. 1829-1835
Author(s): Ashley D. Ellis, Alexander R. Gross, Jeffrey R. Budd, W. Greg Miller

Abstract
Background: Clinical laboratories use internal quality control (QC) data to calculate the standard deviation (SD) and coefficient of variation (CV) used to estimate the uncertainty of results and to interpret QC results. We examined the influence of different instruments, QC lots, and reagent lots on the CV calculated from QC data.
Methods: Results for BioRad Multiqual frozen liquid QC samples over a 2-year interval were partitioned by QC lot and reagent lot. The mean and CV were calculated for each partition on each of three Abbott Architect c8000 instruments measuring serum alanine aminotransferase (ALT), creatinine (enzymatic), glucose, and sodium.
Results: CVs differed among partitions and instruments for the two QC levels by 5.8- and 3.3-fold for ALT, 4.7- and 2.1-fold for creatinine, 2.0- and 2.6-fold for glucose, and 2.1- and 2.0-fold for sodium. Pooled CVs for the two QC levels varied among instruments by 1.78- and 1.11-fold for ALT, 1.63- and 1.11-fold for creatinine, 1.08- and 1.06-fold for glucose, and 1.24- and 1.31-fold for sodium.
Conclusions: The CVs from QC data varied substantially among QC and reagent lots and among instruments of identical specification. The CV used to estimate the uncertainty of a measurement result, or used as the basis for interpreting individual QC results, must be derived over a time interval long enough to obtain a pooled CV that represents "typical" performance of a measuring system. An estimate of uncertainty provided to users of laboratory results will itself have uncertainty that can influence medical decisions.
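The CV and pooled-CV calculations discussed above can be sketched as follows. The QC values are hypothetical, and the degrees-of-freedom-weighted pooling shown is one common convention, not necessarily the exact formula used in the study.

```python
import numpy as np

def cv(values):
    """Coefficient of variation (%) of a set of QC results."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def pooled_cv(partitions):
    """Pooled CV (%) across QC/reagent-lot partitions: the square root of the
    degrees-of-freedom-weighted mean of partition variances, divided by the
    grand mean of all results (one common pooling convention)."""
    variances = [np.var(np.asarray(p, dtype=float), ddof=1) for p in partitions]
    dofs = [len(p) - 1 for p in partitions]
    pooled_sd = np.sqrt(np.average(variances, weights=dofs))
    grand_mean = np.mean(np.concatenate([np.asarray(p, float) for p in partitions]))
    return 100.0 * pooled_sd / grand_mean

# Hypothetical ALT QC results (U/L) from two reagent-lot partitions
lot_a = [50.1, 49.8, 50.5, 49.6]
lot_b = [51.0, 50.2, 50.8, 51.4]
cv_a = cv(lot_a)
cv_pooled = pooled_cv([lot_a, lot_b])
```

Pooling over many lots and a long interval is what yields the "typical" CV the conclusions call for; a single-partition CV can differ from it severalfold.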


2020, Vol 142 (11)
Author(s): Sangjune Bae, Chanyoung Park, Nam H. Kim

Abstract
An approach is proposed to quantify the uncertainty in the probability of failure using a Gaussian process (GP) and to estimate the change in uncertainty before actually adding samples to the GP. The approach estimates the coefficient of variation (CV) of the failure probability due to the prediction variance of the GP. The CV is estimated using single-loop Monte Carlo simulation (MCS), which integrates the probabilistic classification function and thereby replaces expensive multi-loop MCS. The methodology ensures a conservative estimate of the CV in order to compensate for sampling uncertainty in MCS. The change in uncertainty is estimated by adding a virtual sample from the current GP and calculating the resulting change in CV, called the expected uncertainty change (EUC). The proposed method can help adaptive sampling schemes decide when to stop adding samples. In numerical examples, the method is used in conjunction with efficient local reliability analysis to calculate the reliability of an analytical function as well as a battery drop-test simulation. It is shown that the EUC converges to the true uncertainty change as the model becomes accurate.
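A minimal sketch of the single-loop MCS estimate of the failure probability CV might look as follows, assuming the GP supplies a predictive mean and standard deviation at each Monte Carlo sample and treating the per-sample classifications as independent (a simplification relative to the paper's conservative estimator; all numbers below are hypothetical).

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def failure_prob_cv(mu, sigma):
    """Single-loop MCS sketch: failure probability and its CV due to GP
    prediction uncertainty.

    mu, sigma: GP predictive mean and std of the limit state g at N Monte
    Carlo input samples; failure is g(x) < 0. Each sample contributes the
    probabilistic classification Phi(-mu/sigma) instead of a hard 0/1 label.
    """
    p = np.array([phi(-m / s) for m, s in zip(mu, sigma)])
    pf = p.mean()                                 # failure probability estimate
    var_pf = np.sum(p * (1.0 - p)) / len(p) ** 2  # assumes independent samples
    return pf, np.sqrt(var_pf) / pf               # (p_f, CV of p_f)

rng = np.random.default_rng(0)
mu = rng.normal(2.0, 1.0, 10_000)     # hypothetical GP means of g(x)
sigma = np.full(10_000, 0.5)          # hypothetical GP prediction std
pf, cv_pf = failure_prob_cv(mu, sigma)
# As the GP becomes accurate (sigma -> 0), p collapses to 0/1 labels and
# the CV contribution from prediction variance shrinks toward zero.
```

The EUC idea then amounts to recomputing this CV after conditioning the GP on a virtual sample drawn from its own predictive distribution.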


2020, Vol 102 (1), pp. 17-33
Author(s): Todd E. Clark, Michael W. McCracken, Elmar Mertens

We estimate uncertainty measures for point forecasts obtained from survey data, pooling information embedded in observed forecast errors for different forecast horizons. To track time-varying uncertainty in the associated forecast errors, we derive a multiple-horizon specification of stochastic volatility. We apply our method to forecasts for various macroeconomic variables from the Survey of Professional Forecasters. Compared to simple variance approaches, our stochastic volatility model improves the accuracy of uncertainty measures for survey forecasts.
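As a point of reference for the "simple variance approaches" the paper improves on, a horizon-by-horizon RMSE of past survey forecast errors gives a crude constant-variance uncertainty band. The error values below are hypothetical.

```python
import numpy as np

def horizon_rmse(errors_by_horizon):
    """Benchmark uncertainty measure: RMSE of historical forecast errors,
    computed separately for each forecast horizon.

    errors_by_horizon: dict mapping horizon -> iterable of past errors.
    Returns a dict mapping horizon -> RMSE (the uncertainty band width).
    """
    return {h: float(np.sqrt(np.mean(np.square(np.asarray(e, dtype=float)))))
            for h, e in errors_by_horizon.items()}

# Hypothetical forecast errors (percentage points) at 1- and 4-quarter horizons
errs = {1: [0.2, -0.1, 0.3, -0.2], 4: [0.8, -1.1, 0.9, -0.7]}
bands = horizon_rmse(errs)
# Longer horizons show larger errors, hence wider uncertainty bands
```

The paper's multiple-horizon stochastic volatility model replaces these constant bands with bands that vary over time while still pooling information across horizons.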


Paleobiology, 2018, Vol 44 (4), pp. 561-574
Author(s): Melanie J. Hopkins, David W. Bapst, Carl Simpson, Rachel C. M. Warnock

Abstract
The two major approaches to studying macroevolution in deep time are the fossil record and reconstructed relationships among extant taxa from molecular data. Results based on one approach sometimes conflict with those based on the other, with inconsistencies often attributed to inherent flaws of one (or the other) data source. Any contradiction between the molecular and fossil records represents a failure of our ability to understand the imperfections of our data, as both are limited reflections of the same evolutionary history. We therefore need to develop conceptual and mathematical models that jointly explain our observations in both records. Fortunately, the different limitations of each record provide an opportunity to test or calibrate the other, and new methodological developments leverage both records simultaneously. However, we must reckon with the distinct relationships between sampling and time in the fossil record and molecular phylogenies. These differences impact our recognition of baselines and the analytical incorporation of age estimate uncertainty.

