Estimating Monthly Precipitation Reconstruction Uncertainty Beginning in 1900

2013 ◽  
Vol 30 (6) ◽  
pp. 1107-1122 ◽  
Author(s):  
Thomas M. Smith ◽  
Samuel S. P. Shen ◽  
Li Ren ◽  
Phillip A. Arkin

Abstract Uncertainty estimates are computed for a statistical reconstruction of global monthly precipitation that was developed in an earlier publication. The reconstruction combined spatial correlations in gauge precipitation with correlations between precipitation and related data beginning in 1900. Several types of errors contribute to the uncertainty, including errors associated with the reconstruction method and input data errors. This reconstruction uses correlated data for the ocean-area first guess, which contributes much of the uncertainty over those regions. Errors associated with the input data include random, sampling, and bias errors. Random and bias data errors are mostly filtered out of the reconstruction analysis and are the smallest components of the total error. The largest errors are associated with sampling and the method, which together dominate the total error. The uncertainty estimates in this study indicate that (i) over oceans the reconstruction is most reliable in the tropics, especially the Pacific, because of the large spatial scales of ENSO; (ii) over the high-latitude oceans multidecadal variations are fairly reliable, but many month-to-month variations are not; and (iii) errors over and near land are much smaller because of local gauge data. The reconstruction indicates that the average precipitation increases early in the twentieth century, followed by several decades of multidecadal variations with little trend until near the end of the century, when precipitation again appears to increase systematically. The uncertainty estimates indicate that the average changes over land are most reliable, while over oceans the average change over the reconstruction period is slightly larger than the uncertainty.
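As a minimal illustration of how error components of this kind are typically combined, the sketch below aggregates independent error terms in quadrature; the abstract does not give the actual formula, so the function, component names, and numbers are purely hypothetical.

```python
# Illustrative sketch, not the authors' method: combining independent error
# components in quadrature to form a total uncertainty estimate.
import numpy as np

def total_uncertainty(random_err, bias_err, sampling_err, method_err):
    """Root-sum-square of error components, assuming they are independent."""
    components = np.array([random_err, bias_err, sampling_err, method_err])
    return np.sqrt(np.sum(components ** 2))

# Hypothetical values in which sampling and method errors dominate the total,
# mirroring the qualitative statement in the abstract.
print(total_uncertainty(random_err=0.1, bias_err=0.1,
                        sampling_err=0.6, method_err=0.5))  # about 0.79
```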

2005 ◽  
Vol 52 (6) ◽  
pp. 167-175 ◽  
Author(s):  
K. Beven

A consideration of model structural error leads to some particularly interesting tensions in the model calibration/conditioning process. In applying models we can usually only assess the total error on some output variable for which we have observations. This total error may arise from input and boundary condition errors, model structural errors, and error on the output observation itself (not only measurement error but also differences in meaning between what is modelled and what is measured). Statistical approaches to model uncertainty generally assume that the errors can be treated as an additive term on the (possibly transformed) model output. This allows all the sources of error to be compensated for, as if the model predictions were correct and the total error could be treated as “measurement error.” Model structural error is not easily evaluated within this framework. An alternative approach that puts more emphasis on model evaluation and rejection is suggested. It is recognised that model success or failure within this framework will depend heavily on an assessment of both input data errors (the “perfect” model will not produce acceptable results if driven with poor input data) and effective observation error (including a consideration of the meaning of observed variables relative to those predicted by the model).
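The contrast between the additive-error view and the rejectionist alternative can be sketched as follows; this is an assumed toy formulation, not Beven's implementation, and the acceptability band stands in for the "effective observation error" discussed above.

```python
# Sketch under stated assumptions: total error as a single additive term on the
# model output, versus rejecting a model run whose predictions fall outside an
# effective observation-error band around each observation.
import numpy as np

def additive_residuals(y_obs, y_model):
    """Total error treated as one additive term on the model output."""
    return y_obs - y_model

def acceptable(y_obs, y_model, lower, upper):
    """Accept the model run only if every prediction lies within the
    observation-error band [y_obs + lower, y_obs + upper]."""
    return bool(np.all((y_model >= y_obs + lower) & (y_model <= y_obs + upper)))

y_obs = np.array([1.0, 2.1, 2.9])
y_model = np.array([1.1, 2.0, 3.2])
print(additive_residuals(y_obs, y_model))        # [-0.1  0.1 -0.3]
print(acceptable(y_obs, y_model, -0.25, 0.25))   # False: third prediction is outside the band
```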


2001 ◽  
Vol 6 (2) ◽  
pp. 15-28 ◽  
Author(s):  
K. Dučinskas ◽  
J. Šaltytė

The problem of classifying a realisation of a stationary univariate Gaussian random field into one of two populations with different means and different factorised covariance matrices is considered. In this case the optimal classification rule, in the sense of minimum probability of misclassification, is associated with a non-linear (quadratic) discriminant function. The unknown means and covariance matrices of the feature vector components are estimated from spatially correlated training samples using the maximum likelihood approach, assuming the spatial correlations to be known. An explicit formula for the Bayes error rate and the first-order asymptotic expansion of the expected error rate associated with the quadratic plug-in discriminant function are presented. A set of numerical calculations for the spherical spatial correlation function is performed, and two different spatial sampling designs are compared.
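For orientation, a generic plug-in quadratic discriminant rule for two Gaussian populations is sketched below; it is not the authors' spatial formulation (the spatial correlation of the training sample is ignored here), and all parameter values are invented.

```python
# Minimal sketch of a plug-in quadratic discriminant rule for two Gaussian
# populations; estimated means/covariances would normally replace the ones below.
import numpy as np

def quadratic_score(x, mu, cov, prior):
    """Gaussian log-density-based score; classify to the population with the larger score."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff)) + np.log(prior)

def classify(x, params1, params2):
    """params = (mean, covariance, prior); returns population label 1 or 2."""
    return 1 if quadratic_score(x, *params1) >= quadratic_score(x, *params2) else 2

p1 = (np.array([0.0, 0.0]), np.array([[1.0, 0.3], [0.3, 1.0]]), 0.5)
p2 = (np.array([1.5, 1.5]), np.array([[2.0, 0.0], [0.0, 0.5]]), 0.5)
print(classify(np.array([0.2, 0.1]), p1, p2))  # 1
```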


2018 ◽  
Vol 616 ◽  
pp. A2 ◽  
Author(s):  
L. Lindegren ◽  
J. Hernández ◽  
A. Bombrun ◽  
S. Klioner ◽  
U. Bastian ◽  
...  

Context. Gaia Data Release 2 (Gaia DR2) contains results for 1693 million sources in the magnitude range 3 to 21 based on observations collected by the European Space Agency Gaia satellite during the first 22 months of its operational phase. Aims. We describe the input data, models, and processing used for the astrometric content of Gaia DR2, and the validation of these results performed within the astrometry task. Methods. Some 320 billion centroid positions from the pre-processed astrometric CCD observations were used to estimate the five astrometric parameters (positions, parallaxes, and proper motions) for 1332 million sources, and approximate positions at the reference epoch J2015.5 for an additional 361 million mostly faint sources. These data were calculated in two steps. First, the satellite attitude and the astrometric calibration parameters of the CCDs were obtained in an astrometric global iterative solution for 16 million selected sources, using about 1% of the input data. This primary solution was tied to the extragalactic International Celestial Reference System (ICRS) by means of quasars. The resulting attitude and calibration were then used to calculate the astrometric parameters of all the sources. Special validation solutions were used to characterise the random and systematic errors in parallax and proper motion. Results. For the sources with five-parameter astrometric solutions, the median uncertainty in parallax and position at the reference epoch J2015.5 is about 0.04 mas for bright (G < 14 mag) sources, 0.1 mas at G = 17 mag, and 0.7 mas at G = 20 mag. In the proper motion components the corresponding uncertainties are 0.05, 0.2, and 1.2 mas yr−1, respectively. The optical reference frame defined by Gaia DR2 is aligned with ICRS and is non-rotating with respect to the quasars to within 0.15 mas yr−1. From the quasars and validation solutions we estimate that systematics in the parallaxes depending on position, magnitude, and colour are generally below 0.1 mas, but the parallaxes are on the whole too small by about 0.03 mas. Significant spatial correlations of up to 0.04 mas in parallax and 0.07 mas yr−1 in proper motion are seen on small (< 1 deg) and intermediate (20 deg) angular scales. Important statistics and information for the users of the Gaia DR2 astrometry are given in the appendices.


2013 ◽  
Vol 747 ◽  
pp. 571-574 ◽  
Author(s):  
Zulkifli Mohamad Ariff ◽  
T.H. Khang

The possibility of using Cadmould software to simulate the filling behaviour of a natural rubber compound during an injection moulding process was investigated. For the simulation, the required material input data, comprising the rheological and cure kinetics data of the designed rubber compound, were determined. The acquired data proved to be reliable material input data, as they were comparable with related data available in the Cadmould software materials database. Verification of the simulated filling profiles against experimental short-shot specimens showed that the Cadmould Rubber Package was able to predict the realistic filling behaviour of the formulated natural rubber compound inside the mould cavity when the measured material data were utilized. In contrast, using the material database available in the software failed to model the mould filling progression of the intended natural rubber compound.


2020 ◽  
Vol 24 (4) ◽  
pp. 2061-2081 ◽  
Author(s):  
Xudong Zhou ◽  
Jan Polcher ◽  
Tao Yang ◽  
Ching-Sheng Huang

Abstract. Ensemble estimates based on multiple datasets are frequently applied when several datasets are available for the same climatic variable. An uncertainty estimate based on the differences between the ensemble members is usually provided along with the ensemble mean to show to what extent the ensemble members are consistent with each other. However, one fundamental flaw of classic uncertainty estimates is that only the uncertainty in one dimension (either the temporal variability or the spatial heterogeneity) can be considered, whereas the variation along the other dimension is dismissed due to limitations in the algorithms, resulting in an incomplete assessment of the uncertainties. This study introduces a three-dimensional variance partitioning approach and proposes a new uncertainty estimate (Ue) that includes the data uncertainties at both spatial and temporal scales. The new approach avoids pre-averaging in either of the spatiotemporal dimensions and, as a result, the Ue estimate is around 20 % higher than the classic uncertainty metrics. The deviation of Ue from the classic metrics is apparent for regions with strong spatial heterogeneity and where the variations differ significantly between temporal and spatial scales. This shows that the classic metrics underestimate the uncertainty through averaging, which means a loss of information about the variations across spatiotemporal scales. Decomposing the formula for Ue shows that Ue integrates four different variations across the ensemble members, while only two of the components are represented in the classic uncertainty estimates. This decomposition explains the correlation as well as the differences between the newly proposed Ue and the two classic uncertainty metrics. The new approach is implemented and analysed with multiple precipitation products of different types (e.g. gauge-based products, merged products and GCMs) which contain different sources of uncertainties with different magnitudes. Ue of the gauge-based precipitation products is the smallest, while Ue of the other products is generally larger because other uncertainty sources are included and the constraints of the observations are not as strong as in gauge-based products. This new three-dimensional approach is flexible in its structure and particularly suitable for a comprehensive assessment of multiple datasets over large regions within any given period.
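The effect of pre-averaging that motivates Ue can be illustrated with a toy member × time × space array; the formulas below are assumed simplifications for illustration, not the paper's exact definitions of Ue or of the classic metrics.

```python
# Sketch under stated assumptions: a "classic" uncertainty that collapses the
# spatial dimension before taking the spread across ensemble members, versus an
# estimate computed on the full member x time x space array with no pre-averaging.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(5, 120, 50))      # (ensemble member, month, grid cell)

# Classic style: spatial mean first, then member spread, then time average.
spatial_mean = data.mean(axis=2)          # (member, month)
classic = spatial_mean.std(axis=0).mean()

# Without pre-averaging: member spread at every (month, cell), then aggregate.
no_preaveraging = data.std(axis=0).mean()

print(classic, no_preaveraging)  # the pre-averaged metric is much smaller here,
                                 # because spatial averaging removes variance
```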


2016 ◽  
Vol 2016 ◽  
pp. 1-18 ◽  
Author(s):  
Xia Feng ◽  
Paul Houser

In this study, we developed a suite of spatially and temporally scalable Water Cycle Indicators (WCI) to examine the long-term changes in water cycle variability and demonstrated their use over the contiguous US (CONUS) during 1979–2013 using the MERRA reanalysis product. The WCI indicators consist of six water balance variables monitoring the mean conditions and extreme aspects of the changing water cycle. The variables include precipitation (P), evaporation (E), runoff (R), terrestrial water storage (dS/dt), moisture convergence flux (C), and atmospheric moisture content (dW/dt). Means are determined as the daily total value, while extremes include wet and dry extremes, defined as the upper and lower 10th percentiles of the daily distribution. Trends are assessed for annual and seasonal indicators at several different spatial scales. Our results indicate that significant changes have occurred in most of the indicators, and these changes are geographically and seasonally dependent. There are more upward trends than downward trends in all eighteen annual indicators averaged over the CONUS. The spatial correlations between the annual trends in means and extremes are statistically significant across the country and are stronger for P, E, R, and C compared to dS/dt and dW/dt.
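A sketch of how such indicators and trends might be computed is given below; the percentile-based definitions and the least-squares trend are assumptions for illustration, not the exact WCI definitions.

```python
# Sketch under stated assumptions: mean and extreme indicators from a daily
# series, with extremes as the upper and lower 10th percentiles of the daily
# distribution, and a least-squares slope as a simple trend estimate.
import numpy as np

def annual_indicators(daily):
    """daily: 1-D array of daily values for one year and one variable."""
    return {
        "mean": daily.mean(),
        "wet_extreme": np.percentile(daily, 90),   # upper 10th percentile
        "dry_extreme": np.percentile(daily, 10),   # lower 10th percentile
    }

def linear_trend(values):
    """Least-squares slope of an annual indicator series (units per year)."""
    years = np.arange(len(values))
    return np.polyfit(years, values, 1)[0]

rng = np.random.default_rng(1)
wet = [annual_indicators(rng.gamma(2.0, 2.0, 365))["wet_extreme"] for _ in range(35)]
print(linear_trend(np.array(wet)))
```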


2019 ◽  
Author(s):  
Vladimir Afanasyev ◽  
Alexey Voloboy

This paper describes the use of a per-voxel RANSAC approach in ART (algebraic reconstruction technique) tomography. The method works as an addition to any ART variant and does not depend on its internal details. First, histograms of the voxel-map corrections are accumulated in each voxel during a usual pass of ART. They are then used to refine the absorption map. This improves the resulting voxel absorption map, reducing ghost artefacts caused by input data errors and inconsistency. The method was demonstrated with an optical tomography algorithm, since that modality has particular difficulties with its input data. The proposed algorithm was implemented to run on the GPU.
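One possible reading of the per-voxel histogram idea is sketched below; the binning, the use of the histogram mode as a robust correction, and all sizes are assumptions, not the authors' algorithm.

```python
# Sketch under stated assumptions: during an ART pass, every correction proposed
# for a voxel is recorded in a per-voxel histogram; the absorption map is then
# refined with a robust statistic (here the histogram mode) rather than the
# plain accumulated update.
import numpy as np

N_VOXELS, N_BINS = 1000, 64
edges = np.linspace(-0.1, 0.1, N_BINS + 1)
hist = np.zeros((N_VOXELS, N_BINS), dtype=np.int64)

def record_corrections(voxel_ids, corrections):
    """Accumulate proposed per-voxel corrections into the histograms."""
    bins = np.clip(np.digitize(corrections, edges) - 1, 0, N_BINS - 1)
    np.add.at(hist, (voxel_ids, bins), 1)

def refine(absorption):
    """Apply the most frequent (modal) correction per voxel to the absorption map."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return absorption + centers[np.argmax(hist, axis=1)]

rng = np.random.default_rng(2)
record_corrections(rng.integers(0, N_VOXELS, 10_000), rng.normal(0.01, 0.02, 10_000))
print(refine(np.zeros(N_VOXELS))[:5])
```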


2021 ◽  
pp. 46-55
Author(s):  
А.В. Никитин ◽  
А.В. Михайлов ◽  
А.С. Петров ◽  
С.Э. Попов

A technique, resistant to input data errors, is presented for determining the depth and opening of a two-dimensional surface defect in a ferromagnet. The defects and the magnetic transducers are located on opposite sides of the metal plate. The nonlinear properties of the ferromagnet are taken into account. The components of the magnetic field in the metal were reconstructed from the components of the magnetic field measured above the defect-free surface of the metal. Numerical experiments were used to establish the limits of applicability of the method, and the results of the technique have been verified experimentally.


2019 ◽  
Vol 93 (12) ◽  
pp. 2543-2552 ◽  
Author(s):  
Andreas Kvas ◽  
Torsten Mayer-Gürr

Abstract In this article, we present a computationally efficient method to incorporate background model uncertainties into the gravity field recovery process. While the geophysical models typically used during the processing of GRACE data, such as the atmosphere and ocean dealiasing product, have been greatly improved over recent years, they are still a limiting factor of the overall solution quality. Our idea is to use information about the uncertainty of these models to find a more appropriate stochastic model for the GRACE observations within the least squares adjustment, thus potentially improving the gravity field estimates. We used the ESA Earth System Model to derive uncertainty estimates for the atmosphere and ocean dealiasing product in the form of an autoregressive model. To assess our approach, we computed time series of monthly GRACE solutions from L1B data over the time span 2005 to 2010, with and without the derived error model. Intercomparisons between these time series show that noise is reduced on all spatial scales, with up to 25% RMS reduction for Gaussian filter radii from 250 to 300 km, while the monthly signal is preserved. We further observe a better agreement between formal and empirical errors, which supports our conclusion that the uncertainty information used does improve the stochastic description of the GRACE observables.
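The general idea of feeding an autoregressive error model into a least squares adjustment can be sketched as AR(1) whitening of the observation equations; this toy example is an assumption for illustration, not the authors' GRACE processing chain.

```python
# Sketch under stated assumptions: whitening correlated observation noise with an
# AR(1) decorrelation filter before an ordinary least-squares fit, which is
# equivalent to generalized least squares with the corresponding covariance.
import numpy as np

def whiten_ar1(y, A, phi):
    """Apply the filter v_t = e_t - phi * e_{t-1} to the data and design matrix."""
    yw, Aw = y.copy(), A.copy()
    yw[1:] -= phi * y[:-1]
    Aw[1:] -= phi * A[:-1]
    return yw, Aw

rng = np.random.default_rng(3)
n = 500
A = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
x_true = np.array([2.0, -1.0])
noise = np.zeros(n)                 # AR(1) noise as a stand-in for background-model errors
for t in range(1, n):
    noise[t] = 0.8 * noise[t - 1] + rng.normal(scale=0.1)
y = A @ x_true + noise

yw, Aw = whiten_ar1(y, A, phi=0.8)
x_hat, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
print(x_hat)  # close to [2, -1]
```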


Author(s):  
N. Polydorides ◽  
S.-A. Tsekenis ◽  
H. McCann ◽  
V.-D. A. Prat ◽  
P. Wright

We present a computationally efficient reconstruction method for the limited-data chemical species tomography problem that incorporates projection of the unknown gas concentration function onto a low-dimensional subspace, and regularization using prior information obtained from a simple flow model. In this context, the contribution of this work is the analysis of the projection-induced data errors and the calculation of bounds for the overall image error that incorporate the impact of projection and regularization errors as well as measurement noise. As an extension of this methodology, we present a variant algorithm that preserves the positivity of the concentration image.
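A possible form of such a projected, regularized reconstruction is sketched below; the basis, the Tikhonov-style penalty towards a flow-model prior, and the clipping used as a crude positivity step are all assumptions, not the authors' formulation.

```python
# Sketch under stated assumptions: limited-data tomography solved by projecting
# the unknown concentration onto a low-dimensional basis and regularizing towards
# a prior image; clipping stands in for the positivity-preserving variant.
import numpy as np

def projected_regularized_solve(K, y, B, x_prior, lam):
    """Solve min_c ||K B c - y||^2 + lam * ||B c - x_prior||^2, then clip to >= 0."""
    KB = K @ B
    lhs = KB.T @ KB + lam * (B.T @ B)
    rhs = KB.T @ y + lam * (B.T @ x_prior)
    c = np.linalg.solve(lhs, rhs)
    return np.clip(B @ c, 0.0, None)

rng = np.random.default_rng(4)
n_pix, n_rays, n_basis = 100, 20, 8          # limited data: far fewer rays than pixels
K = rng.random((n_rays, n_pix))              # hypothetical path-length (projection) matrix
B = rng.random((n_pix, n_basis))             # hypothetical low-dimensional image basis
x_true = rng.random(n_pix)
y = K @ x_true + rng.normal(scale=0.01, size=n_rays)
x_rec = projected_regularized_solve(K, y, B, x_prior=np.full(n_pix, x_true.mean()), lam=0.1)
print(x_rec[:5])
```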

