On the Expected Errors in Calibration of the Beremin Cleavage Model Parameters

Author(s):  
D. W. Beardsmore ◽  
H. Teng ◽  
Michael Martin

We present the detailed results of a series of Monte Carlo simulations of the Gao and Dodds calibration procedure, carried out to determine the size of the errors that might be expected in the Beremin cleavage model parameter estimates for fracture toughness data sets of various sizes. The calibration process was repeated a large number of times using different sample sizes, and the mean values and standard errors of the parameter estimates were determined. Modified boundary layer finite element models were used to represent high- and low-constraint conditions (as in the fracture tests) as well as the SSY condition. The “experimental” Jc values were obtained numerically by random sampling of a Beremin distribution function with known values of the true parameters. A number of cautionary remarks on the application of the calibration method are made.
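The outline of such a study is easy to reproduce. The sketch below is a minimal stand-in, assuming for illustration that the Jc values follow a two-parameter Weibull law with known "true" parameters (the finite element coupling and the Gao-Dodds iteration are omitted, and all values are hypothetical): synthetic data sets of various sizes are drawn, each is re-calibrated by maximum likelihood, and the spread of the estimates gives the expected standard errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical "true" parameters of the sampled toughness distribution
# (illustrative values only, not taken from the paper).
TRUE_SHAPE, TRUE_SCALE = 4.0, 120.0   # Weibull shape and scale for Jc

def calibrate_once(n):
    """Draw n synthetic Jc values and re-estimate the parameters by MLE."""
    jc = TRUE_SCALE * rng.weibull(TRUE_SHAPE, size=n)
    shape, _, scale = stats.weibull_min.fit(jc, floc=0.0)
    return shape, scale

for n in (10, 30, 100):
    estimates = np.array([calibrate_once(n) for _ in range(500)])
    mean, se = estimates.mean(axis=0), estimates.std(axis=0, ddof=1)
    print(f"n={n:4d}  shape: {mean[0]:.2f} +/- {se[0]:.2f}"
          f"  scale: {mean[1]:.1f} +/- {se[1]:.1f}")
```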

2012 ◽  
Vol 20 (4) ◽  
pp. 35-43 ◽  
Author(s):  
Peter Valent ◽  
Ján Szolgay ◽  
Carlo Riverso

ABSTRACT. Most studies that assess the performance of various calibration techniques have to deal with a certain amount of uncertainty in the calibration data. In this study we tested HBV model calibration procedures under hypothetically ideal conditions, assuming no errors in the measured data. This was achieved by creating an artificial time series of flows with the HBV model, using the parameters obtained from calibrating the model against the measured flows. The artificial flows then replaced the original flows in the calibration data, which was used to test how well the calibration procedures could reproduce the known model parameters. The results showed that in one hundred independent calibration runs of the HBV model we did not manage to obtain parameters identical to those used to create the artificial flow data; a certain degree of uncertainty always remained. Although the calibration procedure of the model works properly from a practical point of view, this can be regarded as a demonstration of the equifinality principle, since several parameter sets were obtained that led to equally acceptable or behavioural representations of the observed flows. The study demonstrated how this concept of assessing the uncertainty of hydrological predictions using artificially generated data can be applied in the further development of a model or in the choice of a calibration method.
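A toy version of this synthetic-truth experiment is straightforward to set up. In the sketch below, a two-parameter bucket model stands in for HBV (all names, forms, and values are hypothetical): artificial flows are generated with known parameters, and one hundred calibration runs from random starting points recover slightly different, near-equally well-fitting parameter sets.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
rain = rng.gamma(0.5, 4.0, size=365)          # synthetic daily rainfall

def simulate(params, rain):
    """Toy two-parameter bucket model standing in for HBV."""
    k, beta = params
    s, q = 10.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        s += p
        q[t] = min(k * s ** beta, s)          # nonlinear outflow, capped at storage
        s -= q[t]
    return q

true_params = np.array([0.1, 1.2])
q_art = simulate(true_params, rain)           # "error-free" artificial flows

def objective(params):                        # sum of squared errors vs. artificial flows
    if np.any(params <= 0):
        return np.inf
    return np.sum((simulate(params, rain) - q_art) ** 2)

recovered = np.array([
    minimize(objective, rng.uniform([0.01, 0.5], [0.5, 2.0]),
             method="Nelder-Mead").x
    for _ in range(100)])                     # one hundred independent runs
print("k    range:", recovered[:, 0].min(), recovered[:, 0].max())
print("beta range:", recovered[:, 1].min(), recovered[:, 1].max())
```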


2013 ◽  
Vol 17 (10) ◽  
pp. 4043-4060 ◽  
Author(s):  
D. Herckenrath ◽  
G. Fiandaca ◽  
E. Auken ◽  
P. Bauer-Gottwein

Abstract. Increasingly, ground-based and airborne geophysical data sets are used to inform groundwater models. Recent research focuses on establishing coupling relationships between geophysical and groundwater parameters. To fully exploit such information, this paper presents and compares different hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI), a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is inverted simultaneously with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical relationship and its accuracy. Simulations for a synthetic groundwater model and TDEM data showed improved estimates for groundwater model parameters that were coupled to relatively well-resolved geophysical parameters when employing a high-quality petrophysical relationship. Compared to the SHI, these improvements were insignificant, and the geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, with the SHI performing relatively better. When comparing the SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit, and the complexity of implementing a JHI together with its larger computational burden.
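The difference between the two schemes can be sketched with toy linear forward models and a linear petrophysical link (everything below is an illustrative assumption, not the paper's models): the SHI inverts the geophysical data first and couples the result into the groundwater calibration, whereas the JHI fits both data sets and the petrophysical relationship in one residual vector.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Synthetic truth and toy forward models (illustrative only).
k_true, rho_true = 2.0, 1.5            # log-conductivity, log-resistivity
a, b, sigma_p = 0.5, 0.5, 0.1          # assumed petrophysical law: rho = a + b*k
d_gw  = 3.0 * k_true   + rng.normal(0, 0.2, 20)   # "head" data
d_geo = 2.0 * rho_true + rng.normal(0, 0.1, 20)   # "TDEM" data

def shi():
    """Sequential: invert the geophysics, then couple its result into the gw fit."""
    rho_hat = least_squares(lambda r: 2.0 * r - d_geo, x0=[1.0]).x[0]
    res = least_squares(
        lambda k: np.concatenate([3.0 * k - d_gw,
                                  [(a + b * k[0] - rho_hat) / sigma_p]]),
        x0=[1.0])
    return res.x[0]

def jhi():
    """Joint: fit both data sets and the petrophysical link simultaneously."""
    def resid(x):
        k, rho = x
        return np.concatenate([3.0 * k - d_gw, 2.0 * rho - d_geo,
                               [(a + b * k - rho) / sigma_p]])
    return least_squares(resid, x0=[1.0, 1.0]).x

print("true k:", k_true, " SHI k:", shi(), " JHI (k, rho):", jhi())
```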


2011 ◽  
Vol 64 (9) ◽  
pp. 1926-1934 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
C. Becouze ◽  
B. Barillon

An empirical model for TSS event mean concentrations (EMCs) in stormwater discharges has been derived from the analysis of data sets collected in two experimental catchments (Chassieu, separate system, and Ecully, combined system) in Lyon, France. Preliminary tests showed that the TSS EMC values were linked to the variable X = TP × ADWP (TP: rainfall depth; ADWP: antecedent dry weather period), with two distinct behaviours below and above a threshold value of X, named λ: EMCs increase if X < λ and decrease if X > λ. An empirical equation is proposed for both behaviours. A specific calibration method is used to calibrate λ, while the four other parameters of the model are calibrated by means of the Levenberg-Marquardt algorithm. The calibration results obtained with 8 events at both sites indicate that the model calibration is satisfactory: Nash-Sutcliffe coefficients are all above 0.7. Monte Carlo simulations indicate a low variability of the model parameters at both sites. The model verification with 5 events at Chassieu shows maximum levels of uncertainty of approximately 20%, comparable to the levels of uncertainty observed in the calibration phase.
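The abstract does not give the functional forms of the two branches, so the sketch below assumes simple power laws on either side of λ (four parameters in total, matching the count above) and pairs a grid search over λ with Levenberg-Marquardt fits of the remaining parameters; all names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed branch forms: the abstract only states that EMCs rise below the
# threshold lambda and fall above it, so power laws are used for illustration.
def emc_model(X, lam, c1, d1, c2, d2):
    low = X < lam
    out = np.empty_like(X)
    out[low] = c1 * X[low] ** d1           # increasing branch (d1 > 0)
    out[~low] = c2 * X[~low] ** d2         # decreasing branch (d2 < 0)
    return out

def calibrate(X, emc_obs, lam_grid):
    """Grid-search lambda; fit the other four parameters by Levenberg-Marquardt."""
    best = None
    for lam in lam_grid:
        res = least_squares(lambda p: emc_model(X, lam, *p) - emc_obs,
                            x0=[1.0, 0.5, 100.0, -0.5], method="lm")
        if best is None or res.cost < best[0]:
            best = (res.cost, lam, res.x)
    return best

# Synthetic demonstration with known parameters
rng = np.random.default_rng(3)
X = rng.uniform(1, 50, 40)
emc = emc_model(X, 10.0, 2.0, 0.8, 40.0, -0.5) + rng.normal(0, 1.0, 40)
cost, lam_hat, params_hat = calibrate(X, emc, np.linspace(2, 30, 57))
print("lambda:", lam_hat, "params:", params_hat)
```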


2020 ◽  
Vol 36 (1) ◽  
pp. 89-115 ◽  
Author(s):  
Harvey Goldstein ◽  
Natalie Shlomo

Abstract. The requirement to anonymise data sets that are to be released for secondary analysis should be balanced by the need to allow their analysis to provide efficient and consistent parameter estimates. The proposal in this article is to integrate the processes of anonymisation and data analysis. The first stage adds random noise with known distributional properties to some or all variables in a released (already pseudonymised) data set, in which the values of some identifying and sensitive variables for data subjects of interest are also available to an external ‘attacker’ who wishes to identify those data subjects in order to interrogate their records in the data set. The second stage consists of specifying the model of interest so that parameter estimation accounts for the added noise. Where the characteristics of the noise are made available to the analyst by the data provider, we propose a new method that allows a valid analysis. This is formally a measurement error model, and we describe a Bayesian MCMC algorithm that recovers consistent estimates of the true model parameters. A new method for handling categorical data is presented. The article shows how an appropriate noise distribution can be determined.
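The idea of analysing noise-protected data with a measurement error model can be illustrated without the full Bayesian machinery. The sketch below uses the simplest method-of-moments correction for a linear regression in which the released covariate carries added Gaussian noise of known variance; it is an illustrative stand-in, not the article's MCMC algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta = 5000, 2.0
x = rng.normal(0, 1, n)                       # true sensitive variable
y = beta * x + rng.normal(0, 1, n)            # outcome of interest

tau2 = 0.5                                    # known variance of the added noise
x_rel = x + rng.normal(0, np.sqrt(tau2), n)   # released, noise-protected value

naive = np.cov(x_rel, y)[0, 1] / np.var(x_rel, ddof=1)
# Correct the attenuation using the noise variance published by the data
# provider: Var(x_true) = Var(x_released) - tau2.
corrected = np.cov(x_rel, y)[0, 1] / (np.var(x_rel, ddof=1) - tau2)
print(f"naive slope: {naive:.3f}  corrected: {corrected:.3f}  true: {beta}")
```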


2021 ◽  
Vol 2021 ◽  
pp. 1-12 ◽
Author(s):  
I. Elbatal ◽  
Naif Alotaibi

In this paper, a new flexible generator of continuous lifetime models, referred to as the Topp-Leone Weibull-G (TLWG) family, is developed and studied. Several of its mathematical characteristics are investigated. The hazard rate of the new model can be monotonically increasing, monotonically decreasing, bathtub-shaped, or J-shaped. The Farlie-Gumbel-Morgenstern (FGM) and modified FGM (MFGM) families and the Clayton copula are used to describe and display simple-type copulas. We discuss the estimation of the model parameters by maximum likelihood (ML). Simulations are carried out to show the consistency and efficiency of the parameter estimates, and finally, real data sets are used to demonstrate the flexibility and potential usefulness of the proposed family of distributions, using the TLW-exponential model as an example of the new suggested family.
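As a hedged illustration of the reported consistency checks (the exact TLWG construction also involves a Weibull layer, which is omitted here), the sketch below applies the standard Topp-Leone generator F(x) = [1 − (1 − G(x))²]^α to an exponential baseline G(x) = 1 − exp(−λx), samples by inverse transform, and verifies that ML estimates approach the generating values as the sample grows.

```python
import numpy as np
from scipy.optimize import minimize

# Topp-Leone generator over an exponential baseline:
#   F(x) = (1 - exp(-2*lam*x))**alpha,  x > 0
def sample(alpha, lam, n, rng):
    u = rng.uniform(size=n)
    return -np.log1p(-u ** (1.0 / alpha)) / (2.0 * lam)   # inverse transform

def negloglik(theta, x):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    # log f(x) = log(alpha) + log(2*lam) - 2*lam*x + (alpha-1)*log(1 - e^{-2*lam*x})
    return -np.sum(np.log(alpha) + np.log(2 * lam) - 2 * lam * x
                   + (alpha - 1) * np.log1p(-np.exp(-2 * lam * x)))

rng = np.random.default_rng(5)
alpha0, lam0 = 2.0, 0.7                      # hypothetical generating values
for n in (100, 1000, 10000):
    x = sample(alpha0, lam0, n, rng)
    fit = minimize(negloglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
    print(f"n={n:6d}  alpha_hat={fit.x[0]:.3f}  lam_hat={fit.x[1]:.3f}")
```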


Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. O77-O88 ◽  
Author(s):  
Zhangshuan Hou ◽  
Yoram Rubin ◽  
G. Michael Hoversten ◽  
Don Vasco ◽  
Jinsong Chen

A stochastic joint-inversion approach for estimating reservoir-fluid saturations and porosity is proposed. The approach couples seismic amplitude variation with angle (AVA) and marine controlled-source electromagnetic (CSEM) forward models into a Bayesian framework, which allows for integration of complementary information. To obtain minimally subjective prior probabilities required for the Bayesian approach, the principle of minimum relative entropy (MRE) is employed. Instead of single-value estimates provided by deterministic methods, the approach gives a probability distribution for any unknown parameter of interest, such as reservoir-fluid saturations or porosity at various locations. The distribution means, modes, and confidence intervals can be calculated, providing a more complete understanding of the uncertainty in the parameter estimates. The approach is demonstrated using synthetic and field data sets. Results show that joint inversion using seismic and EM data gives better estimates of reservoir parameters than estimates from either geophysical data set used in isolation. Moreover, a more informative prior leads to much narrower predictive intervals of the target parameters, with mean values of the posterior distributions closer to logged values.
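A toy version of the Bayesian coupling can be written in a few lines. The forward models below are invented stand-ins for the AVA and CSEM physics, and a flat prior over a parameter grid plays the role of the minimally informative MRE prior; none of it reproduces the paper's models.

```python
import numpy as np

# Toy stand-ins: a seismic attribute sensitive mainly to porosity, and an
# Archie-like resistivity sensitive to porosity and water saturation.
def seis(phi, sw):   return 2.0 - 3.0 * phi + 0.2 * sw
def resist(phi, sw): return 1.0 / (phi ** 2 * sw ** 2 + 1e-9)

phi_true, sw_true = 0.25, 0.6
d_seis = seis(phi_true, sw_true) + 0.02       # noisy single "observations"
d_res  = resist(phi_true, sw_true) * 1.05

phi, sw = np.meshgrid(np.linspace(0.05, 0.4, 200),
                      np.linspace(0.1, 1.0, 200), indexing="ij")
# Gaussian likelihoods for both data sets; a flat prior over the grid.
loglik = (-0.5 * ((seis(phi, sw) - d_seis) / 0.05) ** 2
          - 0.5 * ((np.log(resist(phi, sw)) - np.log(d_res)) / 0.1) ** 2)
post = np.exp(loglik - loglik.max())
post /= post.sum()                            # joint posterior on the grid
print("posterior mean phi:", (post * phi).sum(), " sw:", (post * sw).sum())
```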


2006 ◽  
Vol 84 (11) ◽  
pp. 1698-1701 ◽
Author(s):  
J. Fieberg ◽  
D.F. Staples

Hierarchical (random effects) models provide a statistical framework for estimating variance parameters that describe temporal and spatial variability of vital rates in population dynamic models. In practice, estimates of variance parameters (e.g., process error) from these models are often confused with estimates of uncertainty about model parameter estimates (e.g., standard errors). These two sources of “error” have different implications for predictions from stochastic models. Estimates of process error (or variability) are useful for describing the magnitude of variation in vital rates over time and are a feature of the modeled process itself, whereas estimates of parameter standard errors (or uncertainty) are necessary for interpreting how well we are able to estimate model parameters and whether they differ among groups. The goal of this comment is to illustrate these concepts in the context of a recent paper by A.W. Reed and N.A. Slade (Can. J. Zool. 84: 635–642 (2006)). In particular, we will show that their “hypothesis tests” involving mean parameters are actually comparisons of the estimated distributions of vital rates among groups of individuals.
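The distinction is easy to demonstrate by simulation (the values below are arbitrary): with more years of data the standard error of the mean vital rate shrinks, while the estimate of the process standard deviation settles at its true, irreducible value.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, process_sd = 0.8, 0.1    # mean vital rate and its true temporal variability

for n_years in (5, 20, 80):
    sds, ses = [], []
    for _ in range(1000):
        rates = rng.normal(mu, process_sd, n_years)   # annual vital rates
        sds.append(rates.std(ddof=1))                 # process error estimate
        ses.append(rates.std(ddof=1) / np.sqrt(n_years))  # SE of the mean
    print(f"{n_years:3d} years: process SD ~ {np.mean(sds):.3f} "
          f"(stays near {process_sd}), SE of mean ~ {np.mean(ses):.3f} (shrinks)")
```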


2021 ◽  
Author(s):  
Udo Boehm ◽  
Nathan J. Evans ◽  
Quentin Frederik Gronau ◽  
Dora Matzke ◽  
Eric-Jan Wagenmakers ◽  
...  

Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the non-linearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work we show how these challenges can be addressed through a combination of Bayesian hierarchical modelling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study.
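One simple way to make model uncertainty concrete is BIC-weighted model averaging, shown in the hypothetical sketch below; the paper's hierarchical diffusion-model setting is far richer, and this only illustrates the averaging step over two candidate models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 50)
y = 0.3 * x + rng.normal(0, 1, 50)       # weak effect: model choice is uncertain

def bic(loglik, k, n):
    return -2 * loglik + k * np.log(n)   # BIC approximates -2 log marginal lik.

# M0: y ~ N(mu, sigma);  M1: y ~ N(a + b*x, sigma)
m0 = stats.norm.logpdf(y, y.mean(), y.std(ddof=0)).sum()
b, a = np.polyfit(x, y, 1)
res = y - (a + b * x)
m1 = stats.norm.logpdf(y, a + b * x, res.std(ddof=0)).sum()

bics = np.array([bic(m0, 2, 50), bic(m1, 3, 50)])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                             # approximate posterior model probabilities
slope_bma = w[0] * 0.0 + w[1] * b        # model-averaged slope estimate
print("model weights:", w, " BMA slope:", slope_bma)
```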


2017 ◽  
Vol 78 (5) ◽  
pp. 826-856 ◽  
Author(s):  
Miguel A. García-Pérez

Bock’s nominal response model (NRM) is sometimes used to identify the empirical order of response categories in polytomous items, but its application tags many items as having disordered categories. Disordered category estimates may not reflect a true characteristic of the items but, rather, a numerically best-fitting solution possibly equivalent to other solutions with orderly estimated categories. To investigate this possibility, an order-constrained variant of the NRM was developed that enforces the preassumed order of categories on the parameter estimates, allowing a comparison of its outcomes with those of the original unconstrained NRM. For items with ordered categories, the order-constrained and unconstrained solutions should account for the data equally well, even if the latter estimates disordered categories for some items; for items with truly disordered categories, the unconstrained solution should outperform the order-constrained solution. Criteria for this comparative analysis are defined and their utility is tested in several simulation studies with items of diverse characteristics, including ordered and disordered categories. The results demonstrate that a comparison of order-constrained and unconstrained calibrations on these criteria provides the evidence needed to determine whether category disorder estimated for some items by the original unconstrained form of the NRM is authentic or spurious. Applications of this method to assess category order in existing data sets are presented and practical implications are discussed.
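The core of such an order constraint is a reparameterisation. The sketch below is a generic illustration, not the paper's implementation: unconstrained reals are mapped to strictly increasing category slopes via a cumulative sum of positive increments, and the result feeds the NRM's softmax response function.

```python
import numpy as np

def ordered_slopes(delta):
    """Map unconstrained reals to strictly increasing slopes a_0 < a_1 < ..."""
    return np.concatenate([[delta[0]],
                           delta[0] + np.cumsum(np.exp(delta[1:]))])

def nrm_probs(theta, slopes, intercepts):
    """Nominal response model: softmax of a_k * theta + c_k over categories."""
    z = slopes * theta + intercepts
    ez = np.exp(z - z.max())                 # stabilised softmax
    return ez / ez.sum()

# Whatever the optimizer does to delta, the resulting slopes stay ordered.
slopes = ordered_slopes(np.array([-1.0, 0.2, -0.5]))
print(slopes, nrm_probs(0.5, slopes, np.zeros(3)))
```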


1997 ◽  
Vol 36 (5) ◽  
pp. 61-68 ◽  
Author(s):  
Hermann Eberl ◽  
Amar Khelil ◽  
Peter Wilderer

A numerical method for the identification of parameters of nonlinear higher-order differential equations is presented, based on the Levenberg-Marquardt algorithm. The parameters can be estimated using several reference data sets simultaneously. This leads to a multicriteria optimization problem, which is treated using the Pareto optimality concept. In this paper, the emphasis is on the presentation of the calibration method. The identification of the parameters of a nonlinear hydrological transport model for urban runoff is included as an example, but the method can be applied to other problems as well.
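A minimal sketch of the multi-data-set calibration follows (the toy model and weights are assumptions, not the paper's transport model): the weighted residual vectors of all reference data sets are stacked into one Levenberg-Marquardt problem, and sweeping the weights traces out Pareto-optimal compromises.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    k, c = params
    return c * (1.0 - np.exp(-k * t))        # toy stand-in for the ODE solution

t = np.linspace(0, 10, 30)
rng = np.random.default_rng(6)
data_sets = [model([0.5, 2.0], t) + rng.normal(0, s, t.size)
             for s in (0.05, 0.3)]           # two reference data sets

def residuals(params, w):
    """Stack the weighted residuals of all data sets into one vector."""
    return np.concatenate([np.sqrt(wi) * (model(params, t) - d)
                           for wi, d in zip(w, data_sets)])

for w1 in (0.1, 0.5, 0.9):                   # weight sweep -> Pareto front
    fit = least_squares(residuals, x0=[1.0, 1.0],
                        args=([w1, 1 - w1],), method="lm")
    print(f"w=({w1:.1f},{1-w1:.1f})  k={fit.x[0]:.3f}  c={fit.x[1]:.3f}")
```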

