On the concept of model structural error

2005 ◽  
Vol 52 (6) ◽  
pp. 167-175 ◽  
Author(s):  
K. Beven

A consideration of model structural error leads to some particularly interesting tensions in the model calibration/conditioning process. In applying models we can usually only assess the total error on some output variable for which we have observations. This total error may arise from input and boundary condition errors, model structural errors, and error on the output observation itself (not only measurement error but also differences in meaning between what is modelled and what is measured). Statistical approaches to model uncertainty generally assume that the errors can be treated as an additive term on the (possibly transformed) model output. This allows for compensation of all the sources of error, as if the model predictions were correct and the total error could be treated as "measurement error." Model structural error is not easily evaluated within this framework. An alternative approach that puts more emphasis on model evaluation and rejection is suggested. It is recognised that model success or failure within this framework will depend heavily on an assessment of both input data errors (a "perfect" model will not produce acceptable results if driven with poor input data) and effective observation error (including a consideration of the meaning of observed variables relative to those predicted by a model).
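The rejectionist evaluation suggested here can be illustrated as a limits-of-acceptability test: a candidate model is retained only if its simulated output falls within the effective observation error around every observation. A minimal sketch; the data, tolerances, and candidate models below are invented for illustration:

```python
import numpy as np

def evaluate_model(simulated, observed, tol):
    """Reject a model unless every simulated value lies within the
    effective observation error bounds around the observation."""
    within = np.abs(simulated - observed) <= tol
    return bool(np.all(within))

# Hypothetical observations and an assumed +/-30% effective observation error.
observed = np.array([1.0, 2.1, 3.2, 2.5])
tol = 0.3 * np.abs(observed)

# Two hypothetical candidate parameterisations of the same model.
candidates = {
    "model_a": np.array([1.1, 2.0, 3.0, 2.6]),
    "model_b": np.array([1.9, 2.9, 4.4, 3.6]),  # biased high: rejected
}
for name, sim in candidates.items():
    status = "retained" if evaluate_model(sim, observed, tol) else "rejected"
    print(name, status)
```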

2019 ◽  
pp. 37-40
Author(s):  
O. Krychevets

This paper presents the results of an investigation into the behavior of the input data error transformation functions for different types of computing components of measurement systems, using generalized models of those components developed on the basis of finite automata theory. It is shown that, depending on the kind and value of the input data error transformation function (the metrological condition of a computing component), the errors of measurement results obtained with the systems' measuring channels change in a determinate way in both the static and dynamic regimes of the computing components. The basic dependences of the measurement result errors on the input data errors and on the types of input data transformation functions are determined, and the results of their calculation are given. The investigation results demonstrate a linear dependence of the measurement result errors on the input data errors ΔX(tn). In addition, calculation of the transformation function f = ΔY(tn)/ΔX(tn) gives a steady-state value f = 1.0, i.e. such a computing component neither transforms the input data error nor reverses its sign. For iterative procedures, the input data errors do not affect the final measurement result or its accuracy. The measurement error values Δyn depend on the iteration number and decrease as it increases. Of particular interest is the behavior of the input data error transformation function: first, its values depend on the number of iterations; second, f < 1, which clearly shows that the input data errors decrease with the increasing number of iterations; and third, the occurrence of values f = 0 indicates that the transformation function is able to "swallow up" the input data error by the end of the computational procedure. For linear-chain structures, the data show a predominantly linear dependence of the measurement error Δs on the input data error Δx, and no dependence of the chain's transformation function f on the input data errors Δx. Computing components with a cyclic structure show the same dependence of the measurement errors Δt on the input data errors, and the same behavior of the transformation function ft/x, as the components realizing iterative procedures described above. The difference is that computing components with a cyclic structure realize a (sub)space iteration, as opposed to the time iteration of the components considered earlier. Computing components with a complicated structure (e.g. serial-cyclic, serial-parallel, etc.) show the dependence of measurement errors on input data errors that is specific to the linear link, which for such a structure is determinative in evaluating the measurement error; the input data error transformation function behaves similarly.
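The behavior described for iterative procedures can be reproduced with a toy fixed-point iteration in which the input datum is the initial approximation: the transformation function f = Δy/Δx starts below 1 and approaches 0 as the iteration converges, i.e. the procedure absorbs the input error. A minimal sketch; the iteration and numbers are illustrative, not taken from the paper:

```python
import math

def iterate(x0, n_steps):
    """Fixed-point iteration x_{k+1} = cos(x_k); converges for any real x0."""
    x = x0
    for _ in range(n_steps):
        x = math.cos(x)
    return x

dx = 0.05                                 # input data error on the initial value
for n in (1, 2, 5, 10, 20):
    y_true = iterate(0.7, n)              # "error-free" input
    y_pert = iterate(0.7 + dx, n)         # perturbed input
    dy = y_pert - y_true
    f = dy / dx                           # error transformation function f = dy/dx
    print(f"n={n:2d}  dy={dy:+.2e}  f={f:+.3f}")
# |f| shrinks toward 0 with the number of iterations: the input error is
# progressively swallowed up. A pass-through (linear-chain) component would
# instead give a steady-state f = 1.0.
```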


2009 ◽  
Vol 48 (6) ◽  
pp. 1230-1244 ◽  
Author(s):  
Huijuan Lu ◽  
Qin Xu

Abstract Assimilation experiments are carried out with simulated radar radial-velocity observations to examine the impacts of observation accuracy and resolutions on storm-scale wind assimilation with an ensemble square root filter (EnSRF) on a storm-resolving grid (Δx = 2 km). In this EnSRF, the background covariance is estimated from an ensemble of 40 imperfect-model predictions. The observation error includes both measurement error and representativeness error, and the error variance is estimated from the simulated observations against the simulated “truth.” The results show that the analysis is not significantly improved when the measurement error is overly reduced (from 4 to 1 m s−1) and becomes smaller than the representativeness error. The analysis can be improved by properly coarsening the observation resolution (to 2 km in the radial direction) with an increase in measurement accuracy and further improved by properly enhancing the temporal resolution of radar volume scans (from every 5 to 2 or 1 min) with a decrease in measurement accuracy. There can be an optimal balance or trade-off between measurement accuracy and resolutions (in space and time) for configuring radar scans, especially phased-array radar scans, to improve storm-scale radar wind analysis and assimilation.
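The square-root update at the core of an EnSRF can be sketched for a single scalar observation: the ensemble mean is updated with the full Kalman gain, while the perturbations are updated with a reduced gain so that no perturbed observations are needed. A minimal sketch with assumed dimensions, not the authors' code:

```python
import numpy as np

def enssrf_update(X, hx, y, r):
    """Serial EnSRF update for one scalar observation.

    X  : (n_state, n_ens) ensemble of state vectors
    hx : (n_ens,) ensemble of simulated observations H(x)
    y  : scalar observation
    r  : observation error variance (measurement + representativeness)
    """
    n_ens = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean                              # state perturbations
    hx_mean = hx.mean()
    hxp = hx - hx_mean                           # simulated-obs perturbations

    phT = Xp @ hxp / (n_ens - 1)                 # cov(x, Hx)
    hph = hxp @ hxp / (n_ens - 1)                # var(Hx)
    K = phT / (hph + r)                          # Kalman gain (mean update)
    alpha = 1.0 / (1.0 + np.sqrt(r / (hph + r)))  # square-root gain reduction

    x_mean = x_mean + K[:, None] * (y - hx_mean)
    Xp = Xp - alpha * K[:, None] * hxp[None, :]
    return x_mean + Xp

# Hypothetical 40-member ensemble of a 3-variable state, one wind observation
# with a 4 m/s measurement error; the first state variable is observed.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 40))
X_a = enssrf_update(X, hx=X[0], y=1.5, r=4.0**2)
print(X_a.mean(axis=1))
```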


2013 ◽  
Vol 30 (6) ◽  
pp. 1107-1122 ◽  
Author(s):  
Thomas M. Smith ◽  
Samuel S. P. Shen ◽  
Li Ren ◽  
Phillip A. Arkin

Abstract Uncertainty estimates are computed for a statistical reconstruction of global monthly precipitation that was developed in an earlier publication. The reconstruction combined the use of spatial correlations with gauge precipitation and correlations between precipitation and related data beginning in 1900. Several types of errors contribute to uncertainty, including errors associated with the reconstruction method and input data errors. This reconstruction includes the use of correlated data for the ocean-area first guess, which contributes to much of the uncertainty over those regions. Errors associated with the input data include random, sampling, and bias errors. Random and bias data errors are mostly filtered out of the reconstruction analysis and are the smallest components of the total error. The largest errors are associated with sampling and the method, which together dominate the total error. The uncertainty estimates in this study indicate that (i) over oceans the reconstruction is most reliable in the tropics, especially the Pacific, because of the large spatial scales of ENSO; (ii) over the high-latitude oceans multidecadal variations are fairly reliable, but many month-to-month variations are not; and (iii) over- and near-land errors are much smaller because of local gauge observations. The reconstruction indicates that the average precipitation increases early in the twentieth century, followed by several decades of multidecadal variations with little trend until near the end of the century, when precipitation again appears to systematically increase. The uncertainty estimates indicate that the average changes over land are most reliable, while over oceans the average change over the reconstruction period is slightly larger than the uncertainty.
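If the error components are treated as independent, the decomposition described can be illustrated by combining standard errors in quadrature; the values below are placeholders, not numbers from the paper:

```python
import numpy as np

# Hypothetical standard errors (mm/month) for one grid box and month.
errors = {"random": 0.5, "bias": 0.3, "sampling": 2.0, "method": 2.5}

# Independent components combine in quadrature; sampling and method dominate.
total = np.sqrt(sum(e**2 for e in errors.values()))
for name, e in errors.items():
    print(f"{name:>8s}: {e:4.1f}  ({100 * e**2 / total**2:4.1f}% of variance)")
print(f"   total: {total:4.1f}")
```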


Author(s):  
Mark D. Sensmeier ◽  
Kurt L. Nichol

Correlation between dynamic strain gage measurements and modal analysis results can be adversely affected by gage misplacement and gage misorientation. An optimization algorithm has been developed which allows the modeled strain gage locations and orientations to be varied within specified tolerances. An objective function is defined based on the least-squares sum of the differences between experimental and model results. The Kuhn-Tucker conditions are then applied to find the gage locations and orientations which minimize this objective function. The procedure is applied on a one-time basis, considering all measured modes of vibration simultaneously. This procedure minimizes instrumentation error, which then allows the analyst to modify the model to more accurately represent other factors, including boundary conditions. Flat-plate vibratory data were used to demonstrate a significant improvement in correlation between measured data and model predictions.
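The procedure can be sketched as a bound-constrained least-squares problem: the modeled gage position and orientation are adjusted within tolerance to minimize the mismatch with measured modal strains, and a bound-constrained minimizer satisfies the Kuhn-Tucker conditions. A minimal sketch with an invented strain field, measurements, and placement tolerances, not the authors' model:

```python
import numpy as np
from scipy.optimize import minimize

def modal_strain(x, theta, mode):
    """Toy modal strain at position x for a uniaxial gage at angle theta."""
    shape = np.sin((mode + 1) * np.pi * x)     # illustrative modal strain shape
    return shape * np.cos(theta) ** 2          # gage rotation effect

measured = np.array([0.72, -0.41])     # hypothetical measured strains, 2 modes
nominal = dict(x=0.30, theta=0.0)      # nominal gage location/orientation

def objective(p):
    """Least-squares mismatch over all measured modes simultaneously."""
    x, theta = p
    model = np.array([modal_strain(x, theta, m) for m in range(2)])
    return np.sum((measured - model) ** 2)

# Bounds encode the placement tolerances; L-BFGS-B returns a point satisfying
# the Kuhn-Tucker conditions of the bound-constrained problem.
tol_x, tol_theta = 0.02, np.radians(3.0)
bounds = [(nominal["x"] - tol_x, nominal["x"] + tol_x),
          (nominal["theta"] - tol_theta, nominal["theta"] + tol_theta)]
res = minimize(objective, [nominal["x"], nominal["theta"]],
               bounds=bounds, method="L-BFGS-B")
print("adjusted location/orientation:", res.x)
```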


2019 ◽  
Author(s):  
Vladimir Afanasyev ◽ 
Alexey Voloboy

This paper describes the use of a per-voxel RANSAC approach in ART tomography. The method works as an addition to any ART variant and does not depend on its internal details. First, histograms of voxel-map corrections are constructed in each voxel during a usual ART pass. They are then used to refine the absorption map. This improves the resulting voxel absorption map, reducing ghost effects caused by input data errors and inconsistency. The method is demonstrated with an optical tomography algorithm, since optical tomography has particular difficulties with input data. The proposed algorithm was implemented to run on the GPU.
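The per-voxel refinement can be sketched as a consensus step: the corrections a voxel receives during one ART pass are histogrammed, the dominant bin is treated as the inlier set, and outlier corrections from inconsistent projections are discarded. A minimal sketch under assumed details (bin count, inlier rule), not the authors' implementation:

```python
import numpy as np

def robust_voxel_update(corrections, n_bins=32):
    """RANSAC-like refinement for one voxel: histogram the corrections it
    received during an ART pass and keep only those in the dominant bin,
    treating the rest as outliers caused by inconsistent input data."""
    hist, edges = np.histogram(corrections, bins=n_bins)
    k = np.argmax(hist)                           # dominant (consensus) bin
    lo, hi = edges[k], edges[k + 1]
    inliers = corrections[(corrections >= lo) & (corrections <= hi)]
    return inliers.mean()

# Hypothetical corrections accumulated for one voxel: most rays agree, a few
# inconsistent projections would otherwise drag the update toward a ghost.
rng = np.random.default_rng(1)
corr = np.concatenate([rng.normal(0.10, 0.01, 90),   # consistent rays
                       rng.normal(0.60, 0.05, 10)])  # outlier rays
print(robust_voxel_update(corr))   # close to 0.10; ghost corrections suppressed
```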


2016 ◽  
Vol 10 (2) ◽  
pp. 613-622 ◽  
Author(s):  
Wiley Steven Bogren ◽  
John Faulkner Burkhart ◽  
Arve Kylling

Abstract. We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
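The direct-beam part of the tilt error can be reproduced from geometry alone: a sensor tilted toward the sun sees the beam at a smaller incidence angle than the true solar zenith angle. The sketch below slightly overestimates the quoted 2.7, 8.1, and 13.5 % figures because it assumes a purely direct beam (direct_fraction = 1); a realistic diffuse fraction dilutes the error:

```python
import numpy as np

def worst_case_tilt_error(sza_deg, tilt_deg, direct_fraction=1.0):
    """Relative irradiance error for a sensor tilted toward the sun
    (worst-case azimuth); only the direct beam is affected by tilt."""
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    direct_err = np.cos(sza - tilt) / np.cos(sza) - 1.0
    return direct_fraction * direct_err

# High-latitude case from the abstract: solar zenith angle of 60 degrees.
for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg: {100 * worst_case_tilt_error(60, tilt):.1f}% error")
```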


2021 ◽  
pp. 46-55
Author(s):  
А.В. Никитин ◽  
А.В. Михайлов ◽  
А.С. Петров ◽  
С.Э. Попов

A technique, resistant to input data errors, is presented for determining the depth and opening of a two-dimensional surface defect in a ferromagnet. The defect and the magnetic transducers are located on opposite sides of a metal plate. The nonlinear properties of the ferromagnet are taken into account. The components of the magnetic field inside the metal are reconstructed from the components measured above the defect-free surface of the metal. Numerical experiments established the limits of applicability of the method, and its results have been verified experimentally.
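Reconstructing field components inside the metal from measurements above the surface is an ill-posed inverse problem, and resistance to input data errors is typically obtained through regularization. A generic Tikhonov sketch with a stand-in linear forward operator; the paper's actual operator is nonlinear and is not reproduced here:

```python
import numpy as np

# Stand-in linear forward operator A mapping the field inside the metal to
# the components measured above the surface (a smoothing kernel, so the
# inversion amplifies input data errors unless it is regularized).
rng = np.random.default_rng(2)
n = 50
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)

x_true = np.zeros(n)
x_true[20:25] = 1.0                        # hypothetical defect signature
b = A @ x_true + rng.normal(0, 0.05, n)    # noisy input data

# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2.
lam = 1.0
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print("reconstruction error:", np.linalg.norm(x_reg - x_true))
```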


SPE Journal ◽  
2020 ◽  
Vol 25 (02) ◽  
pp. 951-968 ◽  
Author(s):  
Minjie Lu ◽  
Yan Chen

Summary Owing to the complex nature of hydrocarbon reservoirs, the numerical model constructed by geoscientists is always a simplified version of reality: for example, it might lack resolution from discretization and lack accuracy in modeling some physical processes. This flaw in the model, which causes mismatch between actual observations and simulated data when "perfect" model parameters are used as model inputs, is known as "model error". Even in a situation when the model is a perfect representation of reality, the inputs to the model are never completely known. During a typical model calibration procedure, only a subset of model inputs is adjusted to improve the agreement between model responses and historical data. The remaining model inputs that are not calibrated and are likely fixed at incorrect values result in model error in a similar manner to the imperfect-model scenario. Assimilation of data without accounting for model error can result in the incorrect adjustment of model parameters, the underestimation of prediction uncertainties, and bias in forecasts. In this paper, we investigate the benefit of recognizing and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated "total error" (a combination of model error and observation error) is estimated from the data residual after a standard history match using the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the estimation of model parameters and the quantification of prediction uncertainty. We first illustrate the method using a synthetic 2D five-spot example, where some model errors are deliberately introduced and the results are closely examined against the known "true" model. Then, the Norne field case is used to further evaluate the method. The Norne model has previously been history-matched using the LM-EnRML (Chen and Oliver 2014), where cell-by-cell properties (permeability, porosity, net-to-gross, vertical transmissibility) and parameters related to fault transmissibility, depths of water/oil contacts, and relative permeability function are adjusted to honor historical data. In this previous study, the authors highlighted the importance of including large amounts of model parameters, the proper use of localization, and heuristic adjustment of data noise to account for modeling error. In this paper, we improve on the last aspect by quantitatively estimating model error using residual analysis.
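One plausible reading of the residual analysis is sketched below: after a first history match, the data residuals of the matched ensemble are split into a systematic part and a correlated covariance, which can then replace a diagonal data-noise assumption in the next assimilation. This illustrates the idea; it is not the authors' exact estimator:

```python
import numpy as np

def estimate_total_error_covariance(d_obs, d_sim_ensemble):
    """Estimate a correlated total-error description from the residuals left
    after a first history match.

    d_obs          : (n_data,) observed production data
    d_sim_ensemble : (n_data, n_ens) simulated data from the matched ensemble
    """
    residuals = d_obs[:, None] - d_sim_ensemble        # (n_data, n_ens)
    bias = residuals.mean(axis=1, keepdims=True)       # systematic part
    centered = residuals - bias
    cov = centered @ centered.T / (residuals.shape[1] - 1)
    return bias.squeeze(), cov                         # used in the next pass

# Hypothetical matched ensemble: 100 data points, 40 members, a leftover
# systematic bias of 0.5 plus correlated-looking noise.
rng = np.random.default_rng(3)
d_obs = rng.normal(size=100)
d_sim = d_obs[:, None] + 0.5 + 0.2 * rng.normal(size=(100, 40))
bias, cov = estimate_total_error_covariance(d_obs, d_sim)
print("mean bias:", bias.mean(), " mean residual variance:", np.diag(cov).mean())
```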


2003 ◽  
Vol 60 (10) ◽  
pp. 1217-1228 ◽  
Author(s):  
Andre E Punt

Four methods for fitting production models, including three that account for the effects of error in the population dynamics equation (process error) and when indexing the population (observation error), are evaluated by means of Monte Carlo simulation. An estimator that represents the distributions of biomass explicitly and integrates over the unknown process errors numerically (the NISS estimator) performs best of the four estimators considered, never being the worst estimator and often being the best in terms of the medians of the absolute values of the relative errors. The total-error approach outperforms the observation-error estimator conventionally used to fit dynamic production models, and the performance of a Kalman filter based estimator is particularly poor. Although the NISS estimator is the best-performing estimator considered, its estimates of quantities of management interest are severely biased and highly imprecise for some of the scenarios considered.
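The conventional observation-error estimator against which the other methods are compared can be sketched for a Schaefer surplus production model: the dynamics are projected deterministically and all error is attributed to lognormal noise in the abundance index. A minimal sketch with invented catch and index series:

```python
import numpy as np
from scipy.optimize import minimize

def project_biomass(r, K, B0, catches):
    """Schaefer surplus production dynamics, projected deterministically:
    B_{t+1} = B_t + r*B_t*(1 - B_t/K) - C_t."""
    B = [B0]
    for C in catches:
        B.append(max(B[-1] + r * B[-1] * (1 - B[-1] / K) - C, 1e-6))
    return np.array(B[:-1])

def neg_log_like(params, catches, index):
    """Observation-error estimator: no process error, lognormal noise in
    the abundance index I_t = q * B_t * exp(eps_t)."""
    r, K, q, sigma = np.exp(params)            # log-parameterisation
    B = project_biomass(r, K, K, catches)      # assume B0 = K
    resid = np.log(index) - np.log(q * B)
    return 0.5 * np.sum(resid**2) / sigma**2 + len(resid) * np.log(sigma)

# Hypothetical catch series and abundance index.
catches = np.array([50, 60, 70, 80, 60, 40, 30, 30, 40, 50.0])
index = np.array([10.0, 9.5, 9.0, 8.2, 7.5, 7.3, 7.6, 8.0, 8.3, 8.2])
res = minimize(neg_log_like, np.log([0.3, 1000.0, 0.01, 0.2]),
               args=(catches, index), method="Nelder-Mead")
print("r, K, q, sigma =", np.exp(res.x))
```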


2007 ◽  
Vol 56 (6) ◽  
pp. 11-18 ◽  
Author(s):  
E. Lindblom ◽  
H. Madsen ◽  
P.S. Mikkelsen

In this paper, two attempts are made to assess the uncertainty involved in model predictions of copper loads from stormwater systems. In the first attempt, the GLUE methodology is applied to derive model parameter sets that result in model outputs encompassing a significant number of the measurements. In the second attempt, the conceptual model is reformulated as a grey-box model, followed by parameter estimation. Given data from an extensive measurement campaign, the two methods suggest that the output of the stormwater pollution model is associated with significant uncertainty. With the proposed model and input data, the GLUE analysis shows that the total sampled copper mass can be predicted within a range of ±50% of the median value (385 g), whereas the grey-box analysis showed a prediction uncertainty of less than ±30%. Future work will clarify the pros and cons of the two methods and furthermore explore to what extent the estimation can be improved by modifying the underlying accumulation-washout model.
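A GLUE analysis of this kind can be sketched in a few lines: sample parameter sets from their priors, score each with an informal likelihood, keep the behavioural sets, and derive prediction bounds from the retained simulations. The surrogate accumulation-washout model, priors, and threshold below are invented for illustration:

```python
import numpy as np

def glue(simulate, observed, n_samples=5000, keep_frac=0.1, rng=None):
    """Minimal GLUE sketch: sample parameters, score with an informal
    likelihood, keep behavioural sets, return prediction bounds."""
    if rng is None:
        rng = np.random.default_rng(0)
    thetas = rng.uniform([0.1, 0.01], [2.0, 0.5], size=(n_samples, 2))
    sims = np.array([simulate(t) for t in thetas])
    L = 1.0 / np.sum((sims - observed) ** 2, axis=1)   # informal likelihood
    keep = L >= np.quantile(L, 1.0 - keep_frac)        # behavioural sets
    lo, med, hi = np.percentile(sims[keep], [5, 50, 95], axis=0)
    return lo, med, hi

# Hypothetical surrogate for an accumulation-washout load: load_t = a*exp(-b*t).
t = np.arange(10.0)
observed = 1.0 * np.exp(-0.2 * t) + np.random.default_rng(1).normal(0, 0.05, 10)
lo, med, hi = glue(lambda p: p[0] * np.exp(-p[1] * t), observed)
print("median cumulative load:", med.sum(), " band:", lo.sum(), "-", hi.sum())
```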

