Evolutionary modeling-based approach for model errors correction

2012 ◽  
Vol 19 (4) ◽  
pp. 439-447 ◽  
Author(s):  
S. Q. Wan ◽  
W. P. He ◽  
L. Wang ◽  
W. Jiang ◽  
W. Zhang

Abstract. The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as the prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data". On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. On this basis, a new EM-based approach to estimating model errors is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it realizes the combination of statistics and dynamics to a certain extent.
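The twin-experiment setup the abstract describes can be sketched in a few lines: the "truth" is the Lorenz (1963) system with an added periodic term, the prediction model omits that term, and their difference is the model error that the evolutionary-modeling step would then try to fit. The standard Lorenz parameters are factual; the forcing amplitude and frequency below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0, forcing=0.0):
    """Classic Lorenz (1963) right-hand side; `forcing` perturbs the x-equation."""
    x, y, z = state
    return np.array([sigma * (y - x) + forcing,
                     x * (rho - z) - y,
                     x * y - beta * z])

def integrate(state, steps, dt=0.01, amp=0.0, omega=1.0):
    """Fourth-order Runge-Kutta; amp*sin(omega*t) is the periodic term."""
    traj = [np.asarray(state, dtype=float)]
    for n in range(steps):
        f = amp * np.sin(omega * n * dt)
        k1 = lorenz_rhs(traj[-1], forcing=f)
        k2 = lorenz_rhs(traj[-1] + 0.5 * dt * k1, forcing=f)
        k3 = lorenz_rhs(traj[-1] + 0.5 * dt * k2, forcing=f)
        k4 = lorenz_rhs(traj[-1] + dt * k3, forcing=f)
        traj.append(traj[-1] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

x0 = np.array([1.0, 1.0, 1.0])
truth = integrate(x0, 500, amp=2.0)   # "reality": Lorenz plus periodic term
model = integrate(x0, 500, amp=0.0)   # imperfect prediction model
model_error = truth - model           # residual an EM step would try to fit
```

Because the system is chaotic, the structural error compounds along the trajectory, which is exactly why a purely statistical correction of the final state is harder than identifying the missing dynamical term.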

2020 ◽  
Author(s):  
Zhaolu Hou ◽  
Bin Zuo ◽  
Shaoqing Zhang ◽  
Fei Huang ◽  
Ruiqiang Ding ◽  
...  

Numerical forecasts always have associated errors. Analogue correction methods combine numerical simulations with statistical analyses to reduce model forecast errors. However, identifying appropriate analogues remains a challenging task. Here, we use the Local Dynamical Analog (LDA) method to locate analogues and correct model forecast errors. As an example, an ENSO model forecast error correction experiment confirms that the LDA method locates more dynamical analogues of states of interest and better corrects forecast errors than do other methods. This is because the LDA method ensures similarity of the initial states and the evolution of both states. In addition, the LDA method can be applied using a scalar time series, which reduces the complexity of the dynamical system. Model forecast error correction using the LDA method provides a new approach to correcting state-dependent model errors and can be readily integrated with other advanced models.
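A minimal analogue-correction loop in the spirit the abstract describes (not the authors' LDA algorithm; the distance function, window length, and number of analogues below are illustrative assumptions): given a scalar historical series and the forecast errors recorded alongside it, score candidate analogues by both initial-state similarity and evolution similarity, then average the stored errors of the best matches.

```python
import numpy as np

def lda_correct(library, library_errors, segment, k=5):
    """Toy analogue correction: find the k library segments whose initial
    value AND short evolution best match `segment`, then average their
    stored forecast errors to estimate the current forecast's error."""
    w = len(segment)
    dists = []
    for i in range(len(library) - w):
        cand = library[i:i + w]
        # distance combines initial-state similarity and evolution similarity
        d = abs(cand[0] - segment[0]) \
            + np.mean(np.abs(np.diff(cand) - np.diff(segment)))
        dists.append((d, i))
    dists.sort()
    best = [library_errors[i + w] for _, i in dists[:k]]
    return float(np.mean(best))   # estimated error to subtract from the forecast
```

Usage would be `corrected = forecast - lda_correct(history, history_errors, recent_window)`; requiring similar evolution, not just a similar initial value, is what distinguishes a dynamical analogue from a simple nearest neighbour.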


2019 ◽  
Vol 122 (1) ◽  
pp. 681-699 ◽  
Author(s):  
E. Tattershall ◽  
G. Nenadic ◽  
R. D. Stevens

Abstract. Research topics rise and fall in popularity over time, some more swiftly than others. The fastest rising topics are typically called bursts; for example "deep learning", "internet of things" and "big data". Being able to automatically detect and track bursty terms in the literature could give insight into how scientific thought evolves over time. In this paper, we take a trend detection algorithm from stock market analysis and apply it to over 30 years of computer science research abstracts, treating the prevalence of each term in the dataset like the price of a stock. Unlike previous work in this domain, we use the free text of abstracts and titles, resulting in a finer-grained analysis. We report a list of bursty terms, and then use historical data to build a classifier to predict whether they will rise or fall in popularity in the future, obtaining accuracy in the region of 80%. The proposed methodology can be applied to any time-ordered collection of text to yield past and present bursty terms and predict their probable fate.
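Treating a term's prevalence like a stock price can be sketched with a MACD-style indicator, one common stock-market trend signal built from two exponential moving averages; the abstract does not name the exact indicator or window lengths, so the spans below are illustrative assumptions.

```python
import numpy as np

def ema(series, span):
    """Exponentially weighted moving average (the usual span-to-alpha mapping)."""
    alpha = 2.0 / (span + 1.0)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return np.array(out)

def burstiness(prevalence, fast=12, slow=26):
    """MACD-style trend signal: positive values flag a term whose
    prevalence is rising faster than its long-run average."""
    return ema(prevalence, fast) - ema(prevalence, slow)

years = np.arange(30)
flat = np.full(30, 0.01)                          # stable term
rising = 0.01 + 0.02 / (1 + np.exp(8 - years))    # term that bursts mid-series
```

The same signal computed on historical windows (did it keep rising or fall back?) supplies the labels for the rise-or-fall classifier the abstract mentions.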


SPE Journal ◽  
2020 ◽  
Vol 25 (02) ◽  
pp. 951-968 ◽  
Author(s):  
Minjie Lu ◽  
Yan Chen

Summary. Owing to the complex nature of hydrocarbon reservoirs, the numerical model constructed by geoscientists is always a simplified version of reality: for example, it might lack resolution from discretization and lack accuracy in modeling some physical processes. This flaw in the model, which causes a mismatch between actual observations and simulated data when "perfect" model parameters are used as model inputs, is known as "model error". Even when the model is a perfect representation of reality, the inputs to the model are never completely known. During a typical model calibration procedure, only a subset of model inputs is adjusted to improve the agreement between model responses and historical data. The remaining model inputs, which are not calibrated and are likely fixed at incorrect values, produce model error in a similar manner to the imperfect-model scenario. Assimilating data without accounting for model error can result in incorrect adjustment of model parameters, underestimation of prediction uncertainties, and bias in forecasts. In this paper, we investigate the benefit of recognizing and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated "total error" (a combination of model error and observation error) is estimated from the data residual after a standard history match using the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the estimation of model parameters and the quantification of prediction uncertainty. We first illustrate the method using a synthetic 2D five-spot example, where some model errors are deliberately introduced and the results are closely examined against the known "true" model. Then, the Norne field case is used to further evaluate the method.
The Norne model has previously been history-matched using the LM-EnRML (Chen and Oliver 2014), where cell-by-cell properties (permeability, porosity, net-to-gross, vertical transmissibility) and parameters related to fault transmissibility, depths of water/oil contacts, and the relative permeability function are adjusted to honor historical data. In that previous study, the authors highlighted the importance of including a large number of model parameters, the proper use of localization, and heuristic adjustment of data noise to account for modeling error. In this paper, we improve the last aspect by quantitatively estimating model error using residual analysis.
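The core quantitative step, estimating a correlated total-error covariance from post-history-match residuals, can be illustrated with a generic sample estimate; this is a plain sample covariance over ensemble residuals, not the exact LM-EnRML recipe from the paper.

```python
import numpy as np

def total_error_covariance(residuals):
    """Estimate a correlated total-error covariance from residuals
    d_obs - d_sim, arranged as one row per ensemble member and one
    column per datum. Off-diagonal terms capture correlated model error
    that a diagonal observation-error matrix would miss."""
    r = residuals - residuals.mean(axis=0)
    return r.T @ r / (residuals.shape[0] - 1)

# Synthetic residuals sharing a common mode across data points,
# mimicking structural model error on top of independent noise.
rng = np.random.default_rng(0)
common = rng.normal(size=(40, 1)) * np.ones((1, 6))
res = common + 0.1 * rng.normal(size=(40, 6))
C = total_error_covariance(res)
```

Feeding `C` (in place of a diagonal noise matrix) into subsequent assimilation steps is what lets the smoother down-weight data whose misfit is dominated by shared model error rather than independent measurement noise.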


1969 ◽  
Vol 91 (4) ◽  
pp. 554-560 ◽  
Author(s):  
B. H. Browne

A new approach is developed for estimating and correcting digital thermal network errors based upon achieving statistical reconciliation between model predictions and observed results. The development is preceded by a review of the general theory of thermal networks and the development of the canonical forms of networks and their error models. Application of the technique promises improved thermal prediction accuracy of complex systems using simplified network models.


Soft Matter ◽  
2018 ◽  
Vol 14 (45) ◽  
pp. 9232-9242 ◽  
Author(s):  
Silvia De Sio ◽  
Christoph July ◽  
Jan K. G. Dhont ◽  
Peter R. Lang

We performed total internal reflection microscopy (TIRM) experiments to determine the depletion potentials between probe spheres and a flat glass wall, induced by rod-shaped colloids (fd-virus), and we suggest a new approach to study the spatially resolved dynamics of the probe spheres.


2016 ◽  
Author(s):  
Yosuke Niwa ◽  
Yosuke Fujii ◽  
Yousuke Sawa ◽  
Yosuke Iida ◽  
Akihiko Ito ◽  
...  

Abstract. A four-dimensional variational method (4D-Var) is a popular technique for inverse modeling of atmospheric constituents, but it is not without problems. Using an icosahedral-grid transport model and the 4D-Var method, a new atmospheric greenhouse gas (GHG) inversion system has been developed. The system combines off-line forward and adjoint models with a quasi-Newton optimization scheme. The new approach is then used to conduct identical-twin experiments to investigate optimal system settings for an atmospheric CO2 inversion problem and to demonstrate the validity of the new inversion system. It is found that a system of forward and adjoint models that has smaller model errors but is non-linear performs better than one that preserves linearity with an exact adjoint relationship. Furthermore, the effectiveness of the prior error correlations is confirmed: the global error is reduced by about 15 % by adding simply designed prior error correlations. With the optimal settings, the new inversion system successfully reproduces the spatiotemporal variations of the surface fluxes, from regional scales (such as biomass burning) to the global scale. The optimization algorithm introduced in the new system does not require the difficult decomposition of a matrix that establishes the correlation among the prior flux errors, which allows the prior error covariance matrix to be designed more freely.
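The 4D-Var machinery the abstract relies on can be sketched on a toy linear problem: the cost function penalizes departure from the background plus observation misfits along the model trajectory, and the adjoint (here just matrix transposes) propagates misfit sensitivity back to the initial state. Plain gradient descent stands in for the quasi-Newton scheme; all matrices and sizes below are illustrative.

```python
import numpy as np

def fourdvar_cost_grad(x0, xb, Binv, M, H, Rinv, obs):
    """Toy linear 4D-Var: model matrix M advances the state one step,
    H observes it; transposes play the role of the adjoint model."""
    J = 0.5 * (x0 - xb) @ Binv @ (x0 - xb)
    g = Binv @ (x0 - xb)
    x = x0.copy()
    for t, y in enumerate(obs):
        x = M @ x                     # forward model step
        d = H @ x - y                 # innovation at time t+1
        J += 0.5 * d @ Rinv @ d
        # adjoint: propagate sensitivity back to the initial state
        g += np.linalg.matrix_power(M, t + 1).T @ (H.T @ Rinv @ d)
    return J, g

def minimize_gd(xb, Binv, M, H, Rinv, obs, steps=500, lr=0.05):
    """Gradient descent stands in for the quasi-Newton optimizer."""
    x = xb.copy()
    for _ in range(steps):
        _, g = fourdvar_cost_grad(x, xb, Binv, M, H, Rinv, obs)
        x -= lr * g
    return x
```

In an identical-twin experiment, observations are generated by the same (or a perturbed) model from a known truth, so the quality of the analysis can be scored exactly, which is how the abstract's system settings were compared.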


2016 ◽  
Vol 64 (4) ◽  
pp. 873-876
Author(s):  
P.Q. Baban ◽  
I.N. Rahimabadi

Abstract. In this paper, a new approach to input-output pairing for unstable systems is proposed. First, it is demonstrated that the previous method of input-output pairing for unstable plants cannot find appropriate pairs, as it only checks necessary conditions for stability and integrity. Then, a new approach using a relative error matrix and a genetic algorithm to find appropriate pairs in unstable systems is proposed. As shown, this approach takes into consideration both the static and the dynamic information of the plant when measuring interaction. Finally, the accuracy of the proposed method is demonstrated with an example and a closed-loop simulation.


2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
Zhuqing Bi ◽  
Chenming Li ◽  
Xujie Li ◽  
Hongmin Gao

Given the characteristics of fault diagnosis for pumping stations, such as complex structure, multiple mappings, and numerous uncertainties, a new approach combining a T-S fuzzy gate fault tree with a Bayesian network (BN) is proposed. On the one hand, the traditional fault tree method requires the logical relationships between events and their probability values, and can only represent events with two states; the T-S fuzzy gate fault tree method overcomes these disadvantages but remains weak in complex reasoning and supports only one-way reasoning. On the other hand, the BN is well suited to fault diagnosis of pumping stations because of its powerful ability to handle uncertain information, but its structure and conditional probability tables are difficult to determine. The proposed method therefore integrates the advantages of the two. Finally, the feasibility of the method is verified through a fault-diagnosis model of the rotor in the pumping unit; its accuracy is verified by comparison with methods based on a traditional Bayesian network and a BP neural network when historical data are sufficient, and its results are superior to both when historical data are insufficient.


2012 ◽  
Vol 33 (3-4) ◽  
pp. 319-326 ◽  
Author(s):  
Susanne Böll ◽  
Ursina Tobler ◽  
Corina C. Geiger ◽  
Günter Hansbauer ◽  
Benedikt R. Schmidt

In three Bavarian populations of Alytes obstetricans that were studied for the occurrence of Batrachochytrium dendrobatidis (Bd), the pathogen was detected. This is the first account of chytridiomycosis in Bavaria, Germany. Infected tadpoles had low infection loads, mostly of 10^1 to 10^2 genome equivalents. Under high-density rearing conditions in the laboratory, mortality rates were high after metamorphosis. Some individuals, however, showed no infection with Bd, while others survived metamorphosis in spite of low Bd loads. A new approach was chosen to obtain historical data on Bd occurrence in one of these populations: skeletochronological phalanx cross-sections of 248 individuals collected in the late 1980s were used to analyse the epidermis for chytrid sporangia. No sporangia were detected; we therefore conclude that this population was not affected by Bd in the past.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
A. Murari ◽  
E. Peluso ◽  
M. Lungaroni ◽  
P. Gaudio ◽  
J. Vega ◽  
...  

Abstract. In recent years, the techniques of the exact sciences have been applied to the analysis of increasingly complex and non-linear systems. The related uncertainties and the large amounts of data available have progressively shown the limits of the traditional hypothesis-driven methods based on first-principle theories. Therefore, a new approach of data-driven theory formulation has been developed. It is based on the manipulation of symbols with genetic computing and is meant to complement traditional procedures by exploring large datasets to find the most suitable mathematical models to interpret them. The paper reports on the vast number of numerical tests that have shown the potential of the new techniques to provide very useful insights in various studies, ranging from the formulation of scaling laws to the original identification of the most appropriate dimensionless variables to investigate a given system. The application to some of the most complex experiments in physics, in particular thermonuclear plasmas, has proved the capability of the methodology to address real problems, even highly nonlinear and practically important ones such as catastrophic instabilities. The proposed tools are therefore being increasingly used in various fields of science, and they constitute a very good set of techniques to bridge the gap between experiments, traditional data analysis and theory formulation.

