Validation of Dynamic Models by Image Distances in the Time-Scale Domain

Author(s):  
Kourosh Danai ◽  
James R. McCusker ◽  
Todd Currier ◽  
David O. Kazmer

Model validation is the procedure whereby the fidelity of a model is evaluated. The traditional approaches to dynamic model validation either rely on the magnitude of the prediction error between the process observations and model outputs or consider the observations and model outputs as time series and use their similarity to assess the closeness of the model to the process. Here, we propose transforming these time series into the time-scale domain, to enhance their delineation, and using image distances between the transformed time series to assess the closeness of the model to the process. It is shown that the image distances provide a more consistent measure of model closeness than is available from the magnitude of the prediction error.
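As a rough illustration of the idea, the sketch below maps two time series into the time-scale domain by direct convolution with a Ricker (Mexican hat) wavelet and takes the Euclidean distance between the resulting time-scale images. The wavelet choice, scale grid, and normalization are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mexican_hat(t, s):
    # Ricker (Mexican hat) wavelet evaluated at scale s.
    x = t / s
    return (1 - x**2) * np.exp(-x**2 / 2)

def cwt(signal, scales):
    # Continuous wavelet transform by direct convolution;
    # one row of coefficients per scale.
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    return np.array([np.convolve(signal, mexican_hat(t, s), mode="same") / np.sqrt(s)
                     for s in scales])

def image_distance(y_obs, y_model, scales):
    # Euclidean distance between the two time-scale "images".
    W1, W2 = cwt(y_obs, scales), cwt(y_model, scales)
    return np.sqrt(np.sum((W1 - W2) ** 2))
```

Identical series yield a distance of zero; any mismatch in shape or magnitude spreads into nonzero coefficient differences across scales.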

Author(s):  
James R. McCusker ◽  
Kourosh Danai ◽  
David O. Kazmer

Model validation is the procedure whereby the fidelity of a model is evaluated. The traditional approaches to dynamic model validation consider model outputs and observations as time series and use their similarity to assess the closeness of the model to the process. A common measure of similarity between the two time series is the cumulative magnitude of their difference, as represented by the sum of squared (or absolute) prediction error. Another important measure is the similarity of shape of the time series, but that is not readily quantifiable and is often assessed by visual inspection. This paper proposes the continuous wavelet transform as the framework for characterizing the shape attributes of time series in the time-scale domain. The feature that enables this characterization is the multiscale differential capacity of continuous wavelet transforms. According to this feature, the surfaces obtained by certain wavelet transforms represent the derivatives of the time series and, hence, can be used to quantify shape attributes, such as the slopes and slope changes of the time series at different times and scales (frequencies). Three different measures are considered in this paper to quantify these shape attributes: (i) the Euclidean distance between the wavelet coefficients of the time series pairs to denote the cumulative difference between the wavelet coefficients, (ii) the weighted Euclidean distance to discount the difference of the wavelet coefficients that do not coincide in the time-scale plane, and (iii) the cumulative difference between the markedly different wavelet coefficients of the two time series to focus the measure on the pronounced shape attributes of the time series pairs. The effectiveness of these measures is evaluated first in a model validation scenario where the true form of the process is known. 
The proposed measures are then implemented in the validation of two models of injection molding to evaluate the conformity of the shapes of the models’ pressure estimates with the shapes of pressure measurements from various locations of the mold.
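A minimal sketch of the three measures, applied to precomputed wavelet-coefficient matrices `W1` and `W2` (rows = scales, columns = times). The specific weighting rule (minimum-magnitude weights) and threshold rule (a quantile cutoff) are assumed stand-ins for the paper's definitions, chosen only to make the distinctions concrete.

```python
import numpy as np

def euclidean_measure(W1, W2):
    # (i) cumulative difference between all wavelet coefficients.
    return np.sqrt(np.sum((W1 - W2) ** 2))

def weighted_measure(W1, W2, eps=1e-12):
    # (ii) discount coefficient pairs that do not coincide in the
    # time-scale plane: weight each difference by the smaller of the
    # two coefficient magnitudes (assumed weighting scheme).
    w = np.minimum(np.abs(W1), np.abs(W2))
    w = w / (w.max() + eps)
    return np.sqrt(np.sum(w * (W1 - W2) ** 2))

def pronounced_measure(W1, W2, q=0.9):
    # (iii) keep only the markedly different coefficients, here taken
    # as those above the q-quantile of the absolute difference.
    d = np.abs(W1 - W2)
    return d[d >= np.quantile(d, q)].sum()
```

Note how the weighted measure vanishes wherever one of the two surfaces is zero, which is exactly the discounting of non-coinciding coefficients the abstract describes.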


Author(s):  
Kourosh Danai ◽  
James R. McCusker

It is shown that delineation of output sensitivities with respect to model parameters in dynamic models can be enhanced in the time-scale domain. This enhanced differentiation of output sensitivities then provides the capacity to isolate regions of the time-scale plane wherein a single output sensitivity dominates the others. Due to this dominance, the prediction error can be attributed to the error of a single parameter at these regions so as to estimate each model parameter error separately. The proposed Parameter Signature Isolation Method (PARSIM) that uses these parameter error estimates for parameter adaptation has been found to have an adaptation precision comparable to that of the Gauss-Newton method for noise-free cases. PARSIM, however, appears to be less sensitive to input conditions, while offering the promise of more effective noise suppression by the capabilities available in the time-scale domain.


Author(s):  
Kourosh Danai ◽  
James R. McCusker

It is shown that output sensitivities of dynamic models can be better delineated in the time-scale domain. This enhanced delineation provides the capacity to isolate regions of the time-scale plane, coined as parameter signatures, wherein individual output sensitivities dominate the others. Due to this dominance, the prediction error can be attributed to the error of a single parameter at each parameter signature so as to enable estimation of each model parameter error separately. As a test of fidelity, the estimated parameter errors are evaluated in iterative parameter estimation in this paper. The proposed parameter signature isolation method (PARSIM) that uses the parameter error estimates for parameter estimation is shown to have an estimation precision comparable to that of the Gauss–Newton method. The transparency afforded by the parameter signatures, however, extends PARSIM’s features beyond rudimentary parameter estimation. One such potential feature is noise suppression by discounting the parameter error estimates obtained in the finer-scale (higher-frequency) regions of the time-scale plane. Another is the capacity to assess the observability of each output through the quality of parameter signatures it provides.
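The signature idea can be caricatured as follows, assuming the absolute output sensitivities have already been mapped to the time-scale plane. The dominance factor and the least-squares attribution of the error surface are illustrative choices, not PARSIM's exact formulation.

```python
import numpy as np

def parameter_signatures(S, dominance=2.0):
    # S: array (n_params, n_scales, n_times) of absolute output
    # sensitivities in the time-scale plane. A point belongs to
    # parameter k's signature if S[k] exceeds every other
    # sensitivity there by the factor `dominance`.
    masks = []
    for k in range(S.shape[0]):
        others = np.delete(S, k, axis=0).max(axis=0)
        masks.append(S[k] > dominance * others)
    return masks

def parameter_error_estimates(E, S, masks):
    # Attribute the prediction-error surface E to parameter k over
    # its signature via a least-squares fit E ≈ d_theta_k * S[k].
    est = []
    for k, m in enumerate(masks):
        if m.any():
            est.append(np.sum(E[m] * S[k][m]) / np.sum(S[k][m] ** 2))
        else:
            est.append(0.0)
    return est
```

When the sensitivities dominate in disjoint regions, each parameter's error is recovered independently from the shared error surface, which is the separability the abstract describes.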


Author(s):  
James R. McCusker ◽  
Todd Currier ◽  
Kourosh Danai

It was shown recently that parameter estimation can be performed directly in the time-scale domain by isolating regions wherein the prediction error can be attributed to the error of individual dynamic model parameters [1]. Based on these single-parameter attributions of the prediction error, individual parameter errors can be estimated for iterative parameter estimation. A benefit of relying entirely on the time-scale domain for parameter estimation is the added capacity for noise suppression. This paper explores this benefit by introducing a noise compensation method that estimates the distortion by noise of the prediction error in the time-scale domain and incorporates it as a confidence factor when estimating individual parameter errors. This method is shown to further improve the estimated parameters beyond the time-filtering and denoising techniques developed for time-based estimation.
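The confidence-factor idea can be sketched in a few lines: per-region estimates of a parameter's error are pooled with weights that shrink as the estimated noise distortion of that region grows. The 1/(1 + noise) form is an assumed shape for the confidence factor, not the paper's.

```python
def noise_weighted_error(estimates, noise_levels):
    # Combine per-region parameter-error estimates into one value,
    # weighting each region by an assumed confidence factor
    # 1 / (1 + noise) so that noisier (typically finer-scale)
    # regions contribute less.
    weights = [1.0 / (1.0 + nl) for nl in noise_levels]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```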


Automatica ◽  
2003 ◽  
Vol 39 (3) ◽  
pp. 403-415 ◽  
Author(s):  
Michel Gevers ◽  
Xavier Bombois ◽  
Benoı̂t Codrons ◽  
Gérard Scorletti ◽  
Brian D.O. Anderson

2020 ◽  
Vol 33 (12) ◽  
pp. 5155-5172
Author(s):  
Quentin Jamet ◽  
William K. Dewar ◽  
Nicolas Wienders ◽  
Bruno Deremble ◽  
Sally Close ◽  
...  

Mechanisms driving the North Atlantic meridional overturning circulation (AMOC) variability at low frequency are of central interest for accurate climate predictions. Although the subpolar gyre region has been identified as a preferred place for generating climate time-scale signals, their southward propagation remains under consideration, complicating the interpretation of the observed time series provided by the Rapid Climate Change–Meridional Overturning Circulation and Heatflux Array–Western Boundary Time Series (RAPID–MOCHA–WBTS) program. In this study, we aim at disentangling the respective contribution of the local atmospheric forcing from signals of remote origin for the subtropical low-frequency AMOC variability. We analyze for this a set of four ensembles of a regional (20°S–55°N), eddy-resolving (1/12°) North Atlantic oceanic configuration, where surface forcing and open boundary conditions are alternatively permuted from fully varying (realistic) to yearly repeating signals. Their analysis reveals the predominance of local, atmospherically forced signal at interannual time scales (2–10 years), whereas signals imposed by the boundaries are responsible for the decadal (10–30 years) part of the spectrum. Due to this marked time-scale separation, we show that, although the intergyre region exhibits peculiarities, most of the subtropical AMOC variability can be understood as a linear superposition of these two signals. Finally, we find that the decadal-scale, boundary-forced AMOC variability has both northern and southern origins, although the former dominates over the latter, including at the site of the RAPID array (26.5°N).


2012 ◽  
Vol 22 (03) ◽  
pp. 1250044
Author(s):  
Lance Ong-Siong Co Ting Keh ◽  
Ana Maria Aquino Chupungco ◽  
Jose Perico Esguerra

Three methods of nonlinear time series analysis, Lempel–Ziv complexity, prediction error, and covariance complexity, were employed to distinguish between the electroencephalograms (EEGs) of normal children, children with mild autism, and children with severe autism. Five EEG tracings per cluster of children aged three to seven, medically diagnosed with mild, severe, or no autism, were used in the analysis. A general trend seen was that the EEGs of children with mild autism were significantly different from those with severe or no autism. No significant difference was observed between normal children and children with severe autism. Among the three methods used, the one best able to distinguish between EEG tracings of children with mild and severe autism was found to be the prediction error, with a t-test confidence level above 98%.
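Of the three measures, Lempel–Ziv complexity is the most compact to state. A self-contained sketch follows, using the classic LZ76 phrase-counting definition; the median binarization is a common preprocessing step for EEG traces and is an assumption here, not necessarily the paper's exact scheme.

```python
import statistics

def lempel_ziv_complexity(s):
    # LZ76 complexity: the number of distinct phrases encountered
    # when scanning the binary sequence from left to right.
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Grow the current phrase while it already occurs in the
        # previously seen part of the sequence.
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def binarize(x):
    # Threshold samples about their median to obtain the binary
    # string that LZ complexity operates on.
    m = statistics.median(x)
    return "".join("1" if v > m else "0" for v in x)
```

For the textbook sequence "0001101001000101" this phrase count is 6, while a constant sequence collapses to a complexity of 2.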


2021 ◽  
Vol 11 (12) ◽  
pp. 5615
Author(s):  
Łukasz Sobolewski ◽  
Wiesław Miczulski

Ensuring the best possible stability of UTC(k) (a local time scale) and its compliance with the UTC scale (Coordinated Universal Time) requires predicting the [UTC-UTC(k)] deviations. This article presents the results of work on two methods of constructing time series (TS) for a neural network (NN) that increase the accuracy of UTC(k) prediction. In the first method, two TSs are prepared from the deviations determined according to the UTC scale at a 5-day interval. To improve the accuracy of predicting the deviations, the PCHIP interpolating function is applied to subsequent TSs, yielding TS elements at a 1-day interval. A limitation on further improvement of prediction accuracy for these TSs has been the overly long prediction horizon. The introduction of the additional UTC Rapid scale by the BIPM in 2012 makes it possible to shorten the prediction horizon, so the second method builds two TSs accordingly. Each of them consists of two subsets: the first is based on deviations determined according to the UTC scale, the second on the UTC Rapid scale. Testing the proposed TSs for predicting the deviations of the Polish Timescale by means of a GMDH-type NN shows that the best prediction accuracy is achieved for the TSs built according to the second method.
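The 5-day-to-1-day densification step can be sketched as follows, assuming SciPy's shape-preserving `PchipInterpolator` as the PCHIP implementation; the variable names and grid handling are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def densify_deviations(days, deviations, step=1):
    # Interpolate [UTC - UTC(k)] deviations sampled at a 5-day
    # interval onto a 1-day grid with shape-preserving PCHIP,
    # producing the denser TS elements described above.
    pchip = PchipInterpolator(days, deviations)
    dense_days = np.arange(days[0], days[-1] + step, step)
    return dense_days, pchip(dense_days)
```

PCHIP is monotonicity-preserving, so the interpolated deviations cannot overshoot between the 5-day anchor points the way an unconstrained cubic spline can.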


2021 ◽  
Author(s):  
Klaus B. Beckmann ◽  
Lennart Reimer

This monograph generalises, and extends, the classic dynamic models in conflict analysis (Lanchester 1916, Richardson 1919, Boulding 1962). Restrictions on parameters are relaxed to account for alliances and for peacekeeping. Incrementalist as well as stochastic versions of the model are reviewed. These extensions allow for a rich variety of patterns of dynamic conflict. Using Monte Carlo techniques as well as time series analyses based on GDELT data (for the Ethiopian–Eritrean war, 1998–2000), we also assess the empirical usefulness of the model. It turns out that linear dynamic models capture selected phases of the conflict quite well, offering a potential taxonomy for conflict dynamics. We also discuss a method for introducing a modicum of (bounded) rationality into models from this tradition.
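The Richardson model at the core of this tradition is a pair of coupled linear differential equations; a minimal sketch with Euler integration and purely illustrative coefficients (not fitted to the GDELT data) is:

```python
def richardson(x0, y0, a, b, m, n, g, h, steps=200, dt=0.01):
    # Euler integration of the classic two-party Richardson model:
    #   dx/dt = a*y - m*x + g
    #   dy/dt = b*x - n*y + h
    # a, b: reaction coefficients; m, n: fatigue terms;
    # g, h: grievance terms.
    x, y = x0, y0
    for _ in range(steps):
        dx = a * y - m * x + g
        dy = b * x - n * y + h
        x, y = x + dt * dx, y + dt * dy
    return x, y
```

When fatigue outweighs mutual reaction (m*n > a*b), the trajectories converge to a stable equilibrium; otherwise the linear model produces an unbounded arms race, which is one axis of the taxonomy of conflict patterns mentioned above.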

