Asymptotic Parameter Estimation via Implicit Averaging on a Nonlinear Extended System

2003 ◽  
Vol 125 (1) ◽  
pp. 11-18 ◽  
Author(s):  
Anindya Chatterjee ◽  
Joseph P. Cusumano

We present an observer for parameter estimation in nonlinear oscillating systems (periodic, quasiperiodic, or chaotic). The observer requires measurements of generalized displacements. It estimates generalized velocities on a fast time scale and unknown parameters on a slow time scale, with the time scale separation set by a small parameter ε. Parameter estimates converge asymptotically like e^(−εt), where t is time, provided the data are such that a certain averaged coefficient matrix is positive definite. The method is robust: small model errors and noise cause small estimation errors. The effects of zero-mean, high-frequency noise can be reduced by faster sampling. Several numerical examples show the effectiveness of the method.

Author(s):  
Anindya Chatterjee ◽  
Joseph P. Cusumano

Abstract: We present a new observer-based method for parameter estimation in nonlinear oscillatory mechanical systems where the unknown parameters appear linearly (they may each be multiplied by bounded and Lipschitz-continuous but otherwise arbitrary, possibly nonlinear, functions of the oscillatory state variables and time). The oscillations in the system may be periodic, quasiperiodic, or chaotic. The method is also applicable to systems where the parameters appear nonlinearly, provided a good initial estimate of the parameters is available. The observer requires measurements of displacements. It estimates velocities on a fast time scale, and the unknown parameters on a slow time scale. The fast and slow time scales are governed by a single small parameter ϵ. Using asymptotic methods including the method of averaging, it is shown that the observer’s estimates of the unknown parameters converge like e^(−ϵt), where t is time, provided the system response is such that the coefficient functions of the unknown parameters are not close to being linearly dependent. It is also shown that the method is robust in that small errors in the model cause small errors in the parameter estimates. A numerical example is provided to demonstrate the effectiveness of the method.
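As a rough illustration of the slow-time-scale estimation this abstract describes, the sketch below runs a gradient-flow parameter update with small gain ε on the equation error of a forced Duffing oscillator whose stiffness and damping enter linearly. It is not the authors' observer: for brevity the velocity and acceleration are treated as directly measured rather than estimated on the fast time scale, and all model names and numerical values are invented for the demo.

```python
import numpy as np

# Sketch of the slow parameter update only.  The unknowns theta = (k, c)
# enter linearly in the Duffing model
#     x'' = -k*x - c*x' - a*x**3 + F*cos(w*t),
# and the estimate follows the gradient flow driven by the equation error,
# which converges like e^(-eps*t) when the time average of phi*phi^T is
# positive definite (here phi = (-x, -x'), which the forced response excites).

def estimate_k_c(eps=0.1, dt=1e-3, T=200.0,
                 k=1.0, c=0.2, a=0.5, F=2.0, w=1.3):
    th = np.zeros(2)            # estimates of (k, c), started at zero
    x, v = 1.0, 0.0             # true state, advanced by semi-implicit Euler
    for i in range(int(T / dt)):
        t = i * dt
        acc = -k*x - c*v - a*x**3 + F*np.cos(w*t)        # "measured" acceleration
        phi = np.array([-x, -v])                         # regressor of the unknowns
        err = acc - (phi @ th - a*x**3 + F*np.cos(w*t))  # equation error
        th += dt * eps * phi * err                       # slow gradient update
        v += dt * acc                                    # advance the true system
        x += dt * v
    return th

print(estimate_k_c())   # approaches [1.0, 0.2]
```

The update is the continuous-time analogue of recursive least squares without a covariance matrix; the averaged positive-definiteness condition plays the role of persistent excitation.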


Author(s):  
James R. McCusker ◽  
Kourosh Danai

A method of parameter estimation was recently introduced that estimates each parameter of the dynamic model separately [1]. In this method, regions coined parameter signatures are identified in the time-scale domain, wherein the prediction error can be attributed to the error of a single model parameter. Based on these single-parameter associations, individual model parameters can then be estimated iteratively. Relative to nonlinear least squares, the proposed Parameter Signature Isolation Method (PARSIM) has two distinct attributes. First, PARSIM leaves the estimation of a parameter dormant when a parameter signature cannot be extracted for it. Second, it is independent of the contour of the prediction error. The first attribute can cause erroneous parameter estimates when the parameters are not adapted continually. The second attribute, on the other hand, can provide a safeguard against entrapment in local minima. These attributes motivate integrating PARSIM with a method, such as nonlinear least squares, that is less prone to dormancy of parameter estimates. The paper demonstrates the merit of the proposed integrated approach in application to a difficult estimation problem.
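The wavelet-based signature extraction of [1] is beyond a short sketch, but the dominance idea can be caricatured in the time domain: update each parameter only from regions where its sensitivity function dominates the others, and leave it dormant otherwise. Everything below (the exponential toy model, the 0.7 dominance threshold) is invented for illustration and is not the authors' algorithm.

```python
import numpy as np

# Toy stand-in for parameter signatures: fit y = th1*exp(-th2*t).
# Each parameter is corrected only where its sensitivity dyhat/dth_j
# accounts for most of the total sensitivity, mimicking PARSIM's
# single-parameter attribution of the prediction error.
t = np.linspace(0.0, 5.0, 200)
th_true = np.array([2.0, 0.8])
y = th_true[0] * np.exp(-th_true[1] * t)

th = np.array([1.0, 0.4])                     # initial estimates
for _ in range(50):
    yhat = th[0] * np.exp(-th[1] * t)
    err = y - yhat
    # sensitivity functions dyhat/dth_j
    s = np.stack([np.exp(-th[1] * t), -th[0] * t * np.exp(-th[1] * t)])
    dom = np.abs(s) / (np.abs(s).sum(axis=0) + 1e-12)
    for j in range(2):
        mask = dom[j] > 0.7                   # region dominated by parameter j
        if not mask.any():
            continue                          # no signature: leave it dormant
        # one-parameter least-squares correction restricted to its region
        th[j] += (err[mask] * s[j, mask]).sum() / (s[j, mask]**2).sum()

print(th)   # approaches [2.0, 0.8]
```

When a mask is empty the corresponding parameter simply is not updated, which is the dormancy behavior the abstract flags as the motivation for integrating PARSIM with nonlinear least squares.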


SPE Journal ◽  
2009 ◽  
Vol 15 (01) ◽  
pp. 18-30 ◽  
Author(s):  
J.R. Rommelse ◽ 
J.D. Jansen ◽ 
A.W. Heemink

Summary: The discrepancy between observed measurements and model predictions can be used to improve either the model output alone or both the model output and the parameters that underlie the model. In the case of parameter estimation, methods exist that can efficiently calculate the gradient of the discrepancy with respect to the parameters, assuming that there are no uncertainties in addition to the unknown parameters. In the case of general nonlinear parameter estimation, many different parameter sets exist that locally minimize the discrepancy. In this case, the gradient must be regularized before it can be used by gradient-based minimization algorithms. This article proposes a method for calculating a gradient in the presence of additional model errors through the use of representer expansions. The representers are data-driven basis functions that perform the regularization. All available data can be used during every iteration of the minimization scheme, as is the case in the classical representer method (RM). However, the method proposed here also allows adaptive selection of different portions of the data during different iterations to reduce computation time; the user now has the freedom to choose the number of basis functions and revise this choice at every iteration. The method also differs from the classical RM by the introduction of measurement representers in addition to state, adjoint, and parameter representers, and by the fact that no correction terms are calculated. Unlike the classical RM, where the minimization scheme is prescribed, the RM proposed here provides a gradient that can be used in any minimization algorithm. The applicability of the modified method is illustrated with a synthetic example to estimate permeability values in an inverted five-spot waterflooding problem.
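For the linear-Gaussian special case, the role of representers as data-driven basis functions can be sketched in a few lines: each measurement contributes one influence function (a column of C Gᵀ, where C is the prior covariance and G the measurement operator), and the estimate is a combination of these columns with coefficients solved from a small k×k system. The 1D "permeability" field, covariance kernel, and all numbers below are invented for illustration; the modified RM of the paper (measurement representers, adaptive data selection, general minimization algorithms) goes well beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 5
x = np.linspace(0.0, 1.0, n)
# smooth prior covariance (exponential kernel), standing in for a geological prior
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
# measurement operator: point samples of the field at k locations
G = np.zeros((k, n))
idx = np.array([5, 18, 30, 42, 55])
G[np.arange(k), idx] = 1.0

m_true = np.sin(2 * np.pi * x)                  # "true" parameter field
sig = 0.01                                      # measurement noise level
d = G @ m_true + sig * rng.standard_normal(k)

# representers: one data-driven influence function per measurement
R = C @ G.T                                     # n x k; column i is C g_i
# coefficients from the k x k representer system (prior mean taken as zero)
b = np.linalg.solve(G @ R + sig**2 * np.eye(k), d)
m_est = R @ b                                   # estimate lives in span of representers
```

The estimate is constrained to the k-dimensional span of the representers, which is exactly the regularization role the abstract describes: the data, not the user, pick the basis.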


1983 ◽  
Vol 105 (1) ◽  
pp. 50-52
Author(s):  
C. Batur

To identify the dynamics of mechanical systems, the usual practice is to assume a certain model structure and try to estimate the unknown parameters of this model on the basis of input-output observations. For mechanical systems operating under noisy industrial conditions, the number of unknowns of the problem exceeds the number of equations available. It is then inevitable that certain assumptions must be made on the unknown disturbances. This paper assumes that the only reliable feature of the disturbance is its independence of the input. This yields a set of assumptions in excess of the minimal requirements, and an endeavor has been made to exploit this excess to minimize the parameter estimation errors. The resulting algorithm is similar to that of the Two-Stage Least-Squares method [1].
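The instrumental-variable idea behind two-stage least squares can be shown on a toy example: if the disturbance is independent of the input, past inputs are valid instruments. The model and all numbers below are invented for the demo, not taken from the paper: a first-order ARX system with an MA(1) disturbance, which makes the lagged output correlated with the disturbance and biases ordinary least squares, while 2SLS stays consistent.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
u = rng.standard_normal(N)                        # input, independent of disturbance
w = 0.5 * rng.standard_normal(N)
e = w + 0.8 * np.concatenate(([0.0], w[:-1]))     # colored (MA(1)) disturbance

a_true, b_true = 0.7, 1.0
y = np.zeros(N)
for t in range(1, N):                             # y[t] = a*y[t-1] + b*u[t] + e[t]
    y[t] = a_true * y[t-1] + b_true * u[t] + e[t]

X = np.column_stack([y[1:-1], u[2:]])             # regressors [y[t-1], u[t]]
Z = np.column_stack([u[2:], u[1:-1], u[:-2]])     # instruments: current and past inputs
yv = y[2:]

ols = np.linalg.lstsq(X, yv, rcond=None)[0]       # biased: y[t-1] correlates with e[t]
# stage 1: project regressors onto the instruments; stage 2: regress y on the projection
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
tsls = np.linalg.lstsq(Xhat, yv, rcond=None)[0]

print(ols[0], tsls[0])   # OLS estimate of a is biased; 2SLS is close to 0.7
```

Using more instruments than regressors, as here, is the "excess of assumptions" the abstract mentions: the extra orthogonality conditions are combined to reduce the estimation error rather than discarded.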

