Applications of Transformation Theory for Nonlinear Integrable Systems to Linear Prediction Problems and Isospectral Deformations

1988 ◽  pp. 505-515 ◽  Author(s): Yoshimasa Nakamura


2016 ◽  Vol 8 (2) ◽  Author(s): Marc Wildi ◽  Tucker McElroy

Abstract
The classic model-based paradigm in time series analysis is rooted in the Wold decomposition of the data-generating process into an uncorrelated white noise process. By design, this universal decomposition is indifferent to particular features of a specific prediction problem (e.g., forecasting or signal extraction) or to features driven by the priorities of the data users. A single optimization principle (one-step-ahead forecast error minimization) is proposed by this classical paradigm to address a plethora of prediction problems. In contrast, this paper proposes to reconcile prediction problem structures, user priorities, and optimization principles into a general framework whose scope encompasses the classic approach. We introduce the linear prediction problem (LPP), which in turn yields an LPP objective function. One can then fit models via LPP minimization, or directly optimize the linear filter corresponding to the LPP, yielding the Direct Filter Approach. We provide theoretical results and practical algorithms for both applications of the LPP and discuss the merits and limitations of each. Our empirical illustrations focus on trend estimation (low-pass filtering) and real-time seasonal adjustment, i.e., constructing filters that depend only on present and past data.
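To make the idea of directly optimizing a real-time filter concrete, here is a minimal frequency-domain sketch in the spirit of the Direct Filter Approach: fit the coefficients of a causal (one-sided) filter by least squares against an ideal low-pass target, weighting frequencies by the periodogram. The function name, filter length, cutoff, and the use of the raw periodogram as weight are our illustrative choices, not the paper's algorithm.

```python
import numpy as np

def fit_causal_lowpass(x, L=12, cutoff=np.pi / 6):
    """Illustrative sketch only: fit causal filter coefficients b[0..L-1]
    minimizing the periodogram-weighted squared distance between the
    filter's transfer function and an ideal low-pass target, yielding a
    real-time (one-sided) approximation to a symmetric low-pass filter."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    freqs = 2 * np.pi * np.arange(n // 2 + 1) / n
    per = np.abs(np.fft.rfft(x)) ** 2 / n           # periodogram weights
    target = (freqs <= cutoff).astype(float)        # ideal low-pass transfer
    # transfer function of a causal filter: B(w) = sum_l b[l] * exp(-i*w*l)
    E = np.exp(-1j * np.outer(freqs, np.arange(L)))
    w = np.sqrt(per)
    A = np.vstack([(w[:, None] * E).real, (w[:, None] * E).imag])
    y = np.concatenate([w * target, np.zeros_like(w)])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
b = fit_causal_lowpass(x)
trend = np.convolve(x, b)[: len(x)]   # concurrent (real-time) trend estimate
```

Because the fitted filter uses only lags 0 through L-1, the trend estimate at time t depends only on present and past observations, matching the real-time constraint described in the abstract.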


1997 ◽  Vol 34 (2) ◽  pp. 458-476 ◽  Author(s): M. D. Ruiz-Medina ◽  M. J. Valderrama

We present a brief summary of some results related to deriving orthogonal representations of second-order random fields and their application in solving linear prediction problems. In the homogeneous and/or isotropic case, the spectral theory provides an orthogonal expansion in terms of spherical harmonics, called the spectral decomposition (Yadrenko 1983). A prediction formula based on this orthogonal representation is shown. Finally, an application of this formula to a real-data problem in geophysical prospecting is presented.
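The role of the harmonic expansion can be illustrated in a one-dimensional analogue (our construction, not the paper's): on the circle, an isotropic covariance depends only on angular distance and is therefore circulant, so the Fourier harmonics (the circle's analogue of spherical harmonics) decorrelate the field. This diagonalization is what makes prediction formulas based on the orthogonal representation tractable.

```python
import numpy as np

# 1D analogue of the spectral decomposition of an isotropic random field:
# an isotropic covariance on the circle is circulant, hence diagonalized
# by the discrete Fourier harmonics.
n = 64
theta = 2 * np.pi * np.arange(n) / n
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)            # angular (geodesic) distance
C = np.exp(-d)                              # isotropic covariance kernel
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix (harmonics)
D = F @ C @ F.conj().T                      # numerically diagonal
max_offdiag = np.max(np.abs(D - np.diag(np.diag(D))))
```

The off-diagonal entries of D vanish up to rounding error, i.e. the harmonic coefficients of the field are uncorrelated, which is the property exploited by the orthogonal prediction formula.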


2007 ◽  Vol 32 (1) ◽  pp. 6-23 ◽  Author(s): Shelby J. Haberman ◽  Jiahe Qian

Statistical prediction problems often involve both a direct estimate of a true score and covariates of this true score. Given the criterion of mean squared error, this study determines the best linear predictor of the true score given the direct estimate and the covariates. Results yield an extension of Kelley’s formula for estimation of the true score to cases in which covariates are present. The best linear predictor is a weighted average of the direct estimate and of the linear regression of the direct estimate onto the covariates. The weights depend on the reliability of the direct estimate and on the multiple correlation of the true score with the covariates. One application of the best linear predictor is to use essay features provided by computer analysis and an observed holistic score of an essay provided by a human rater to approximate the true score corresponding to the holistic score.
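The weighted-average structure described above can be sketched numerically. This is an illustrative sketch only, not the paper's exact result: here the weight is taken directly as the reliability of the direct estimate, whereas the paper's weight also involves the multiple correlation of the true score with the covariates; `kelley_predictor` is a name of our own choosing.

```python
import numpy as np

def kelley_predictor(x, reliability, covariates=None):
    """Shrink a direct (noisy) estimate toward an anchor: the group mean
    (classic Kelley formula) or, when covariates are supplied, the fitted
    linear regression of the direct estimate on the covariates."""
    x = np.asarray(x, dtype=float)
    if covariates is None:
        anchor = np.full_like(x, x.mean())   # classic Kelley: shrink to mean
    else:
        Z = np.column_stack([np.ones(len(x)), covariates])
        beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
        anchor = Z @ beta                    # regression of x on covariates
    return reliability * x + (1.0 - reliability) * anchor
```

With reliability 1 the direct estimate is returned unchanged; with reliability 0 the predictor collapses entirely to the regression anchor, mirroring the trade-off the abstract describes.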


1983 ◽  Vol 91 ◽  pp. 173-184 ◽  Author(s): Sheu-San Lee

We shall discuss in this paper some problems in non-linear prediction theory. An Ornstein-Uhlenbeck process {U(t)} is taken as the basic process, and we deal with stochastic processes X(t) obtained by transformations f satisfying a certain condition; the observed processes are expressed in the form X(t) = f(U(t)). Our main problem is to obtain the best non-linear predictor X̂(t, τ) for X(t + τ), τ > 0, assuming that X(s), s ≤ t, are observed. The predictor is therefore a non-linear functional of the values X(s), s ≤ t.
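Because the Ornstein-Uhlenbeck process is Markov, the best predictor of f(U(t + τ)) given the past reduces to a conditional expectation given U(t), which is a one-dimensional Gaussian integral. A minimal Monte Carlo sketch, assuming the standard parametrization dU = -U dt + √2 dW (stationary N(0, 1); the paper's setup may differ), with `ou_predictor` a hypothetical name:

```python
import numpy as np

def ou_predictor(f, u, tau, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[f(U(t+tau)) | U(t) = u] for the standard
    OU process dU = -U dt + sqrt(2) dW: the conditional law of U(t+tau)
    given U(t) = u is N(u * exp(-tau), 1 - exp(-2*tau))."""
    rng = np.random.default_rng(seed)
    mean = u * np.exp(-tau)
    std = np.sqrt(1.0 - np.exp(-2.0 * tau))
    samples = mean + std * rng.standard_normal(n_samples)
    return f(samples).mean()

# sanity check: for f the identity, the predictor is the conditional mean
pred = ou_predictor(lambda v: v, u=1.0, tau=0.5)
```

For a nonlinear f, the same routine returns the best (mean-square) predictor of f(U(t + τ)), which is in general a non-linear function of the observed value u.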

