On The Errors-In-Variables Model With Singular Dispersion Matrices

2014 ◽  
Vol 4 (1) ◽  
Author(s):  
B. Schaffrin ◽  
K. Snow ◽  
F. Neitzel

While the Errors-In-Variables (EIV) Model has been treated as a special case of the nonlinear Gauss-Helmert Model (GHM) for more than a century, it was only in 1980 that Golub and Van Loan showed how the Total Least-Squares (TLS) solution can be obtained from a certain minimum eigenvalue problem, assuming a particular relationship between the diagonal dispersion matrices for the observations involved in both the data vector and the data matrix. More general, but always nonsingular, dispersion matrices to generate the “properly weighted” TLS solution were only recently introduced by Schaffrin and Wieser, Fang, and Mahboub, among others. Here, the case of singular dispersion matrices is investigated, and algorithms are presented under a rank condition that indicates the existence of a unique TLS solution, thereby adding a new method to the existing literature on TLS adjustment. In contrast to more general “measurement error models,” the restriction to the EIV-Model still allows the derivation of (nonlinear) closed formulas for the weighted TLS solution. The practicality will be evidenced by an example from geodetic science, namely the over-determined similarity transformation between different coordinate estimates for a set of identical points.
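The minimum-eigenvalue route of Golub and Van Loan amounts to reading the TLS estimate off the right-singular vector belonging to the smallest singular value of the augmented data matrix. A minimal sketch of that generic, unweighted case (the function name `tls` and the identity-dispersion assumption are illustrative, not from the paper):

```python
import numpy as np

def tls(A, b):
    """Total least-squares solution of A x ~ b via the SVD of the
    augmented matrix [A | b] (Golub & Van Loan, 1980 route).

    The estimate comes from the right-singular vector that belongs
    to the smallest singular value of [A | b]."""
    m, n = A.shape
    C = np.column_stack([A, b])   # augmented data matrix [A | b]
    _, _, Vt = np.linalg.svd(C)   # singular values sorted descending
    v = Vt[-1]                    # right-singular vector of sigma_min
    if np.isclose(v[n], 0.0):
        raise ValueError("no generic TLS solution: last component vanishes")
    return -v[:n] / v[n]          # normalize so the b-component is -1
```

For consistent (noise-free) data the augmented matrix is rank-deficient and the routine recovers the exact parameter vector; the weighted and singular-dispersion cases treated in the paper require more than this plain SVD step.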

2012 ◽  
Vol 2 (2) ◽  
pp. 98-106 ◽  
Author(s):  
B. Schaffrin ◽  
F. Neitzel ◽  
S. Uzun ◽  
V. Mahboub

Modifying Cadzow's algorithm to generate the optimal TLS-solution for the structured EIV-Model of a similarity transformation

In 2005, Felus and Schaffrin discussed the problem of a Structured Errors-in-Variables (EIV) Model in the context of a parameter adjustment for a classical similarity transformation. Their proposal, however, to perform a Total Least-Squares (TLS) adjustment, followed by a Cadzow step to imprint the proper structure, would not always guarantee the identity of this solution with the optimal Structured TLS solution, particularly in view of the residuals. Here, an attempt will be made to modify the Cadzow step in order to generate the optimal solution with the desired structure as it would, for instance, also result from a traditional LS-adjustment within an iteratively linearized Gauss-Helmert Model (GHM). Incidentally, this solution coincides with the (properly) Weighted TLS solution which does not need a Cadzow step.
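For reference, the baseline (unmodified) Cadzow step alternates a low-rank projection with a structure projection. The sketch below shows that generic alternation for a Toeplitz structure; it is the classical step the abstract starts from, not the modification proposed in the paper:

```python
import numpy as np

def rank_project(M, r):
    """Nearest rank-r matrix in the Frobenius norm (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def toeplitz_project(M):
    """Nearest Toeplitz matrix in the Frobenius norm:
    replace each diagonal of M by its average."""
    m, n = M.shape
    T = np.zeros_like(M, dtype=float)
    for k in range(-(m - 1), n):
        mask = np.eye(m, n, k=k, dtype=bool)  # selects diagonal at offset k
        T[mask] = M[mask].mean()
    return T

def cadzow(M, r, iters=25):
    """Classical Cadzow iteration: alternate rank and structure steps."""
    for _ in range(iters):
        M = toeplitz_project(rank_project(M, r))
    return M
```

A matrix that is already Toeplitz and of the target rank is a fixed point of this alternation; the paper's point is that imprinting structure this way after a TLS fit does not, in general, reproduce the optimal Structured TLS residuals.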


2013 ◽  
Vol 462-463 ◽  
pp. 68-71
Author(s):  
Yu Ying Jiang ◽  
Qiang Liu

Measurement error models, or EV (errors-in-variables) models, have been widely studied in the field of statistics since 1877. According to the characteristics of the errors in the variables, EV models can mainly be divided into three types: the additive model, the general measurement error model, and the Berkson measurement error model. Research on EV models mainly focuses on model estimation, hypothesis testing, and model selection. In this paper, we conduct a systematic review of EV models in order to provide a reference for researchers and practitioners.


Mathematics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 89
Author(s):  
Michal Pešta

Linear relations, containing measurement errors in input and output data, are considered. Parameters of these so-called errors-in-variables models can change at some unknown moment. The aim is to test whether such an unknown change has occurred or not. For instance, detecting a change in trend for a randomly spaced time series is a special case of the investigated framework. The designed changepoint tests are shown to be consistent and involve neither nuisance parameters nor tuning constants, which makes the testing procedures effortlessly applicable. A changepoint estimator is also introduced and its consistency is proved. A boundary issue is avoided, meaning that the changepoint can be detected when being close to the extremities of the observation regime. As a theoretical basis for the developed methods, a weak invariance principle for the smallest singular value of the data matrix is provided, assuming weakly dependent and non-stationary errors. The results are presented in a simulation study, which demonstrates computational efficiency of the techniques. The completely data-driven tests are illustrated through problems coming from calibration and insurance; however, the methodology can be applied to other areas such as clinical measurements, dietary assessment, computational psychometrics, or environmental toxicology as manifested in the paper.


Geophysics ◽  
2018 ◽  
Vol 83 (6) ◽  
pp. V345-V357 ◽  
Author(s):  
Nasser Kazemi

Given the noise-corrupted seismic recordings, blind deconvolution simultaneously solves for the reflectivity series and the wavelet. Blind deconvolution can be formulated as a fully perturbed linear regression model and solved by the total least-squares (TLS) algorithm. However, this algorithm performs poorly when the data matrix is a structured matrix and ill-conditioned. In blind deconvolution, the data matrix has a Toeplitz structure and is ill-conditioned. Accordingly, we develop a fully automatic single-channel blind-deconvolution algorithm to improve the performance of the TLS method. The proposed algorithm, called Toeplitz-structured sparse TLS, has no assumptions about the phase of the wavelet. However, it assumes that the reflectivity series is sparse. In addition, to reduce the model space and the number of unknowns, the algorithm benefits from the structural constraints on the data matrix. Our algorithm is an alternating minimization method and uses a generalized cross validation function to define the optimum regularization parameter automatically. Because the generalized cross validation function does not require any prior information about the noise level of the data, our approach is suitable for real-world applications. We validate the proposed technique using synthetic examples. In noise-free data, we achieve a near-optimal recovery of the wavelet and the reflectivity series. For noise-corrupted data with a moderate signal-to-noise ratio (S/N), we found that the algorithm successfully accounts for the noise in its model, resulting in a satisfactory performance. However, the results deteriorate as the S/N and the sparsity level of the data are decreased. We also successfully apply the algorithm to real data. The real-data examples come from 2D and 3D data sets of the Teapot Dome seismic survey.
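The Toeplitz structure the abstract refers to comes from writing the convolution of a wavelet with a reflectivity series as a matrix-vector product: the data matrix consists of shifted copies of the wavelet. A minimal numpy sketch (the helper name `conv_matrix` is illustrative):

```python
import numpy as np

def conv_matrix(w, n):
    """Toeplitz (full-convolution) matrix C such that
    C @ r == np.convolve(w, r) for any reflectivity r of length n."""
    m = len(w) + n - 1          # length of the full convolution output
    C = np.zeros((m, n))
    for j in range(n):
        C[j:j + len(w), j] = w  # column j is the wavelet shifted down by j
    return C
```

In blind deconvolution both the entries of this matrix (the wavelet) and the reflectivity are unknown; the structural constraint is that the matrix must remain Toeplitz, which plain TLS ignores and the proposed algorithm enforces.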


2014 ◽  
Vol 30 (6) ◽  
pp. 1207-1246 ◽  
Author(s):  
Victoria Zinde-Walsh

This paper considers convolution equations that arise from problems such as measurement error and nonparametric regression with errors in variables with independence conditions. The equations are examined in spaces of generalized functions to account for possible singularities; this makes it possible to consider densities for arbitrary and not only absolutely continuous distributions, and to operate with Fourier transforms for polynomially growing regression functions. Results are derived for identification and well-posedness in the topology of generalized functions for the deconvolution problem and for some regression models. Conditions for consistency of plug-in estimation for these models are provided.


Author(s):  
A. F. Emery

Most practitioners of inverse problems use least squares or maximum likelihood estimation (MLE) to estimate parameters, under the assumption that the errors are normally distributed. When there are errors both in the measured responses and in the independent variables, or in the model itself, more information is needed and these approaches may not lead to the best estimates. A review of errors-in-variables (EIV) models shows that other approaches are necessary and that, in some cases, Bayesian inference is to be preferred.
