Optical Flow Estimation Using Total Least Squares Variants

2017 · Vol 10 (3) · pp. 563-579
Author(s): Maria A. de Jesus, Vania V. Estrela

The problem of recursively approximating motion resulting from the Optical Flow (OF) in video through Total Least Squares (TLS) techniques is addressed. The TLS method solves an inconsistent system Gu = z in which both G and z are in error, owing to errors in the temporal/spatial derivative estimates and to nonlinearity, whereas the Ordinary Least Squares (OLS) model assumes noise only in z. Sources of difficulty include the non-stationarity of the field, the ill-posedness of the problem, and the presence of noise in the data. Three ways of applying TLS to the problem, each under different noise assumptions, are examined. First, the classical TLS (cTLS) is introduced, where the entries of the error matrices in each row of the augmented matrix [G; z] have zero mean and the same standard deviation. Next, the Generalized Total Least Squares (GTLS) is defined to provide a more stable solution, although it still has shortcomings. In the Generalized Scaled TLS (GSTLS), G and z are tainted by different sources of additive zero-mean Gaussian noise, and [G; z] is scaled by nonsingular matrices D and E, so that the errors of D[G; z]E are i.i.d. with zero mean and a diagonal covariance matrix. The scaling is computed from prior knowledge of the error distribution to improve on the GTLS estimate. For moderate levels of additive noise, GSTLS outperforms both the OLS and GTLS approaches. Although every TLS variant requires more computation than OLS, TLS remains practical provided the data matrix is properly scaled.
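As a rough illustration of the classical TLS step (not taken from the paper), the sketch below reads the motion estimate off the right singular vector belonging to the smallest singular value of the augmented matrix [G; z]; the matrices G and z and the noise levels are invented stand-ins for the derivative data.

```python
import numpy as np

def tls_solve(G, z):
    """Classical TLS for G u ~ z with errors in both G and z.

    The solution is read off the right singular vector that belongs to
    the smallest singular value of the augmented matrix [G | z].
    """
    n = G.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([G, z]))
    v = Vt[-1]                                 # vector of smallest sigma
    if np.isclose(v[n], 0.0):
        raise np.linalg.LinAlgError("nongeneric TLS problem")
    return -v[:n] / v[n]

# Synthetic stand-in for an OF system: both sides carry noise.
rng = np.random.default_rng(0)
G_true = rng.normal(size=(100, 2))             # spatial-derivative terms
u_true = np.array([0.5, -1.2])                 # "true" motion vector
G = G_true + 0.05 * rng.normal(size=G_true.shape)
z = G_true @ u_true + 0.05 * rng.normal(size=100)

u_ols = np.linalg.lstsq(G, z, rcond=None)[0]   # assumes noise in z only
u_tls = tls_solve(G, z)                        # allows noise in G and z
print("OLS:", u_ols, "TLS:", u_tls)
```

With noise in G, the TLS estimate is typically closer to u_true than the OLS one, mirroring the comparison made in the abstract.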

2016 · Vol 3 (2) · pp. 87
Author(s): Richard Fiifi Annan, Yao Yevenyo Ziggah, John Ayer, Christian Amans Odutola

Spirit levelling has been the traditional means of determining Reduced Levels (RL’s) of points by most surveyors.  The assertion that the level instrument is the best instrument for determining elevations of points needs to be reviewed; this is because technological advancement is making the total station a very reliable tool for determining reduced levels of points. In order to achieve the objective of this research, reduced levels of stations were determined by a spirit level and a total station instrument. Ordinary Least Squares (OLS) and Total Least Squares (TLS) techniques were then applied to adjust the level network. Unlike OLS which considers errors only in the observation matrix, and adjusts observations in order to make the sum of its residuals minimum, TLS considers errors in both the observation matrix and the data matrix, thereby minimising the errors in both matrices. This was evident from the results obtained in this study such that OLS approximated the adjusted reduced levels, which compromises accuracy, whereas the opposite happened in the TLS adjustment results. Therefore, TLS was preferred to OLS and Analysis of Variance (ANOVA) was performed on the preferred TLS solution and the RL’s from the total station in order to ascertain how accurate the total station can be relative to the spirit level.
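As a rough sketch of the two adjustments (the network shape, benchmark, and height differences below are invented, not the paper's data), a small level network can be adjusted by OLS and by SVD-based TLS side by side:

```python
import numpy as np

# Toy level network: benchmark A fixed at 100.000 m, and
# x = heights of points B, C, D above the benchmark.
# Each row maps an observed height difference to the unknowns.
A = np.array([[ 1.0,  0.0, 0.0],    # A -> B
              [-1.0,  1.0, 0.0],    # B -> C
              [ 0.0, -1.0, 1.0],    # C -> D
              [ 0.0,  0.0, 1.0],    # A -> D
              [-1.0,  0.0, 1.0]])   # B -> D
L = np.array([1.503, 0.751, -0.502, 1.748, 0.247])  # observed dh (m)

# OLS: errors confined to the observation vector L.
x_ols = np.linalg.lstsq(A, L, rcond=None)[0]

# TLS: errors in both A and L, via the SVD of the augmented matrix [A | L].
n = A.shape[1]
_, _, Vt = np.linalg.svd(np.column_stack([A, L]))
v = Vt[-1]
x_tls = -v[:n] / v[n]

print("OLS RLs:", np.round(100.0 + x_ols, 4))
print("TLS RLs:", np.round(100.0 + x_tls, 4))
```

Note that applying TLS here treats even the ±1 incidence coefficients of the level network as error-prone, which is why sensible scaling of the augmented matrix matters in practice.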


Geophysics · 2018 · Vol 83 (6) · pp. V345-V357
Author(s): Nasser Kazemi

Given noise-corrupted seismic recordings, blind deconvolution solves simultaneously for the reflectivity series and the wavelet. Blind deconvolution can be formulated as a fully perturbed linear regression model and solved by the total least-squares (TLS) algorithm. However, this algorithm performs poorly when the data matrix is structured and ill-conditioned, and in blind deconvolution the data matrix has a Toeplitz structure and is ill-conditioned. Accordingly, we develop a fully automatic single-channel blind-deconvolution algorithm that improves on the TLS method. The proposed algorithm, called Toeplitz-structured sparse TLS, makes no assumption about the phase of the wavelet but does assume that the reflectivity series is sparse. In addition, to reduce the model space and the number of unknowns, the algorithm benefits from the structural constraints on the data matrix. Our algorithm is an alternating minimization method and uses a generalized cross-validation function to set the optimum regularization parameter automatically. Because the generalized cross-validation function requires no prior information about the noise level of the data, our approach is suitable for real-world applications. We validate the proposed technique on synthetic examples. On noise-free data, we achieve near-optimal recovery of the wavelet and the reflectivity series. On noise-corrupted data with a moderate signal-to-noise ratio (S/N), the algorithm successfully accounts for the noise in its model, resulting in satisfactory performance; the results deteriorate, however, as the S/N and the sparsity of the data decrease. We also successfully apply the algorithm to real data from 2D and 3D data sets of the Teapot Dome seismic survey.
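The alternating-minimization idea can be sketched schematically as follows. This is not the authors' algorithm: a fixed l1 penalty stands in for the GCV-chosen regularization parameter, and plain ISTA/least-squares updates stand in for the Toeplitz-structured sparse TLS steps.

```python
import numpy as np

def conv_matrix(h, n):
    """Toeplitz-structured matrix C with C @ x == np.convolve(h, x)."""
    C = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        C[j:j + len(h), j] = h
    return C

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_deconv(d, nw, nr, lam=0.05, iters=100):
    """Alternate between a sparse reflectivity update and a wavelet fit."""
    w = np.zeros(nw); w[0] = 1.0                 # crude spike initialization
    r = np.zeros(nr)
    for _ in range(iters):
        # Reflectivity: one ISTA step on ||conv(w, r) - d||^2 + lam*||r||_1.
        W = conv_matrix(w, nr)
        step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)
        r = soft(r - step * W.T @ (W @ r - d), step * lam)
        # Wavelet: ordinary least squares given the current reflectivity.
        R = conv_matrix(r, nw)
        w = np.linalg.lstsq(R, d, rcond=None)[0]
        # Fix the scale ambiguity inherent in blind deconvolution.
        s = np.linalg.norm(w) + 1e-12
        w, r = w / s, r * s
    return w, r

# Synthetic test: sparse reflectivity convolved with a smooth wavelet.
rng = np.random.default_rng(1)
nw, nr = 15, 120
r_true = np.zeros(nr)
r_true[rng.choice(nr, size=8, replace=False)] = rng.normal(size=8)
w_true = np.exp(-0.5 * ((np.arange(nw) - 5) / 2.0) ** 2)
d = np.convolve(w_true, r_true) + 0.01 * rng.normal(size=nw + nr - 1)
w_est, r_est = blind_deconv(d, nw, nr)
```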


2004 · Vol 127 (1) · pp. 50-56
Author(s): F. Xi, D. Nancoo, G. Knopf

In this paper a method is proposed to register three-dimensional line laser scanning data acquired from two different viewpoints. The method determines the transformation between the two views from three-point position measurements obtained by scanning three reference balls. Because the laser scanning data and the sphere fitting both contain errors, the two sets of three-point position measurements are both subject to error. For this reason, total least-squares methods are applied to determine the transformation, since they account for errors in both the inputs and the outputs. Simulations and experiments were carried out to compare three methods: the ordinary least-squares method, the unconstrained total least-squares method, and the constrained total least-squares method. The constrained total least-squares method is found to give the most accurate results.
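For the rotation-constrained case, one standard way to enforce that the estimated transformation is a proper rotation is the SVD-based orthogonal Procrustes (Kabsch) solution sketched below; the paper's constrained total least-squares formulation may differ in detail, so treat this as a generic three-point registration example.

```python
import numpy as np

def rigid_transform(P, Q):
    """Rotation R and translation t such that Q ~ R @ P + t.

    P, Q are 3xN arrays of corresponding points (here, three fitted
    ball centres seen from two viewpoints).  The SVD-based Kabsch
    solution enforces the constraint that R is a proper rotation.
    """
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (Q - cQ) @ (P - cP).T                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                             # det(R) = +1 guaranteed
    t = cQ - R @ cP
    return R, t

# Three reference-ball centres in view 1 (illustrative values).
rng = np.random.default_rng(2)
P = rng.normal(size=(3, 3))
a = 0.3                                        # rotation about z (rad)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([[0.1], [0.4], [-0.2]])
Q = R_true @ P + t_true + 1e-3 * rng.normal(size=(3, 3))  # view 2, noisy
R_est, t_est = rigid_transform(P, Q)
```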


Mathematics · 2020 · Vol 8 (6) · pp. 971
Author(s): Burkhard Schaffrin

In regression analysis, a linear (or linearized) Gauss-Markov Model (GMM) is often used to describe the relationship between certain unknown parameters and the measurements taken to learn about them. Once more data have been collected than are needed to determine a unique solution for the parameters, an estimation technique such as ‘Least-Squares adjustment’ must be applied, which turns out to be optimal under a wide range of criteria. In this context, the matrix connecting the parameters with the observations is considered fully known, and the parameter vector fully unknown. This, however, is not always the reality. Two modifications of the GMM have therefore been considered in particular. First, ‘stochastic prior information’ (p. i.) was added on the parameters, creating the – still linear – Random Effects Model (REM), in which the optimal determination of the parameters (random effects) is based on ‘Least-Squares collocation’ and shows higher precision as long as the p. i. is adequate (Wallace test). Second, the coefficient matrix was allowed to contain observed elements, leading to the – now nonlinear – Errors-In-Variables (EIV) Model. If iterative linearization is not used, the optimal estimates for the parameters are obtained by ‘Total Least-Squares adjustment’, with generally lower, but perhaps more realistic, precision. Here the two concepts are combined, leading to the (nonlinear) ‘EIV-Model with p. i.’, for which an optimal estimation (resp. prediction) technique is developed under the name ‘Total Least-Squares collocation’. At this stage, however, the covariance matrix of the data matrix – in vector form – is still assumed to have a Kronecker product structure.
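In generic notation (the symbols are illustrative and sign conventions for the error matrix vary across the literature), the combined model can be written roughly as:

```latex
% Gauss-Markov Model (GMM):   y = A\xi + e
% The REM adds stochastic prior information on the parameters;
% the EIV model lets the coefficient matrix be observed with error.
\begin{align*}
  y   &= (A - E_A)\,\xi + e            && \text{(EIV observation equations)}\\
  \xi &= \mu_\xi + e_\xi               && \text{(stochastic p.\,i.)}\\
  \operatorname{vec}(E_A) &\sim \bigl(0,\; \Sigma_0 \otimes Q\bigr)
      && \text{(Kronecker product structure)}
\end{align*}
```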

