On: “The Complex Wiener Filter” by Sven Treitel (GEOPHYSICS, April 1974, p. 169–173)

Geophysics, 1975, Vol 40 (2), pp. 358-359
Author(s): A. J. Berkhout

In this publication Treitel derives the complex normal equations and proposes a solution method (Robinson's block-Toeplitz algorithm). With respect to these results, I would like to draw attention to the paper by Berkhout (1973). In Appendix B of that paper the complex normal equations were also derived: \[\sum_{m=0}^{N} f(m\,\Delta t)\,\phi_{xx}\!\left[(n-m)\,\Delta t\right] = \phi_{zx}(n\,\Delta t) \tag{1a}\] for n = 0, 1, ⋯, N. In (1a), \(\phi_{xx}\) is the autocorrelation function of a complex input signal x(t), f(t) is the complex least-squares filter of duration (N + 1)Δt, \(\phi_{zx}\) is the crosscorrelation of the complex desired output z(t) with the input, and Δt represents the sampling interval. Note that in matrix notation expression (1a) equals expression (3) of Treitel's paper. In the case of least-squares inverse filtering, (1a) takes on a different form.
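As an illustration of how such a Hermitian Toeplitz system can be solved numerically, the sketch below estimates the correlations and solves the complex normal equations with a dense solver; the function name and the biased correlation estimates are illustrative choices of this note, not code from either paper.

```python
import numpy as np

def complex_wiener_filter(x, z, N):
    """Complex least-squares (Wiener) filter of length N + 1 from the
    complex normal equations: sum_m f(m) phi_xx(n - m) = phi_zx(n)."""
    L = len(x)
    # biased correlation estimates for lags 0..N
    r = np.array([np.vdot(x[:L - k], x[k:]) for k in range(N + 1)])  # phi_xx
    g = np.array([np.vdot(x[:L - k], z[k:]) for k in range(N + 1)])  # phi_zx
    # Hermitian Toeplitz matrix of autocorrelations: R[n, m] = phi_xx(n - m)
    R = np.empty((N + 1, N + 1), dtype=complex)
    for n in range(N + 1):
        for m in range(N + 1):
            R[n, m] = r[n - m] if n >= m else np.conj(r[m - n])
    return np.linalg.solve(R, g)
```

A block-Toeplitz (Levinson-type) solver would exploit the matrix structure; the dense solve above is only meant to show that the system being solved is the one stated in (1a).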

Geophysics, 1982, Vol 47 (2), pp. 244-256
Author(s): Yutaka Murakami, Toshihiro Uchida

The linear filter method is a powerful tool for estimating convolution integrals that are encountered in many geophysical problems. Apart from the sampling interval, the accuracy of the method is affected by two factors. Because of the slowly decaying oscillations in the filter tail, the tail must be truncated at some point; and because of the extensive numerical calculations required to determine the filter coefficients, the resulting filter cannot be free from round-off errors. The Wiener filter analogy, pointed out by Koefoed and Dirks (1979), gives a straightforward and efficient answer to this problem. We show that the Wiener filter technique with iterative application of Levinson's algorithm not only enhances the accuracy of the filter coefficients, but also greatly reduces the oscillations in the filter tail. Because the tail decays very rapidly, there is no need to truncate it.
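For readers unfamiliar with the Levinson recursion, the toy comparison below (hypothetical autocorrelation values; assumes SciPy is available) shows it solving a symmetric positive-definite Toeplitz system of the kind that arises in Wiener filter design, and agreeing with a generic dense solver:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# hypothetical autocorrelation sequence (AR(1)-like, positive definite)
r = 2.0 * 0.6 ** np.arange(8)            # first column of the Toeplitz matrix
g = 0.5 ** np.arange(8)                  # right-hand side (crosscorrelation)

f_lev = solve_toeplitz(r, g)             # Levinson-Durbin recursion, O(N^2)
f_gen = np.linalg.solve(toeplitz(r), g)  # generic dense solver, O(N^3)
```

Beyond the operation count, the recursion touches only the N distinct autocorrelation lags rather than the full N-by-N matrix, which is what makes iterative refinement of the filter coefficients cheap.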


Geophysics, 1966, Vol 31 (5), pp. 917-926
Author(s): Wayne T. Ford, James H. Hearne

Suppose we are given the autocorrelation function of a certain unknown sampled signal. Although a number of different signals might produce the given autocorrelation function, only one of these is minimum‐delay. Denoting this minimum‐delay unknown signal by the matrix K, the given autocorrelation may be written in the form K′K where K′ is the transpose of K. It is desired to determine approximately the inverse of this unknown signal K; that is, we wish to determine a vector X so that KX is as close to B′=(1, 0, 0, ⋯, 0) as possible in a least‐squares sense. If K were known, Rice shows that X = (K′K)⁻¹K′B. At first glance, the above formula appears useless as K′ is unknown. However, although K′ is indeed unknown, K′B has the form K′B=(c, 0, 0, ⋯, 0)′ where the scalar c simply plays the role of a scale factor. Thus, we determine X by simply selecting a convenient multiple of the first column of the inverse of the known matrix K′K. Although we do not present the details of the computer programming involved in the above calculation, we do present some simple examples to illustrate the process.
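A small numerical sketch of this recipe (the wavelet below is a hypothetical minimum-delay example, known here only so the result can be checked; the method itself uses nothing but its autocorrelation):

```python
import numpy as np
from scipy.linalg import toeplitz

k = np.array([1.0, -0.5, 0.25])           # hypothetical minimum-delay signal
n = 20                                    # length of the inverse filter X
# lags of the given autocorrelation (zero beyond the signal length)
acf = np.correlate(k, k, mode="full")[len(k) - 1:]
col = np.zeros(n); col[:len(acf)] = acf
R = toeplitz(col)                         # this is the known matrix K'K
e0 = np.zeros(n); e0[0] = 1.0
X = np.linalg.solve(R, e0)                # multiple of 1st column of (K'K)^-1
X /= np.convolve(k, X)[0]                 # absorb the scale factor c
out = np.convolve(k, X)[:n]               # KX, which should approximate B
```

With a minimum-delay wavelet the convolution `out` is close to (1, 0, 0, ⋯, 0), confirming that the scaled first column of (K′K)⁻¹ is an approximate inverse.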


Author(s): Galina Vasil’evna Troshina, Alexander Aleksandrovich Voevoda

It is suggested to use a system model running in real time for iterative parameter estimation. This makes it possible to select a suitable input signal and to tune the object parameters. The object was modeled in the MATLAB environment, both for the case in which the system is unaffected by measurement noise and for the case in which the object is subject to Gaussian noise. The superposition of two meanders (square waves) with different periods and unit amplitude is used as the input signal. The model has a three-layer structure in MATLAB. The top layer contains the units that simulate the input signal, the object itself, the noise, and the parameter estimation; the second and third layers implement the iterative least-squares method. Plots of the input and output signals, with and without noise, are shown, and the results of parameter estimation for a static object are given. According to the modeling results, the algorithm works well even in the presence of significant measurement noise. To verify the correctness of the algorithm, auxiliary computations were performed and plots of the gain used in the parameter estimation procedure were constructed. The initial conditions necessary for the iterative least-squares method are specified. Understanding the operating principles of this algorithm is the basis for its subsequent use in parameter estimation of multi-channel dynamic objects.
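The iterative (recursive) least-squares update described above can be sketched generically; this is the standard textbook recursion with a gain vector and covariance update, not the authors' MATLAB model, and the two-parameter static object with random regressors is a made-up example:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares iteration.
    theta: current estimate, P: covariance-like matrix,
    phi: regressor vector, y: new measurement, lam: forgetting factor."""
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta, P

# estimate a static object y = 2*u1 - 1.5*u2 under Gaussian measurement noise
rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.5])
theta = np.zeros(2)
P = 1e4 * np.eye(2)                        # large initial P: vague prior
for _ in range(500):
    phi = rng.standard_normal(2)
    y = phi @ true_theta + 0.1 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
```

The large initial P is the entry condition the abstract alludes to: it lets the first few measurements dominate the estimate, after which the gain k shrinks as P contracts.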


1977, Vol 99 (2), pp. 345-352
Author(s): A. T. Chatas

The purpose of this paper is to indicate a method for estimating values of specified aquifer parameters from an investigation of the reservoir performance of an associated oilfield. To achieve this objective an analysis was made of the simultaneous solution of the material-balance and diffusivity equations, followed by an application of the method of least squares. Three analytical functions evolved, which in dimensionless form were numerically evaluated by computer and tabulated herein. Application of the proffered method requires the simultaneous solution of the three normal equations developed in the paper.
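As a schematic of the final step (the simultaneous solution of three normal equations), the sketch below fits three coefficients to data built from three stand-in basis functions; the functions and numbers are hypothetical, not the paper's tabulated dimensionless functions:

```python
import numpy as np

# stand-in for three tabulated analytical functions evaluated at the
# observation times; the true model is d = a1*F1 + a2*F2 + a3*F3
t = np.linspace(1.0, 10.0, 25)
F = np.column_stack([np.log(t), np.sqrt(t), 1.0 / t])
a_true = np.array([3.0, 1.5, -2.0])
d = F @ a_true                             # synthetic performance data

# the three normal equations: (F'F) a = F'd
a = np.linalg.solve(F.T @ F, F.T @ d)
```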


2009, pp. 99-120
Author(s): John M. Lewis, S. Lakshmivarahan, Sudarshan Dhall

1935, Vol 54, pp. 12-16
Author(s): A. C. Aitken

This paper concludes the study of fitting polynomials by Least Squares, treated in two previous papers. The problem being concerned with the minimum of a positive definite quadratic form, it makes for conciseness to use matrix notation. We shall therefore adopt the following conventions:—The n values of the variable x, of the data u_0, u_1, …, u_{n−1}, of certain polynomials q_r(x) entering into the solution, and so on, will be regarded compositely as vectors. They will be imagined as having their components or elements disposed in column array, but when written in full will be written horizontally, to save space, enclosed by curled brackets. Row vectors, when written out in full, will be enclosed by square brackets. In the shorter notation we shall write, for example, u, x for column vectors, u′, x′ for the row vectors obtained by transposition. The vectors occurring in the problem will be the following:—
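In modern notation, the benefit of polynomials q_r(x) orthogonal over the data points is that the normal matrix becomes diagonal, so each coefficient is found independently. A sketch of this idea (using a QR factorization to build the orthogonal basis, rather than Aitken's recurrences):

```python
import numpy as np

x = np.arange(12, dtype=float)           # equally spaced abscissae
u = 3.0 - 2.0 * x + 0.25 * x**2          # data vector (an exact quadratic)

V = np.vander(x, 3, increasing=True)     # columns 1, x, x^2
Q, _ = np.linalg.qr(V)                   # orthonormal columns q_r(x)
c = Q.T @ u                              # normal matrix Q'Q = I, so no solve
fit = Q @ c                              # reconstructed least-squares fit
```

Because Q′Q is the identity, each c_r is a simple inner product of a basis polynomial with the data, which is exactly the conciseness the matrix notation is after.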


1965, Vol 19 (1), pp. 78-83
Author(s): Peter Wilson

Several methods for solving normal equations in least squares solutions are explained and the variance-covariance matrix is developed from the law of error propagation.
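A minimal sketch of both steps, with made-up observations: the parameters come from the normal equations, and the law of error propagation applied to x̂ = (A′A)⁻¹A′l gives the variance-covariance matrix Cov(x̂) = σ²(A′A)⁻¹:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 5.0],
              [1.0, 7.0]])               # design matrix (intercept, slope)
l = np.array([3.1, 4.9, 9.2, 12.8])     # observation vector

N = A.T @ A                              # normal-equation matrix
x_hat = np.linalg.solve(N, A.T @ l)      # least-squares parameters
v = l - A @ x_hat                        # residuals
sigma2 = (v @ v) / (len(l) - 2)          # a-posteriori variance factor
cov = sigma2 * np.linalg.inv(N)          # variance-covariance matrix
```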


1983, Vol 37 (4), pp. 225-233
Author(s): J. A. R. Blais

Givens transformations provide a direct method for solving linear least-squares estimation problems without forming the normal equations. This approach has been shown to be particularly advantageous in recursive situations because of characteristics related to data storage requirements, numerical stability and computational efficiency. The following discussion will concentrate on the problem of updating least-squares parameter and error estimates using Givens transformations. Special attention will be given to photogrammetric and geodetic applications.
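A sketch of the recursive idea, assuming a simple row-wise scheme (the function below is illustrative, not Blais's formulation): each new observation row is rotated into the triangular factor R by Givens rotations, so the normal equations are never formed, and the current estimate is available at any time by back-substitution.

```python
import numpy as np

def givens_update(R, d, a, y):
    """Fold one new observation (regressor row a, measurement y) into the
    triangular factor R and transformed right-hand side d using Givens
    rotations; no normal equations are formed."""
    a = a.astype(float).copy(); y = float(y)
    n = len(a)
    for i in range(n):
        r = np.hypot(R[i, i], a[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, a[i] / r
        Ri, ai = R[i, i:].copy(), a[i:].copy()
        R[i, i:] = c * Ri + s * ai       # rotated row i of R
        a[i:] = -s * Ri + c * ai         # a[i] is annihilated
        d[i], y = c * d[i] + s * y, -s * d[i] + c * y
    return R, d

# accumulate observations of the parameters (2, -1) one row at a time
n = 2
R = np.zeros((n, n)); d = np.zeros(n)
rng = np.random.default_rng(2)
for _ in range(50):
    a = rng.standard_normal(n)
    y = a @ np.array([2.0, -1.0])
    R, d = givens_update(R, d, a, y)
x = np.linalg.solve(R, d)                # back-substitution gives the estimate
```

Only R and d need to be stored between updates, which is the storage advantage the discussion refers to; avoiding A′A also avoids squaring the condition number.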

