THE USE OF A LEAST SQUARES METHOD FOR THE INTERPRETATION OF DATA FROM SEISMIC SURVEYS

Geophysics ◽  
1957 ◽  
Vol 22 (1) ◽  
pp. 9-21 ◽  
Author(s):  
A. E. Scheidegger ◽  
P. L. Willmore

During large‐scale seismic surveys it is often impossible to arrange shot points and seismometers in a simple pattern, so that the data cannot be treated as simply as those of small‐scale prospecting arrays. It is shown that the problem of reducing seismic observations from m shot points and n seismometers (where there is no simple pattern of arrangement) is equivalent to solving (m+n) normal equations with (m+n) unknowns. These normal equations are linear, and the matrix of their coefficients is symmetric. The problem of inverting that matrix is solved here by the calculus of “Cracovians,” mathematical entities similar to matrices. When all the shots have been observed at all the seismometers, the solution can even be given in general form. Otherwise, a certain amount of computation is necessary. An example is given.
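
A minimal numerical sketch of the reduction described above (not the paper's Cracovian calculus): travel times from m shot points and n seismometers are modelled as t_ij ≈ a_i + b_j, and the symmetric (m+n)×(m+n) normal equations are formed and solved. All names and data are illustrative assumptions.

```python
import numpy as np

# Sketch: reduce travel-time observations t_ij from m shot points and n
# seismometers with an additive model t_ij ≈ a_i + b_j (shot term + station
# term). Only some (shot, station) pairs are observed, as in the general
# case without a simple pattern.
m, n = 3, 4
rng = np.random.default_rng(0)
true_a, true_b = rng.normal(size=m), rng.normal(size=n)
pairs = [(i, j) for i in range(m) for j in range(n) if (i + j) % 2 == 0]
t = np.array([true_a[i] + true_b[j] for i, j in pairs])

# Design matrix: one column per unknown (m shot terms, then n station terms).
A = np.zeros((len(pairs), m + n))
for row, (i, j) in enumerate(pairs):
    A[row, i] = 1.0
    A[row, m + j] = 1.0

# Normal equations (A^T A) x = A^T t: an (m+n) x (m+n) symmetric linear system.
N = A.T @ A
rhs = A.T @ t
# The model is rank-deficient by one (a constant can be shifted between the
# a_i and the b_j), so the pseudo-inverse returns the minimum-norm solution.
x = np.linalg.pinv(N) @ rhs
print(x[:m], x[m:])
```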

Author(s):  
Andrii Sohor ◽  
Markiian Sohor ◽  

The most reliable method for forming the linear equations of the least-squares principle, which can be used to solve ill-posed geodetic problems, is based on a matrix factorization called the singular value decomposition. Other methods require less machine time and memory, but they are less effective at accounting for errors in the source data, rounding errors, and linear dependence. The methodology of this research is that for any matrix A there exist two orthogonal matrices U and V and a diagonal matrix Σ determined from the relation A = UΣV^T. The idea of the singular value decomposition is that, by choosing suitable matrices U and V, most elements of the matrix can be reduced to zero so that it becomes diagonal with non-negative elements. The novelty and relevance of the scientific results lie in the feasibility of using the singular value decomposition of a matrix to obtain the linear equations of the least-squares method, which can be used to solve ill-posed geodetic problems. The purpose of the research is to obtain a stable solution of the parametric equations of corrections to the measurement results in ill-posed geodetic problems. Based on the research performed on applying the singular value decomposition to ill-posed geodetic problems, the following can be summarized. A singular value decomposition of a real matrix A is any factorization A = UΣV^T into a matrix U with orthogonal columns, an orthogonal matrix V, and a diagonal matrix Σ whose elements are called the singular values of the matrix A; the columns of U and V are the left and right singular vectors. If the matrix has full rank, the solution is unique and stable and can be obtained by various methods. The singular value decomposition, in contrast to other methods, also makes it possible to solve problems of incomplete rank. The research shows that the method of solving normal equations by sequential elimination of unknowns (the Gaussian method), which is widespread in geodesy, does not provide stable solutions for poorly conditioned or ill-posed geodetic problems. Therefore, for unstable systems of equations it is proposed to use the singular decomposition of the matrix, known in computational mathematics as the SVD. The SVD method makes it possible to obtain stable solutions of problems that are stable or unstable by nature. This ability to solve ill-posed geodetic problems is associated with applying a threshold τ, which can be chosen from the relative errors of the matrix of coefficients of the parametric correction equations and of the vector of geodetic measurement results. Moreover, the solution of the system of normal equations obtained by the SVD method has the shortest length. Thus, by applying the singular value decomposition of the matrix of coefficients of the parametric equations of corrections to the geodetic measurement results, new formulas were obtained for estimating the accuracy of the least-squares method in solving ill-posed geodetic problems. The derived formulas have a compact form and make it possible to calculate the required elements and accuracy estimates easily, largely avoiding the complex procedure of inverting the matrix of coefficients of the normal equations.
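
A brief sketch of the truncated-SVD least-squares solution outlined above, assuming a simple relative threshold τ on the singular values; the function name and data are illustrative, not the authors' formulas.

```python
import numpy as np

def svd_solution(A, l, tau=1e-10):
    """Minimum-length least-squares solution of A x ≈ l via truncated SVD.

    Singular values below tau * (largest singular value) are treated as zero,
    which stabilizes ill-posed (rank-deficient or badly conditioned) systems.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tau * s[0]
    # x = V diag(1/s) U^T l, restricted to the retained singular values
    return Vt[keep].T @ ((U[:, keep].T @ l) / s[keep])

# Example: a nearly rank-deficient matrix of parametric correction equations.
A = np.array([[1.0, 1.0], [1.0, 1.0000001], [2.0, 2.0]])
l = np.array([2.0, 2.0, 4.0])
print(svd_solution(A, l, tau=1e-6))
```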


Author(s):  
Dmitriy Vladimirovich Ivanov ◽  

The article addresses estimation of the gross output vector in the presence of errors in the matrix of direct costs and in the final consumption vector, and proposes the use of the total least squares method for this estimation. Test cases showed that the accuracy of the proposed estimates of the gross output vector is higher than the accuracy of estimates obtained with the classical least squares method (OLS).
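
A minimal sketch, under assumed data, contrasting ordinary and total least squares for the input-output balance (I − A)x = y; the SVD-based TLS routine below is the textbook construction, not necessarily the estimator derived in the article.

```python
import numpy as np

def tls(C, d):
    """Total least squares solution of C x ≈ d (errors allowed in both C and d)."""
    Z = np.column_stack([C, d])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# Leontief balance: x = A x + y  =>  (I - A) x = y, with noisy A and y.
A_true = np.array([[0.2, 0.3], [0.1, 0.4]])
x_true = np.array([100.0, 150.0])
y_true = (np.eye(2) - A_true) @ x_true

rng = np.random.default_rng(1)
A_noisy = A_true + 0.01 * rng.normal(size=A_true.shape)
y_noisy = y_true + 0.5 * rng.normal(size=y_true.shape)

C = np.eye(2) - A_noisy
x_ols = np.linalg.lstsq(C, y_noisy, rcond=None)[0]   # classical LS estimate
x_tls = tls(C, y_noisy)                              # errors-in-variables estimate
print(x_ols, x_tls)
```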


1970 ◽  
Vol 26 (2) ◽  
pp. 295-296 ◽  
Author(s):  
K. Tichý

An appropriate choice of the function minimized permits linearization of the least-squares determination of the matrix which transforms the diffraction indices into the components of the reciprocal vector in the diffractometer φ-axis system of coordinates. The coefficients of the least-squares equations are based on diffraction indices and measured diffractometer angles of three or more non-coplanar setting reflexions.
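
A hedged sketch (not Tichý's specific linearization): if each reflexion supplies an index triple h and a reciprocal vector x derived from the measured diffractometer angles, the transforming matrix can be estimated by ordinary linear least squares from three or more non-coplanar reflexions. The numbers below are illustrative.

```python
import numpy as np

# Estimate the 3x3 matrix M with x ≈ M h by linear least squares, given
# diffraction indices h_k (rows of H) and observed reciprocal vectors x_k
# (rows of X) for several non-coplanar setting reflexions.
H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
X = np.array([[0.11, 0.01, 0.00],
              [0.00, 0.12, 0.02],
              [0.01, 0.00, 0.13],
              [0.12, 0.13, 0.15]])
# x_k^T = h_k^T M^T, i.e. X ≈ H M^T; lstsq handles the overdetermined case.
M = np.linalg.lstsq(H, X, rcond=None)[0].T
print(M)
```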


1982 ◽  
Vol 15 ◽  
Author(s):  
J. H. Westsik ◽  
C. O. Harvey ◽  
F. P. Roberts ◽  
W. A. Ross ◽  
R. E. Thornhill

ABSTRACT During the past year we have conducted a modified MCC-1 leach test on a 145 kg block of a cast cement waste form. The leach vessel was a 200 liter Teflon®-lined drum and contained 97.5 liters of deionized water. The results of this large-scale leach test were compared with the results of standard MCC-1 tests (40 ml) on smaller samples of the same waste form. The ratio of leachate volumes between the large- and small-scale tests was 2500 and the ratio of sample masses was 150,000. The cast cement samples for both tests contained plutonium-doped incinerator ash. The leachates from these tests were analyzed for both plutonium and the matrix elements. Evaluation of plutonium plateout in the large-scale test indicated that the majority of the plutonium leached from the samples deposits onto the vessel walls and little (<3 × 10⁻¹² M) remains in solution. Comparison of elemental concentrations in the leachates indicates differences of up to 5× between the large- and small-scale tests. The differences are attributed to differences in the solubilities of Ca, Si, and Fe at pH ~11.5 and at pH ~12.5. The higher pH observed for the large-scale test is a result of the larger quantity of sodium in the large block of cement.


1952 ◽  
Vol 5 (2) ◽  
pp. 238
Author(s):  
PG Guest

A method of fitting polynomials is described in which the "normal" equations are obtained much more rapidly than the corresponding equations in the least-squares method. Efficiencies are found to be about 90 per cent. The method is illustrated by an example.
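
For context, a minimal sketch of the conventional least-squares route whose normal equations Guest's method obtains more rapidly; the polynomial degree and data are illustrative assumptions.

```python
import numpy as np

# Conventional polynomial least squares: fit y ≈ c0 + c1 x + ... + cd x^d
# by forming and solving the normal equations (V^T V) c = V^T y.
x = np.linspace(0.0, 1.0, 11)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.01 * np.sin(20 * x)   # illustrative data
d = 2
V = np.vander(x, d + 1, increasing=True)                 # Vandermonde design matrix
coeffs = np.linalg.solve(V.T @ V, V.T @ y)               # normal equations
print(coeffs)
```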


2011 ◽  
Vol 11 (5) ◽  
pp. 16185-16206
Author(s):  
J. V. Bageston ◽  
C. M. Wrasse ◽  
P. P. Batista ◽  
R. E. Hibbins ◽  
D. C. Fritts ◽  
...  

Abstract. A mesospheric bore was observed with an all-sky airglow imager on the night of 9–10 July 2007 at Ferraz Station (62° S, 58° W), located on King George Island on the Antarctic Peninsula. The observed bore propagated from southwest to northeast with a well-defined wave front and a series of crests behind the main front. There was no evidence of dissipation during its propagation within the field of view. The wave parameters were obtained via a 2-D Fourier transform of the imager data, providing a horizontal wavelength of 33 km, an observed period of 6 min, and a horizontal phase speed of 92 m s⁻¹. Simultaneous mesospheric winds were measured with a medium frequency (MF) radar at Rothera Station (68° S, 68° W), and temperature profiles were obtained from the SABER instrument on the TIMED satellite. These wind and temperature profiles were used to estimate the propagation environment of the bore. A wavelet technique was applied to the wind in the plane of bore propagation at the OH emission height, spanning three days centered on the bore event, to define the dominant periodicities. Results revealed a dominance of near-inertial periods and of semi-diurnal and terdiurnal tides, suggesting that the ducting structure enabling bore propagation occurred on large spatial scales. The observed tidal motions were used to reconstruct the winds employing a least-squares method, and these were then compared to the observed ducting environment. Results suggest an important contribution of large-scale winds to the ducting structure, but with vertical variations in the buoyancy frequency also expected to be important. These results allow us to conclude that the bore was supported by a duct including contributions from both winds and temperature (or stability). A co-located airglow temperature imager operated simultaneously with the all-sky imager confirmed that the bore event was the dominant small-scale wave event during the analysis interval.
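
A minimal sketch of the kind of least-squares tidal reconstruction mentioned above: a mean wind plus sinusoids at assumed semidiurnal (12 h) and terdiurnal (8 h) periods is fitted to an hourly wind series. The periods, data, and setup are illustrative, not the authors' processing chain.

```python
import numpy as np

# Fit a mean wind plus tidal sinusoids to an hourly wind series by linear
# least squares, then reconstruct the tidal wind from the fitted amplitudes.
t = np.arange(0.0, 72.0, 1.0)                      # hours, three days of data
periods = [12.0, 8.0]                              # semidiurnal and terdiurnal periods (h)
rng = np.random.default_rng(2)
u = (5.0 + 20.0 * np.cos(2 * np.pi * t / 12.0 - 1.0)
     + 8.0 * np.cos(2 * np.pi * t / 8.0)
     + 3.0 * rng.normal(size=t.size))              # synthetic wind (m/s)

cols = [np.ones_like(t)]
for T in periods:
    cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
G = np.column_stack(cols)
coeffs, *_ = np.linalg.lstsq(G, u, rcond=None)
u_tides = G @ coeffs                               # reconstructed tidal wind
print(coeffs)
```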


2016 ◽  
Vol 26 (01) ◽  
pp. 1750006 ◽  
Author(s):  
Xinsheng Wang ◽  
Chenxu Wang ◽  
Mingyan Yu

In recent years, model order reduction (MOR) of interconnect systems has become an important technique for reducing computational complexity and improving verification efficiency in nanometer VLSI design. The Krylov subspace techniques in existing MOR methods are efficient and have become the methods of choice for generating small-scale macro-models of the large-scale multi-port RCL networks that arise in VLSI interconnect analysis. Although Krylov subspace projection-based MOR methods have been widely studied over the past decade in the electrical computer-aided design community, none of them provides an optimal solution for a given order. In this paper, a minimum-norm least-squares solution for MOR by Krylov subspace methods is proposed. The method is based on generalized-inverse (pseudo-inverse) theory. This provides a new criterion for Krylov subspace projection-based MOR methods. Two numerical examples are used to test the proposed method, with the PRIMA method serving as the reference model.
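
A short sketch of the generalized-inverse idea the paper builds on: the Moore-Penrose pseudo-inverse yields the minimum-norm least-squares solution of a rank-deficient system. This shows only the linear-algebra core, not the proposed MOR procedure or PRIMA itself.

```python
import numpy as np

# Minimum-norm least-squares solution via the Moore-Penrose pseudo-inverse:
# among all x minimizing ||Ax - b||, pick the one with the smallest ||x||.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank-deficient: second row is twice the first
b = np.array([1.0, 2.0])

x_min_norm = np.linalg.pinv(A) @ b
print(x_min_norm, np.linalg.norm(A @ x_min_norm - b))
```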


2013 ◽  
Vol 699 ◽  
pp. 885-892
Author(s):  
Le Min Gu

The P-least squares (P-LS) method is a generalization of the least squares (LS) method: the fitted parameters are selected by the criterion of minimizing the sum of the p-th powers of the errors, and the fitted curve is constructed to satisfy this criterion. Because the exponent p can be chosen arbitrarily, the P-LS method has a wide field of application; as p tends to infinity, the P-LS approximation tends to the Chebyshev optimal approximation. This paper discusses the general principles of the P-LS method and provides a way to realize the general solution of the P-LS approximation. The P-least squares method not only significantly reduces the maximum error, it also resolves cases in some complex non-linear approximations where the Chebyshev approximation has no solution; moreover, it is computationally convenient and can handle large-scale multi-data processing. The method is illustrated with examples from materials science, chemical engineering, and biological change.
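
A minimal sketch, assuming the usual reading of P-LS as minimizing the sum of p-th powers of the errors: iteratively reweighted least squares (IRLS) is a common way to compute such fits, and large p pushes the fit toward minimax (Chebyshev-like) behavior. This is not necessarily the solution procedure given in the paper.

```python
import numpy as np

def lp_fit(G, y, p=4.0, iters=50, eps=1e-8):
    """Fit coefficients c minimizing sum |G c - y|^p via iteratively
    reweighted least squares (a common L^p fitting scheme)."""
    c = np.linalg.lstsq(G, y, rcond=None)[0]       # start from the ordinary LS fit (p = 2)
    for _ in range(iters):
        r = G @ c - y
        w = np.abs(r) ** (p - 2) + eps             # weights |r|^(p-2), kept strictly positive
        W = np.diag(w)
        c = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)
    return c

x = np.linspace(-1.0, 1.0, 21)
G = np.vander(x, 4, increasing=True)               # cubic polynomial basis
y = np.exp(x)
print(lp_fit(G, y, p=8.0))                         # large p approaches minimax behavior
```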


2021 ◽  
Vol 2113 (1) ◽  
pp. 012083
Author(s):  
Xiaonan Liu ◽  
Lina Jing ◽  
Lin Han ◽  
Jie Gao

Abstract Solving large-scale linear equations is of great significance in many engineering fields, such as weather forecasting and bioengineering. When a classical computer solves a system of linear equations, whether by elimination or by Cramer's rule, the time required grows polynomially with the size of the system. With the advent of the era of big data, transistor integration density keeps increasing; when transistor dimensions approach the order of the electron diameter, quantum tunneling occurs and Moore's Law no longer holds, so the traditional computing model will not be able to meet demand. In this paper, through an in-depth study of the classic HHL algorithm, a small-scale quantum circuit model is proposed to solve a 2×2 system of linear equations, and the circuit is simulated and verified by circuit diagram and programming on the Origin Quantum Platform. The fidelity under different parameter values reaches more than 90%. For the case where the matrix to be solved is sparse, the quantum algorithm offers an exponential speedup over the best known classical algorithm.
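
A classical sketch of the linear-algebra core of HHL for a 2×2 Hermitian system, expanding b in the eigenbasis of A and inverting each eigenvalue. The quantum circuit itself (phase estimation plus controlled rotation) is not reproduced here, and the matrix is illustrative.

```python
import numpy as np

# Spectral inversion at the heart of HHL: x = sum_j (1/lambda_j) <v_j|b> v_j.
# The quantum algorithm performs this inversion with phase estimation and a
# controlled rotation; this is only the classical linear-algebra analogue.
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

eigvals, eigvecs = np.linalg.eigh(A)
x = sum((eigvecs[:, j] @ b) / eigvals[j] * eigvecs[:, j] for j in range(2))
print(x, np.linalg.solve(A, b))    # both give the same solution
```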

