Generalized least-squares solutions to quasi-linear inverse problems with a priori information.

1982 ◽  
Vol 30 (6) ◽  
pp. 451-468 ◽  
Author(s):  
Mitsuhiro Matsu'ura ◽  
Naoshi Hirata

Geophysics ◽  
1994 ◽  
Vol 59 (5) ◽  
pp. 818-829 ◽  
Author(s):  
John C. VanDecar ◽  
Roel Snieder

It is not uncommon now for geophysical inverse problems to be parameterized by 10^4 to 10^5 unknowns associated with upwards of 10^6 to 10^7 data constraints. The matrix problem defining the linearization of such a system (e.g., Am = b) is usually solved with a least-squares criterion, min ||Am − b||². The size of the matrix, however, discourages the direct solution of the system, and researchers often turn to iterative techniques such as the method of conjugate gradients to obtain an estimate of the least-squares solution. These iterative methods take advantage of the sparseness of A, which often has as few as 2–3 percent of its elements nonzero, and do not require the calculation (or storage) of the matrix AᵀA. Although there are usually many more data constraints than unknowns, these problems are, in general, underdetermined and therefore require some sort of regularization to obtain a solution. When the regularization is simple damping, the conjugate gradients method tends to converge in relatively few iterations. However, when derivative-type regularization is applied (first-derivative constraints to obtain the flattest model that fits the data; second-derivative constraints to obtain the smoothest), the convergence of parts of the solution may be drastically inhibited. In a series of 1-D examples and a synthetic 2-D crosshole tomography example, we demonstrate this problem and also suggest a method of accelerating the convergence through the preconditioning of the conjugate gradient search directions. We derive a 1-D preconditioning operator for the case of first-derivative regularization using a WKBJ approximation. We have found that preconditioning can reduce the number of iterations necessary to obtain satisfactory convergence by up to an order of magnitude.
The conclusions we present are also relevant to Bayesian inversion, where a smoothness constraint is imposed through an a priori covariance of the model.


Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. R101-R111 ◽  
Author(s):  
Thomas Mejer Hansen ◽  
Andre G. Journel ◽  
Albert Tarantola ◽  
Klaus Mosegaard

Inverse problems in geophysics often require the introduction of complex a priori information and are typically solved using computationally expensive Monte Carlo techniques in which large portions of the model space are explored. The geostatistical method allows for fast integration of complex a priori information in the form of covariance functions and training images. We combine geostatistical methods and inverse problem theory to generate realizations of the posterior probability density function of any Gaussian linear inverse problem, honoring a priori information in the form of a covariance function describing the spatial connectivity of the model-space parameters. This is achieved using sequential Gaussian simulation, a well-known, noniterative geostatistical method for generating samples of a Gaussian random field with a given covariance function. This work is a contribution to both linear inverse problem theory and geostatistics. Our main result is an efficient method to generate realizations (actual solutions, rather than the conventional single least-squares estimate) of any Gaussian linear inverse problem using a noniterative method. The sequential approach to solving linear and weakly nonlinear problems is computationally efficient compared with traditional least-squares-based inversion. It also allows one to solve the inverse problem in only a small part of the model space while conditioning on all available data. From a geostatistical point of view, the method can be used to condition realizations of Gaussian random fields to possibly noisy linear-average observations of the model space.
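What "realizations of the posterior" means can be illustrated in a few lines. The sketch below is not the sequential Gaussian simulation algorithm of the paper (whose point is to avoid forming the full posterior); it brute-forces the Gaussian posterior of a small linear-averaging problem and draws samples from it. The grid size, exponential covariance, averaging kernel, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 60, 10                                  # model cells, linear-average data
x = np.arange(n)
# A priori model: Gaussian random field with exponential covariance (range 10 cells).
C_m = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
G = np.zeros((k, n))                           # each datum averages 6 adjacent cells
for i in range(k):
    G[i, 6 * i : 6 * i + 6] = 1.0 / 6.0
C_d = 0.01 * np.eye(k)                         # data-error covariance
m_ref = np.linalg.cholesky(C_m) @ rng.standard_normal(n)
d = G @ m_ref + 0.1 * rng.standard_normal(k)   # noisy linear-average observations

# Gaussian posterior of the linear inverse problem: mean and covariance.
K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
mu_post = K @ d
C_post = C_m - K @ G @ C_m
# Draw realizations: actual samples of the posterior, each honoring both the
# covariance model and the data, rather than a single least-squares model.
L = np.linalg.cholesky(C_post + 1e-8 * np.eye(n))
samples = mu_post[:, None] + L @ rng.standard_normal((n, 200))
```

Each column of `samples` is one admissible model; their spread (not just their mean) is the uncertainty information that a single least-squares inversion discards.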


2021 ◽  
pp. 1-26
Author(s):  
Roman Z. Morawski

Abstract It is argued, in this paper, that the core operation underlying any measurement – inverse modelling under uncertainty – is equivalent to quantitative abductive reasoning, which consists in the selection of the best estimate of a measurand (i.e. a quantity to be measured) from a set of admissible solutions, using a priori information: (i) on the measurand, (ii) on the measuring system coupled with an object under measurement, and (iii) on the influence of the environment, including the user of the measurement results. There are two key premises of this claim: a systematic interpretation of measurement in terms of inverse problems, proposed earlier by the author, and a logical link between inverse problems and abduction, identified by the Finnish philosopher of science Ilkka Niiniluoto. The title claim of this paper is illustrated with an expanded example of the measurement of an optical spectrum by means of a low-resolution spectrometer.
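The spectrometer example can be sketched numerically: the instrument blurs the measurand (convolution with a broad line shape), and a priori smoothness information selects the "best" estimate from the many admissible deconvolutions. The line positions, widths, noise level, Tikhonov regularization, and its weight are all assumptions for illustration, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 80
x = np.linspace(0.0, 1.0, n)
# Hypothetical measurand: two narrow spectral lines.
s_true = np.exp(-((x - 0.35) / 0.02) ** 2) + 0.6 * np.exp(-((x - 0.6) / 0.02) ** 2)

# Low-resolution spectrometer: broad Gaussian line shape as a convolution matrix.
width = 0.05
A = np.exp(-((x[:, None] - x[None, :]) / width) ** 2)
A /= A.sum(axis=1, keepdims=True)
y = A @ s_true + 0.01 * rng.standard_normal(n)   # blurred, noisy instrument reading

# Naive inversion of A amplifies noise; a priori smoothness information picks
# one estimate from the admissible set (here via Tikhonov regularization).
D = np.diff(np.eye(n), axis=0)                   # first-difference operator
lam = 0.1
s_hat = np.linalg.solve(A.T @ A + lam**2 * D.T @ D, A.T @ y)
```

The estimate `s_hat` resolves the line positions better than the raw reading `y`, while the regularization term encodes the a priori knowledge that keeps the inversion stable.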


Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 982
Author(s):  
Marta Gatto ◽  
Fabio Marcuzzi

In this paper we analyze the bias in a general linear least-squares parameter estimation problem when it is caused by deterministic variables that have not been included in the model. We propose a method to substantially reduce this bias, under the hypothesis that some a priori information on the magnitudes of the modelled and unmodelled components is known. We call this method Unbiased Least-Squares (ULS) parameter estimation and present its essential properties, together with some numerical results from an applied example.
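The bias mechanism the paper targets (though not the ULS estimator itself) is easy to reproduce: fit a straight-line model to data that also contain a deterministic, unmodelled sinusoid, then compare with the fit that includes the missing component. All signals and magnitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 500
t = np.linspace(0.0, 1.0, N)
X1 = np.column_stack([np.ones(N), t])       # modelled part: intercept + slope
x_unmod = np.sin(2 * np.pi * t)             # deterministic unmodelled component
theta_true = np.array([1.0, 2.0])
y = X1 @ theta_true + 0.5 * x_unmod + 0.01 * rng.standard_normal(N)

# Ordinary least squares with the incomplete model: the unmodelled sinusoid
# is correlated with the regressors and leaks into the estimate as bias.
theta_ls, *_ = np.linalg.lstsq(X1, y, rcond=None)
bias = theta_ls - theta_true

# Including the missing component removes the bias (noise aside).
X_full = np.column_stack([X1, x_unmod])
theta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
```

Here the slope estimate is pulled well away from its true value even though the noise is tiny: the error is deterministic model mismatch, not randomness, which is why averaging more data does not remove it.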

