Stochastic structure-constrained image-guided inversion of geophysical data

Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. E89-E101 ◽  
Author(s):  
Jieyi Zhou ◽  
André Revil ◽  
Abderrahim Jardani

Inverse modeling of geophysical data involves the recovery of a subsurface structural model and the distribution of petrophysical properties. Independent information regarding the subsurface structure is usually available, with some uncertainty, from the expertise of a geologist, possibly accounting for sedimentary and tectonic processes. We have used the available structural information to construct a model covariance matrix and to perform a structure-constrained inversion of the geophysical data to obtain a geophysical tomogram. We have considered the geologic models to be built from random variables and to be described by an a priori probability density function in the Bayesian framework. We have explored the a posteriori probability density of the geologic models (i.e., the structure of the guiding image) with the Markov-chain Monte Carlo method while inverting, at the same time, the geophysical data in a deterministic framework. The sampling of the geologic models was performed in a stochastic framework, and each sampled geologic model was used to invert for the geophysical model using image-guided inversion. The Adaptive Metropolis algorithm was used to find the proposal distribution of geologic models that reproduce the geophysical data and honor the a priori geologic information. In other words, we have tried to find a compromise between the a priori geologic information and the geophysical data to get, as end products, an updated geologic model and a geophysical tomogram. To demonstrate our approach, we used electrical resistivity tomography as the technique to identify a correct geologic model and its a posteriori probability density. The approach was tested using one synthetic example (three horizontal layers displaced by a normal fault) and one field case corresponding to a sinkhole in a three-layer structure. In both cases, we were able to select the most plausible geologic models that agreed with the a priori information and the geophysical data.
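A minimal sketch of the sampling loop described above: an Adaptive Metropolis chain explores a toy two-parameter "geologic model" (a fault throw and a layer resistivity), with the deterministic image-guided inversion step collapsed into a simple forward model. The parameterization, forward operator, and noise level are illustrative assumptions, not the authors' code.

```python
# Toy Adaptive Metropolis loop over a 2-parameter geologic model.
# Assumed setup: a step ("fault throw") in a resistivity profile.
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, x):
    """Toy 'resistivity' response: a step at the fault throw position."""
    throw, rho = theta
    return rho * (1.0 + 0.5 * (x > throw))

# Synthetic observed data for a "true" geologic model
x = np.linspace(0.0, 10.0, 50)
theta_true = np.array([4.0, 100.0])
sigma = 2.0
d_obs = forward(theta_true, x) + rng.normal(0.0, sigma, x.size)

def log_posterior(theta):
    # A priori geologic information: throw in (0, 10), resistivity > 0
    if not (0.0 < theta[0] < 10.0 and theta[1] > 0.0):
        return -np.inf
    r = d_obs - forward(theta, x)
    return -0.5 * np.sum((r / sigma) ** 2)

# Adaptive Metropolis (Haario et al., 2001): the proposal covariance
# is re-estimated from the chain history after a burn-in period.
n_iter, n0 = 5000, 500
s_d, eps = 2.4 ** 2 / 2, 1e-8
chain = np.empty((n_iter, 2))
theta = np.array([5.0, 80.0])
lp = log_posterior(theta)
C = np.diag([0.5, 5.0])          # initial proposal covariance

for i in range(n_iter):
    if i > n0:
        C = s_d * np.cov(chain[:i].T) + eps * np.eye(2)
    prop = rng.multivariate_normal(theta, C)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

print("posterior mean (throw, rho):", chain[n0:].mean(axis=0))
```

In the full method, evaluating each sampled geologic model requires running the deterministic image-guided inversion; the toy forward model above merely stands in for that step.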

2018 ◽  
pp. 45-49
Author(s):  
P. S. Galkin ◽  
V. N. Lagutkin

An algorithm is developed for estimating and compensating the influence of the ionosphere on measurements of the motion parameters of space objects in a two-position radar system, taking into account radiophysical effects that depend on elevation angles and operating frequency. It is assumed that the observed space object is a tracked object whose orbital parameters are well known, including the dependence of the velocity on the position along the orbit, and that the uncertainty of the object's current coordinates is caused mainly by the forecast error of its position in orbit (longitudinal error). To estimate the true position of the space object in the orbit and the parameter determining the influence of the ionosphere, joint optimal processing is performed of the range measurements to the object obtained by the two separated radars, taking into account the relevant ionospheric propagation delays and the available a priori data on the observed object's trajectory. Estimates of the unknown parameters are obtained on the basis of the criterion of maximum a posteriori probability density for these parameters, taking into account the measured and a priori data. The search for the maximum of the a posteriori probability density reduces to the search for the minimum of a weighted sum of squares, for whose solution a cascade iterative algorithm is implemented in this work. The accuracy of estimating the position of space objects in orbit after compensation of the ionospheric influence has been studied by the Monte Carlo method. Dependencies of the root-mean-square error of the position estimate on elevation angles, operating frequency, and solar activity have been obtained. It is shown that the effectiveness of the algorithm increases with the spatial baseline of the measurements (for a fixed orbit of the object).
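A minimal sketch of the joint MAP estimate described above, assuming a toy geometry: two radars measure range to a point on a known track, and the unknowns are the along-track position error s and an ionospheric parameter c. All names, the flat 2D geometry, and the simplified delay model proportional to c / (f² sin(elevation)) are illustrative assumptions.

```python
# MAP estimation as a weighted sum of squares: measurement residuals
# plus a priori residuals, minimized jointly over (s, c).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

radar_xy = np.array([[0.0, 0.0], [800.0, 0.0]])   # radar positions, km
freq = np.array([0.3e9, 0.5e9])                   # operating frequencies, Hz
elev = np.radians([30.0, 60.0])                   # elevation angles

def object_pos(s):
    """Predicted position on the known track, offset s km along track."""
    return np.array([400.0 + s, 600.0])

def predicted_ranges(s, c):
    geo = np.linalg.norm(object_pos(s) - radar_xy, axis=1)
    iono = c * 1.0e15 / (freq ** 2 * np.sin(elev))  # simplified delay, km
    return geo + iono

# Simulate measurements for a "true" state (c in units of 1e15)
s_true, c_true = 5.0, 4.0
sigma_r = 0.05                                      # range noise, km
r_obs = predicted_ranges(s_true, c_true) + rng.normal(0, sigma_r, 2)

# A priori data: s ~ N(0, 10 km), c ~ N(3, 2)
s_prior, sig_s = 0.0, 10.0
c_prior, sig_c = 3.0, 2.0

def residuals(p):
    s, c = p
    meas = (r_obs - predicted_ranges(s, c)) / sigma_r
    prior = [(s - s_prior) / sig_s, (c - c_prior) / sig_c]
    return np.concatenate([meas, prior])  # weighted sum of squares = MAP

sol = least_squares(residuals, x0=[0.0, 3.0])
print("estimated (s, c):", sol.x)
```

The frequency dependence of the ionospheric term is what lets the two radars separate the geometric (longitudinal) error from the propagation delay; the paper's cascade iterative search is replaced here by a stock trust-region least-squares solver.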


Author(s):  
J. H. Pacheco-Sánchez ◽  
R. D. Vera-Torres ◽  
R. Alejo

Bayesian learning is applied to a two-class system. Partitioning a large sample made up of many elements of two classes of indistinguishable objects, we consider from two to five training hypotheses in the probability field, each assigning a plausible rate of objects of each class. Objects are taken one by one from the sample. The basic aim is to predict which type of object an agent will draw the next time it takes one from the original sample to test it. We obtain the curve of the a posteriori probability of each hypothesis for one of the object types; a prediction that the following object is of a specific type is then read from one probability curve by means of the training previously accomplished. The methodology is applied to the manufacture of glass bottles of two classes, good or crash (defective): the main interest is to predict which machine produced a detected crash bottle, because the bottles are indistinguishable once they are reviewed. This is solved by fixing the a priori probabilities and taking into account all possible probability-distribution combinations in the classes.
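The update at the heart of this scheme is ordinary Bayesian learning over a discrete set of hypotheses. The sketch below assumes five hypothetical crash-bottle rates, a simple prior, and a short observation sequence; all values are illustrative, not the authors' data.

```python
# Discrete Bayesian learning: update P(hypothesis) after each drawn
# bottle, then form the predictive probability for the next draw.
import numpy as np

# Hypotheses h_i: probability that a drawn bottle is a crash bottle
rates = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
prior = np.array([0.1, 0.2, 0.4, 0.2, 0.1])     # a priori P(h_i)

posterior = prior.copy()
observations = [1, 1, 0, 1]                     # 1 = crash, 0 = good

for obs in observations:
    like = rates if obs == 1 else 1.0 - rates   # P(obs | h_i)
    posterior = like * posterior
    posterior /= posterior.sum()                # Bayes' rule
    # Predictive probability that the NEXT drawn bottle is a crash bottle
    p_next_crash = np.sum(rates * posterior)
    print(posterior.round(3), "P(next crash) =", round(p_next_crash, 3))
```

Attributing a crash bottle to a machine works the same way, with the hypotheses ranging over machines and the likelihoods given by each machine's defect-rate distribution.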


2002 ◽  
Vol 12 ◽  
pp. 255-256 ◽  
Author(s):  
J. Virtanen ◽  
K. Muinonen ◽  
E. Bowell

We consider initial determination of orbits for trans-neptunian objects (TNOs), a topical theme because of the rapidly growing TNO population and the challenges in recovering lost TNOs. We apply the method of initial phase-space ranging of orbits to the poorly observed TNOs. The rigorous a posteriori probability density of the TNO orbital elements is examined using a Monte Carlo technique by varying the TNO topocentric ranges corresponding to the observation dates. We can optionally adopt a Bayesian approach to select the region of phase space containing the most plausible orbits. This is accomplished by incorporating semimajor axes, eccentricities, inclinations, and absolute magnitudes of multi-apparition TNOs as a priori information. The resulting a posteriori distributions permit ephemeris and ephemeris uncertainty prediction for TNO recovery observations.
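A heavily simplified 2D sketch of Monte Carlo ranging: the real method samples topocentric ranges at two observation dates and solves the full two-body problem, whereas this toy assumes circular heliocentric orbits, samples a single range at the first epoch, and weights each sample by how well it predicts the second-epoch sky angle. The geometry and the (deliberately exaggerated) astrometric noise are illustrative assumptions.

```python
# Toy statistical ranging: sample ranges, map each to orbital
# parameters, weight by agreement with the second observation.
import numpy as np

rng = np.random.default_rng(2)
TWO_PI = 2.0 * np.pi

def earth_pos(t):                        # t in years; AU; circular orbit
    return np.array([np.cos(TWO_PI * t), np.sin(TWO_PI * t)])

def tno_pos(a, phi0, t):                 # circular orbit, Kepler's 3rd law
    n = TWO_PI * a ** -1.5               # mean motion, rad/yr
    return a * np.array([np.cos(phi0 + n * t), np.sin(phi0 + n * t)])

def sky_angle(a, phi0, t):
    d = tno_pos(a, phi0, t) - earth_pos(t)
    return np.arctan2(d[1], d[0])

# Synthetic "observed" sky angles of a TNO at two epochs
a_true, phi_true = 42.0, 0.3
t_obs = np.array([0.0, 0.2])
sigma_ang = 5e-5       # rad; exaggerated so the toy posterior is populated
ang_obs = np.array([sky_angle(a_true, phi_true, t) + rng.normal(0, sigma_ang)
                    for t in t_obs])

# Sample the topocentric range at the first epoch; each sample fixes (a, phi0)
n_samples = 20000
rho = rng.uniform(25.0, 60.0, n_samples)          # AU, broad prior
e1 = np.array([np.cos(ang_obs[0]), np.sin(ang_obs[0])])
r1 = earth_pos(t_obs[0]) + rho[:, None] * e1      # heliocentric positions
a_s = np.linalg.norm(r1, axis=1)
phi_s = np.arctan2(r1[:, 1], r1[:, 0])            # phi0, since t_obs[0] = 0

# Weight each sample by how well it predicts the second-epoch angle
pred = np.array([sky_angle(a, p, t_obs[1]) for a, p in zip(a_s, phi_s)])
w = np.exp(-0.5 * ((pred - ang_obs[1]) / sigma_ang) ** 2)
w /= w.sum()

a_mean = np.sum(w * a_s)
print("posterior mean a:", a_mean)
print("posterior std  a:", np.sqrt(np.sum(w * (a_s - a_mean) ** 2)))
```

The weighted samples are a discrete representation of the a posteriori density; propagating each surviving orbit to a future date yields the ephemeris and its uncertainty for recovery observations.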


Author(s):  
A. A. Lobaty ◽  
A. Y. Bumai

The problem of evaluating the information present in random signals from various source meters is considered. It is assumed that, in the mathematical description of the problem, both the random process being estimated and the meter outputs are vector random processes; the dimension of the measurement vector can be larger than the dimension of the vector of the process being estimated. An analysis is carried out of analytical estimation methods and algorithms based on determining the main probabilistic characteristics of a random process, by both a priori and a posteriori methods, with various optimality criteria. Based on this analysis, the problem of fusing ("complexing") the meters of the random process is considered according to the proposed criterion of maximum posterior likelihood, which combines the maximum-likelihood criterion and the maximum a posteriori probability criterion, and a general fusion methodology is developed. The given example of fusion demonstrates the efficiency of the proposed method. This approach to constructing estimation algorithms for multidimensional random processes increases the accuracy of estimation, since it takes additional information into account and processes it jointly.
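A minimal sketch of fusing two vector meters of the same process, assuming linear measurement models and Gaussian noise. The MAP/weighted-least-squares fusion formula below is the standard one; the dimensions, measurement matrices, and covariances are illustrative assumptions, not the authors' example.

```python
# MAP fusion of two meters: the stacked measurement dimension (3)
# exceeds the dimension of the estimated process (2).
import numpy as np

rng = np.random.default_rng(3)

x_true = np.array([1.0, -2.0])            # process being estimated

H1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # meter 1 observes both components
H2 = np.array([[1.0, 1.0]])               # meter 2 observes their sum
R1 = np.diag([0.5 ** 2, 0.5 ** 2])        # meter noise covariances
R2 = np.array([[0.2 ** 2]])

z1 = H1 @ x_true + rng.multivariate_normal(np.zeros(2), R1)
z2 = H2 @ x_true + rng.multivariate_normal(np.zeros(1), R2)

# A priori information on x (dropping these terms gives pure ML)
x0 = np.zeros(2)
P0 = np.diag([4.0, 4.0])

# MAP estimate = weighted least squares over all information sources
A = (np.linalg.inv(P0)
     + H1.T @ np.linalg.inv(R1) @ H1
     + H2.T @ np.linalg.inv(R2) @ H2)
b = (np.linalg.inv(P0) @ x0
     + H1.T @ np.linalg.inv(R1) @ z1
     + H2.T @ np.linalg.inv(R2) @ z2)
x_map = np.linalg.solve(A, b)
print("fused estimate:", x_map, " true:", x_true)
```

Each meter contributes a term weighted by the inverse of its noise covariance, which is how the joint processing of redundant measurements raises the estimation accuracy over any single meter.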


Author(s):  
FREDRIK EKDAHL ◽  
PER PERSSON ◽  
PIA SANDVIK WIKLUND

Unreplicated factorial designs are widely used for designed experimentation in industry. In the analysis of designed experiments, the experimental factors influencing the response must be identified and separated from those that do not. An abundance of procedures intended to perform this selection has been introduced in the literature. A recent study indicated that the procedure due to Box and Meyer outperforms the other selection procedures in terms of efficiency and robustness. The procedure of Box and Meyer rests on a quasi-Bayesian foundation and utilizes generic domain knowledge, in the form of an a priori probability, common to all factors, that a factor significantly influences the response, to calculate an a posteriori probability for each factor. This paper suggests a strategy for introducing more elaborate domain knowledge about the experimental factors into the procedure of Box and Meyer, aiming to further improve its performance.
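A minimal sketch of the Box-Meyer idea in a simplified per-effect form: each estimated effect is modeled as inactive, N(0, σ²), or active, N(0, (kσ)²), with a common a priori probability α of being active. The full procedure marginalizes over all subsets of active factors; the per-effect shortcut, the toy effect estimates, and the fixed σ below are illustrative assumptions.

```python
# Simplified Box-Meyer posterior probability that each factor is active.
import numpy as np
from scipy.stats import norm

alpha, k, sigma = 0.2, 10.0, 1.0   # common prior and inflation factor
effects = np.array([0.3, -0.5, 8.2, 0.9, -6.1, 0.2, 0.4])  # toy estimates

num = alpha * norm.pdf(effects, scale=k * sigma)            # active model
den = num + (1.0 - alpha) * norm.pdf(effects, scale=sigma)  # + inactive
posterior = num / den              # a posteriori P(factor is active)

for i, p in enumerate(posterior, 1):
    print(f"factor {i}: P(active | data) = {p:.3f}")
```

The strategy suggested in the paper amounts to replacing the common α with factor-specific prior probabilities reflecting more elaborate domain knowledge, which slots directly into the `alpha` term above.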


Author(s):  
Yu. I. Buryak ◽  
A. A. Skrynnikov

The article deals with the problem of reducing the volume of tests of complex systems by using a priori data on the reliability of their elements. At the preliminary stage, the a priori distribution of the failure probability of the system as a whole is determined. To do this, the results of element tests are processed, and the parameters of the a posteriori probability distribution of each element's failure are determined using the Bayesian procedure; the type of distribution law (the beta distribution) is chosen from the conjugacy condition. Statistical modeling of the failure probability of a system with a known structural-logical reliability scheme is then performed for random values of the element failure probabilities, drawn in accordance with the obtained distribution laws. The system failure probability distribution is formed as a mixture of beta distributions; the advantages of this distribution law are the high accuracy with which it describes the simulated data and its conjugacy to the binomial distribution. The parameters of the mixture of beta distributions are determined using the EM (Expectation-Maximization) algorithm, and the quality of the fitted density is checked using the nonparametric Kolmogorov criterion. When testing the system, the a posteriori probability density is recalculated after each experiment; it remains a mixture of beta distributions with the same number of components, and the parameters of each component of the mixture are easily determined from the results of the experiment. As a point Bayesian estimate, the mean of the a posteriori distribution is taken; the confidence interval for a given confidence probability is found as the central interval. An example is given, and the possibility of minimizing the number of tests is shown.
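A minimal sketch of the a posteriori update described above, assuming the system failure probability q has a prior given as a mixture of beta distributions (fitted earlier, e.g., by EM). After observing k failures in n system trials, each beta component updates conjugately and the mixture weights are rescaled by the marginal (beta-binomial) likelihood. The mixture parameters and test results below are illustrative assumptions.

```python
# Conjugate update of a beta-mixture prior after binomial test data.
import numpy as np
from scipy.special import betaln, comb

# Prior mixture: weights w_j and Beta(a_j, b_j) components
w = np.array([0.6, 0.4])
a = np.array([2.0, 5.0])
b = np.array([30.0, 20.0])

n, kf = 25, 1                     # test results: kf failures in n trials

# Conjugate component updates (component count stays the same)
a_post = a + kf
b_post = b + n - kf

# Weight update: w_j * P(kf | component j), the beta-binomial marginal
log_ml = (np.log(comb(n, kf))
          + betaln(a + kf, b + n - kf) - betaln(a, b))
w_post = w * np.exp(log_ml)
w_post /= w_post.sum()

# Point Bayesian estimate: mean of the a posteriori mixture
q_mean = np.sum(w_post * a_post / (a_post + b_post))
print("posterior weights:", w_post.round(3))
print("posterior mean failure probability:", round(q_mean, 4))
```

Because each experiment only shifts the component parameters and weights, the recalculation after every trial is cheap, which is what makes the sequential test-reduction scheme practical.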

