Joint MT and CSEM data inversion using a multiplicative cost function approach

Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. F203-F214 ◽  
Author(s):  
A. Abubakar ◽  
M. Li ◽  
G. Pan ◽  
J. Liu ◽  
T. M. Habashy

We have developed an inversion algorithm for jointly inverting controlled-source electromagnetic (CSEM) data and magnetotelluric (MT) data. It is well known that CSEM and MT data provide complementary information about the subsurface resistivity distribution; hence, it is useful to derive earth resistivity models that simultaneously and consistently fit both data sets. Because we are dealing with a large-scale computational problem, we use an iterative technique in which a predefined cost function is optimized. One of the issues of this simultaneous joint inversion approach is how to weight the CSEM and MT data relative to each other when constructing the cost function. We propose a multiplicative cost function instead of the traditional additive one. This function does not require an a priori choice of the relative weights between the two data sets; it adaptively puts the CSEM and MT data on an equal footing in the inversion process. The inversion is accomplished with a regularized Gauss-Newton minimization scheme in which the model parameters are forced to lie within their upper and lower bounds by a nonlinear transformation procedure. We use a line-search scheme to enforce a reduction of the cost function at each iteration. We tested our joint inversion approach on synthetic and field data.
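
Two illustrative definitions may help contrast the two formulations. The sketch below (not from the paper; the residual and data arrays are hypothetical placeholders) shows why the multiplicative form needs no user-chosen weights: each factor is a normalized misfit of order one, so neither data set can silently dominate.

    import numpy as np

    def normalized_misfit(residual, data):
        # Data misfit normalized by the measured data, so each term is O(1)
        return np.sum(np.abs(residual) ** 2) / np.sum(np.abs(data) ** 2)

    def additive_cost(r_csem, d_csem, r_mt, d_mt, w_csem, w_mt):
        # Traditional form: relative weights w_csem, w_mt must be chosen a priori
        return (w_csem * normalized_misfit(r_csem, d_csem)
                + w_mt * normalized_misfit(r_mt, d_mt))

    def multiplicative_cost(r_csem, d_csem, r_mt, d_mt):
        # Multiplicative form: no weights; the larger misfit dominates the
        # gradient, adaptively keeping both data sets on an equal footing
        return normalized_misfit(r_csem, d_csem) * normalized_misfit(r_mt, d_mt)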

2019 ◽  
Vol 220 (3) ◽  
pp. 1995-2008 ◽  
Author(s):  
C Jordi ◽  
J Doetsch ◽  
T Günther ◽  
C Schmelzbach ◽  
H Maurer ◽  
...  

SUMMARY Structural joint inversion of several data sets on an irregular mesh requires appropriate coupling operators. To date, joint inversion algorithms have primarily been designed for use on regular rectilinear grids and impose structural similarity only in the immediate neighbourhood of a cell. We introduce a novel scheme for calculating cross-gradient operators based on a correlation model that allows the operator size to be defined by imposing physical length scales. We demonstrate that the proposed cross-gradient operators are largely decoupled from the discretization of the modelling domain, which is particularly important for irregular meshes where cell sizes vary. Our structural joint inversion algorithm is applied to a synthetic electrical resistivity tomography and ground-penetrating radar 3-D cross-well experiment aimed at imaging two anomalous bodies and extracting the parameter distributions of the geostatistical background models. For both tasks, joint inversion produced superior results compared with individual inversions of the two data sets. Finally, we applied structural joint inversion to two field data sets recorded over a karstified limestone area. By including geological a priori information in the joint inversion via the correlation-based operators, we obtain P-wave velocity and electrical resistivity tomograms that are in accordance with the expected subsurface geology.
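
For intuition, the conventional cell-based coupling that the paper generalizes can be written in a few lines. The sketch below (an illustration on a regular grid, not the authors' correlation-based operator) computes the standard cross-gradient t = grad(m1) x grad(m2), which vanishes when the two models' gradients are parallel, i.e. structurally similar:

    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        # Standard 2-D cross-gradient; driving t -> 0 forces the two model
        # gradients to be parallel (structural similarity). The paper replaces
        # these nearest-neighbour differences with correlation-based operators
        # that carry a physical length scale.
        g1z, g1x = np.gradient(m1, dz, dx)   # axis 0 = depth z, axis 1 = x
        g2z, g2x = np.gradient(m2, dz, dx)
        return g1x * g2z - g1z * g2x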


2019 ◽  
Vol 133 ◽  
pp. 01009
Author(s):  
Tomasz Danek ◽  
Andrzej Leśniak ◽  
Katarzyna Miernik ◽  
Elżbieta Śledź

Pareto joint inversion for two or more data sets is an attractive and promising tool that eliminates the weighting and scaling of target functions, providing instead a set of acceptable solutions composing a Pareto front. In the authors' earlier study, MARIA (Modular Approach Robust Inversion Algorithm) was created as flexible software built on a particle swarm optimization (PSO) engine to obtain model parameters in the Pareto joint inversion of two geophysical data sets. Two-dimensional (2D) magnetotelluric and gravity data were used for preliminary tests, but the software is ready to handle data from more than two geophysical methods. In this contribution, the authors' magnetometric forward solver was implemented and integrated with MARIA. The gravity and magnetometry forward solver was verified on synthetic models. The tests were performed for different models of a dyke and showed that, even when the starting model is a homogeneous area without an anomaly, it is possible to recover the shape of a small detail of the real model. The results showed that group analysis of the models on the Pareto front gives more information than the single best model. The final stage of interpretation is the analysis of a raster map of the Pareto-front solutions.
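
As a schematic of the Pareto machinery (not code from MARIA), the non-dominated filter below extracts a Pareto front from a population of candidate models, given two misfit vectors such as the magnetotelluric and gravity target functions:

    import numpy as np

    def pareto_front(f1, f2):
        # Indices of non-dominated solutions for two objectives (both minimized):
        # a model is kept unless some other model is at least as good in both
        # misfits and strictly better in one.
        idx = []
        for i in range(len(f1)):
            dominated = np.any((f1 <= f1[i]) & (f2 <= f2[i]) &
                               ((f1 < f1[i]) | (f2 < f2[i])))
            if not dominated:
                idx.append(i)
        return np.array(idx)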


Geophysics ◽  
2008 ◽  
Vol 73 (4) ◽  
pp. F165-F177 ◽  
Author(s):  
A. Abubakar ◽  
T. M. Habashy ◽  
V. L. Druskin ◽  
L. Knizhnerman ◽  
D. Alumbaugh

We present fast and rigorous 2.5D forward and inversion algorithms for deep electromagnetic (EM) applications, including crosswell and controlled-source EM measurements. The forward algorithm is based on a finite-difference approach in which a multifrontal LU decomposition algorithm simulates multisource experiments at nearly the cost of simulating a single-source experiment for each frequency of operation. When the size of the linear system of equations is large, the use of this noniterative solver is impractical; hence, we use the optimal grid technique to limit the number of unknowns in the forward problem. The inversion algorithm employs a regularized Gauss-Newton minimization approach with a multiplicative cost function. By using this multiplicative cost function, we do not need a priori data to determine the so-called regularization parameter in the optimization process, making the algorithm fully automated. The algorithm is equipped with two regularization cost functions that allow us to reconstruct either a smooth or a sharp conductivity image. To increase the robustness of the algorithm, we also constrain the minimization and use a line-search approach to guarantee the reduction of the cost function after each iteration. To demonstrate the pros and cons of the algorithm, we present synthetic and field data inversion results for crosswell and controlled-source EM measurements.
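
The outer loop of such a scheme can be sketched compactly. The fragment below is a generic illustration, not the authors' multifrontal or optimal-grid machinery; J, r, L and cost are hypothetical inputs. It shows one regularized Gauss-Newton update followed by the backtracking line search that guarantees cost reduction:

    import numpy as np

    def gauss_newton_step(J, r, L, beta, m):
        # Solve (J^T J + beta L^T L) dm = -(J^T r + beta L^T L m)
        # J: Jacobian, r: data residual, L: regularization operator
        A = J.T @ J + beta * (L.T @ L)
        g = J.T @ r + beta * (L.T @ (L @ m))
        return np.linalg.solve(A, -g)

    def line_search(cost, m, dm, c0, max_halvings=8):
        # Backtracking: halve the step until the cost actually decreases
        step = 1.0
        for _ in range(max_halvings):
            if cost(m + step * dm) < c0:
                return step
            step *= 0.5
        return 0.0  # reject the update if no reduction is found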


Author(s):  
Andrew Kurzawski ◽  
Ofodike A. Ezekoye

The heat-release rate (HRR) of a burning item is key to understanding the thermal effects of a fire on its surroundings. It is, perhaps, the most important variable used to characterize a burning fuel packet and is defined as the rate of energy released by the fire. HRR is typically determined using gas-measurement calorimetry. In this study, an inversion algorithm is presented for performing calorimetry on fires with unknown HRRs located in a compartment. The algorithm compares predictions of a forward model with observed heat fluxes from synthetically generated data sets to determine the HRR that minimizes a cost function. The effects of tuning a weighting parameter in the cost function and the issues associated with two different forward models of a compartment fire are examined.
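
The abstract does not give the exact functional form of the cost function; a plausible shape, with the tunable weighting parameter made explicit, might look as follows (all names are illustrative):

    import numpy as np

    def hrr_cost(q_pred, q_obs, hrr, lam):
        # Heat-flux misfit between forward-model predictions and observations,
        # plus a roughness penalty on the recovered HRR curve; lam is the
        # weighting parameter whose tuning the study examines.
        misfit = np.sum((q_pred - q_obs) ** 2)
        roughness = np.sum(np.diff(hrr) ** 2)
        return misfit + lam * roughness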


Geophysics ◽  
2011 ◽  
Vol 76 (4) ◽  
pp. F239-F250 ◽  
Author(s):  
Fernando A. Monteiro Santos ◽  
Hesham M. El-Kaliouby

Joint or sequential inversion of direct-current resistivity (DCR) and time-domain electromagnetic (TDEM) data is commonly performed for individual soundings assuming layered earth models. DCR and TDEM have different and complementary sensitivities to resistive and conductive structures, making them suitable methods for joint inversion. Joint inversion of DCR and TDEM data has been used by several authors to reduce the ambiguities of the models calculated from each method separately. A new approach for the joint inversion of these data sets, based on a laterally constrained algorithm, is presented. The method was developed for the interpretation of soundings collected along a line over 1D or 2D geology. The inversion algorithm was tested on two synthetic data sets, as well as on field data from Saudi Arabia. The results show that the algorithm is efficient and stable in producing quasi-2D models from DCR and TDEM data acquired in relatively complex environments.
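
The essence of the lateral constraint is a penalty tying neighbouring 1D models together. A minimal sketch (illustrative, not the authors' implementation) for a profile of stitched soundings:

    import numpy as np

    def lateral_constraint(models, weight):
        # models: (n_soundings, n_layers) array of layer log-resistivities.
        # Penalizing differences between laterally adjacent soundings makes the
        # stitched 1D models vary smoothly along the line, yielding quasi-2D
        # sections from independently parametrized soundings.
        return weight * np.sum(np.diff(models, axis=0) ** 2)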


2020 ◽  
Vol 223 (2) ◽  
pp. 1378-1397
Author(s):  
Rosemary A Renaut ◽  
Jarom D Hogue ◽  
Saeed Vatankhah ◽  
Shuang Liu

SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block (BTTB) structure for each block of columns related to a fixed depth layer of the subsurface. All forward operations with the sensitivity matrix, or its transpose, can then be performed using the 2-D fast Fourier transform. Simulations show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10⁶. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and that for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use Golub–Kahan bidiagonalization, again with projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
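
The computational kernel is the classical circulant embedding of a Toeplitz operator. The 1-D analogue below (a sketch; the paper applies the same idea blockwise in 2-D to the BTTB sensitivity blocks) multiplies an n × n Toeplitz matrix by a vector in O(n log n):

    import numpy as np

    def toeplitz_matvec(c, r, x):
        # Toeplitz matrix given by first column c and first row r (c[0] == r[0]).
        # Embed it in a 2n x 2n circulant matrix, whose action is a pointwise
        # product in the Fourier domain, then keep the first n entries.
        n = len(x)
        col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant first column
        xp = np.concatenate([x, np.zeros(n)])        # zero-padded input vector
        y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
        return y[:n].real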


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3158
Author(s):  
Jian Yang ◽  
Xiaojuan Ban ◽  
Chunxiao Xing

With the rapid development of mobile networks and smart terminals, mobile crowdsourcing has attracted the interest of scholars and industry. In this paper, we propose a new solution to the problem of user selection in mobile crowdsourcing systems. Existing user selection schemes mainly either (1) find a subset of users that maximizes crowdsourcing quality under a given budget constraint, or (2) find a subset of users that minimizes cost while meeting a minimum crowdsourcing quality requirement. However, these solutions fall short of simultaneously maximizing the quality of service of the task and minimizing costs. Inspired by the marginalism principle in economics, we select a new user only when the marginal gain of the newly joined user is higher than the cost of payment plus the marginal cost associated with integration. We model the scheme as a marginalism problem of mobile crowdsourcing user selection (MCUS-marginalism). We rigorously prove the MCUS-marginalism problem to be NP-hard, and propose a greedy random adaptive procedure with annealing randomness (GRASP-AR) to maximize the gain and minimize the cost of the task. The effectiveness and efficiency of our approach are verified by large-scale experimental evaluations on both real-world and synthetic data sets.
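
The marginalism rule itself is simple to state in code. The sketch below shows only the greedy core (the paper's GRASP-AR adds randomized construction and annealing); gain, cost and integration_cost are hypothetical callables standing in for the paper's quality and cost models:

    def select_users(candidates, gain, cost, integration_cost):
        # Greedy marginalism: repeatedly add the best remaining user, but only
        # while the marginal quality gain exceeds payment plus integration cost.
        selected = set()
        while True:
            remaining = candidates - selected
            if not remaining:
                break
            best = max(remaining, key=lambda u: gain(selected, u)
                       - cost[u] - integration_cost(selected, u))
            net = gain(selected, best) - cost[best] - integration_cost(selected, best)
            if net <= 0:
                break
            selected.add(best)
        return selected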


2012 ◽  
Vol 235 ◽  
pp. 107-110
Author(s):  
Ying Ge Wo

This paper discusses the stabilization of a large-scale system, subject to a cost function, by cutting off connections or decreasing the degree of interconnection among its subsystems. Under the assumption that the large system is unstable but its subsystems are all stable, a sufficient condition on the degree of interconnection is presented such that the modified large system is stable. This condition can be expressed in terms of linear matrix inequalities (LMIs). Based on this analysis, an optimal regulation for such controls is obtained that ensures the minimization of the cost function. An illustrative example is given to show the effectiveness of the proposed method.
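
The LMI building block can be checked numerically with an off-the-shelf semidefinite solver. The sketch below (a generic Lyapunov feasibility test with an invented 2 × 2 system matrix, not the paper's interconnection-degree condition) illustrates the kind of LMI involved:

    import cvxpy as cp
    import numpy as np

    A = np.array([[-1.0, 0.8],     # hypothetical closed-loop system matrix
                  [0.9, -1.2]])
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    # Lyapunov LMI: P > 0 and A^T P + P A < 0 iff the system is stable
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print("stable" if problem.status == "optimal" else "LMI infeasible")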


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. R49-R57 ◽  
Author(s):  
J. Germán Rubino ◽  
Danilo Velis

We present a new method that uses prestack seismic data to fully determine thin-bed properties, including thickness, P- and S-wave velocities, and density. The approach requires neither phase information nor normal-moveout (NMO) corrections, and assumes that the prestack seismic response of the thin layer can be isolated using an offset-dependent time window. We obtained the amplitude-versus-angle (AVA) response of the thin bed considering converted P-waves, S-waves, and all the associated multiples. We carried out the estimation of the thin-bed parameters in the frequency (amplitude-spectrum) domain using simulated annealing. In contrast to using zero-offset data, the use of AVA data increases the robustness of this inverse problem under noisy conditions and significantly reduces its inherent nonuniqueness. To further reduce the nonuniqueness, and as a means to incorporate a priori geologic or geophysical information (e.g., well-log data), we imposed appropriate bounding constraints on the parameters of the media lying above and below the thin bed, which need not be known accurately. We tested the method by inverting noisy synthetic gathers corresponding to simple wedge models. In addition, we stochastically estimated the uncertainty of the solutions by inverting different data sets that share the same model parameters but are contaminated with different noise realizations. The results suggest that thin beds can be characterized fully with a moderate to high degree of confidence below tuning, even when using an approximate wavelet spectrum.
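
A bound-constrained simulated annealing loop of the kind described can be sketched generically (illustrative only; cost would compare observed and modelled AVA amplitude spectra, and lower/upper hold the bounding constraints on the thin-bed and bounding-media parameters):

    import numpy as np

    def simulated_annealing(cost, lower, upper, n_iter=5000, t0=1.0, seed=0):
        # Random-walk annealing over bounded parameters (thickness, Vp, Vs, ...);
        # worse trials are accepted with a probability that shrinks as the
        # temperature t cools, allowing escapes from local minima early on.
        rng = np.random.default_rng(seed)
        m = rng.uniform(lower, upper)
        c = cost(m)
        for k in range(n_iter):
            t = t0 * (1.0 - k / n_iter)              # linear cooling schedule
            step = 0.05 * (upper - lower) * rng.standard_normal(m.size)
            trial = np.clip(m + step, lower, upper)
            c_trial = cost(trial)
            if c_trial < c or rng.random() < np.exp(-(c_trial - c) / max(t, 1e-12)):
                m, c = trial, c_trial
        return m, c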


2009 ◽  
Vol 48 (2) ◽  
pp. 317-329 ◽  
Author(s):  
Lance O’Steen ◽  
David Werth

Abstract It is shown that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root-mean-square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). It is found that the optimization can be done with relatively modest computer resources; therefore, operational implementation is possible. The overall number of simulations needed to obtain a specific reduction of the cost function is found to depend strongly on the procedure used to perturb the “child” parameters relative to their “parents” within the evolutionary algorithm. In addition, the choice of meteorological variables that are included in the rms error and their relative weighting are also found to be important factors in the optimization.
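
A minimal version of such a loop (illustrative; the actual study perturbs 23 RAMS parameters and evaluates each child with a full mesoscale run) makes the role of the child-perturbation scale explicit:

    import numpy as np

    def evolve(cost, parent, sigma, n_children=10, n_gen=50, seed=0):
        # Keep the best parameter vector; each generation, spawn children by
        # Gaussian perturbation of the parent. The perturbation scale sigma is
        # the design choice the study finds strongly affects convergence.
        rng = np.random.default_rng(seed)
        best, best_cost = parent, cost(parent)
        for _ in range(n_gen):
            children = best + sigma * rng.standard_normal((n_children, best.size))
            costs = np.array([cost(child) for child in children])
            if costs.min() < best_cost:
                best, best_cost = children[costs.argmin()], costs.min()
        return best, best_cost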

