Inversion of controlled-source electromagnetic reflection responses

Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. F49-F57 ◽  
Author(s):  
Jürg Hunziker ◽  
Jan Thorbecke ◽  
Joeri Brackenhoff ◽  
Evert Slob

Marine controlled-source electromagnetic reflection responses can be retrieved by interferometry. These reflection responses are free of effects related to the water layer and the air above it and do not suffer from uncertainties related to the source position and orientation. Interferometry is a data-driven process requiring proper sampling of the electromagnetic field as well as knowledge of the material parameters at the receiver level, i.e., in the sediment just below the receivers. We have inverted synthetic data sets using either the reflection responses or the original electromagnetic fields, with the goal of extracting the conductivity model of the subsurface. For the inversion, a genetic algorithm and a nonlinear conjugate-gradient algorithm were used. Our results show that, compared with inverting the original electromagnetic fields, an inversion of the reflection responses produces worse estimates of the vertical conductivity but superior estimates of the horizontal conductivity, especially for the reservoir.
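The abstract pairs a global genetic-algorithm search with a local nonlinear conjugate-gradient stage. As a minimal illustration of the latter only, here is a Polak-Ribière NLCG with a backtracking line search on a toy two-parameter misfit; all function names and numbers are invented for illustration and are not the authors' code:

```python
# Toy Polak-Ribiere nonlinear conjugate gradient with backtracking line
# search, standing in for the gradient-based stage of the inversion.

def misfit(m):
    # toy least-squares misfit with minimum at (2.0, -1.0)
    return (m[0] - 2.0) ** 2 + 10.0 * (m[1] + 1.0) ** 2

def grad(m):
    return [2.0 * (m[0] - 2.0), 20.0 * (m[1] + 1.0)]

def nlcg(m, steps=50):
    g = grad(m)
    d = [-gi for gi in g]                       # initial steepest-descent direction
    for _ in range(steps):
        # backtracking line search: halve the step until the misfit decreases
        a, f0 = 1.0, misfit(m)
        while misfit([mi + a * di for mi, di in zip(m, d)]) >= f0 and a > 1e-12:
            a *= 0.5
        m = [mi + a * di for mi, di in zip(m, d)]
        g_new = grad(m)
        # Polak-Ribiere coefficient, restarted (clipped at 0) for robustness
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g))
                   / max(sum(go * go for go in g), 1e-30))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return m

m_est = nlcg([0.0, 0.0])
```

On a real CSEM problem the misfit and gradient come from forward modelling, not a closed-form quadratic; the update logic is the same.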

Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E293-E299
Author(s):  
Jorlivan L. Correa ◽  
Paulo T. L. Menezes

Synthetic data provided by geoelectric earth models are a powerful tool to evaluate a priori the effectiveness of a controlled-source electromagnetic (CSEM) workflow. Marlim R3D (MR3D) is an open-source, complex, and realistic geoelectric model for CSEM simulations of the postsalt turbiditic reservoirs at the Brazilian offshore margin. We have developed a 3D CSEM finite-difference time-domain forward study to generate the full-azimuth CSEM data set for the MR3D earth model. To that end, we designed a full-azimuth survey with 45 towlines striking the north–south and east–west directions over a total of 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To correctly represent the thin, disconnected, and complex geometries of the studied reservoirs, we built a finely discretized mesh of [Formula: see text] cells, leading to a total of approximately 90 million cells. We computed the six electromagnetic field components (Ex, Ey, Ez, Hx, Hy, and Hz) at six frequencies in the range of 0.125–1.25 Hz. To mimic noise in real CSEM data, we added multiplicative noise with a 1% standard deviation to the data. Both CSEM data sets (noise free and noise added), with inline and broadside geometries, are distributed for research or commercial use, under a Creative Commons license, on the Zenodo platform.
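The noise model described above (multiplicative, 1% relative standard deviation) amounts to scaling each sample by 1 + N(0, 0.01). A minimal sketch; the function name and toy amplitudes are invented for illustration:

```python
import random

def add_multiplicative_noise(data, rel_std=0.01, seed=42):
    """Perturb each sample by multiplicative Gaussian noise:
    d_noisy = d * (1 + N(0, rel_std)), i.e. a 1% relative standard
    deviation as used for the MR3D data set."""
    rng = random.Random(seed)
    return [d * (1.0 + rng.gauss(0.0, rel_std)) for d in data]

clean = [1e-12, 5e-13, 2e-13]   # toy electric-field amplitudes (V/m), illustrative
noisy = add_multiplicative_noise(clean)
```

Multiplicative (rather than additive) noise keeps the relative error constant across the many orders of magnitude spanned by CSEM amplitudes.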


Author(s):  
Stefan Muench ◽  
Mike Roellig ◽  
Daniel Balzani

Abstract. This paper proposes a new method for in vivo and almost real-time identification of biomechanical properties of the human cornea based on non-contact tonometer data. A further goal is to demonstrate the method's functionality based on synthetic data serving as a reference. For this purpose, a finite element model of the human eye is constructed to synthetically generate full-field displacements from different data sets with keratoconus-like degradations. Then, a new approach based on the equilibrium gap method combined with a mechanical morphing approach is proposed and used to identify the material parameters from virtual test data sets. In a further step, random absolute noise is added to the virtual test data to investigate the sensitivity of the new approach to noise. As a result, the proposed method achieves relevant accuracy in identifying material parameters from full-field displacements. At the same time, the method works almost in real time (on the order of a few minutes on a regular workstation) and is thus much faster than inverse problems solved by typical forward approaches. On the other hand, the method shows a noticeable sensitivity to rather small noise amplitudes, rendering it not accurate enough for the precise identification of individual parameter values. However, the analyses show that the accuracy is sufficient for identifying property ranges that might be related to diseased tissues. The proposed approach therefore appears promising for diagnostic purposes.
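As a toy stand-in for the noise-sensitivity experiment (not the equilibrium gap or morphing method itself), the sketch below identifies a single stiffness-like parameter from synthetic "full-field" data by least squares and perturbs the data with absolute Gaussian noise; all names and numbers are hypothetical:

```python
import random

def identify_stiffness(forces, displacements):
    # least-squares fit of F = k * u  =>  k = sum(F*u) / sum(u*u)
    return (sum(f * u for f, u in zip(forces, displacements))
            / sum(u * u for u in displacements))

rng = random.Random(0)
k_true = 50.0
forces = [float(f) for f in range(1, 21)]        # toy load cases
clean_u = [f / k_true for f in forces]           # noise-free "measurements"

# identification from clean data, then from data with absolute noise added
k_clean = identify_stiffness(forces, clean_u)
u_noisy = [u + rng.gauss(0.0, 1e-3) for u in clean_u]
k_noisy = identify_stiffness(forces, u_noisy)
```

Even in this one-parameter caricature, small absolute noise on the displacement field shifts the identified parameter, which mirrors the paper's observation that individual parameter values are noise-sensitive while property ranges remain recoverable.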


2019 ◽  
Vol 220 (2) ◽  
pp. 1066-1077 ◽  
Author(s):  
Mohit Ayani ◽  
Lucy MacGregor ◽  
Subhashis Mallick

SUMMARY. We developed a multi-objective optimization method for inverting marine controlled-source electromagnetic data using a fast non-dominated sorting genetic algorithm. Deterministic methods for inverting electromagnetic data rely on selecting weighting parameters to balance the data misfit with the model roughness and result in a single solution, which provides no means to assess the non-uniqueness associated with the inversion. Here, we propose a robust stochastic global search method that treats the objective as a two-component vector and simultaneously minimizes both components: data misfit and model roughness. By providing an estimate of the entire set of Pareto-optimal solutions, the method allows a better assessment of non-uniqueness than deterministic methods. Since the computational expense of the method increases with the number of objectives and model parameters, we parallelized our algorithm to speed up the forward-modelling calculations. Applying our inversion to noisy synthetic data sets generated from horizontally stratified earth models, under both isotropic and anisotropic assumptions and for different measurement configurations, we demonstrate the accuracy of our method. By comparing the results of our inversion with a regularized genetic algorithm, we also demonstrate that casting this problem as a multi-objective optimization yields a better assessment of uncertainty than a scalar objective optimization method.
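The core of a fast non-dominated sorting genetic algorithm is ranking candidate models into Pareto fronts over the (data misfit, model roughness) objective vector. A minimal sketch of that ranking step, using a naive quadratic-cost version rather than the authors' implementation; the candidate values are invented:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts; front 0
    is the Pareto-optimal set reported to the user."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# toy candidate models: (data misfit, model roughness)
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
fronts = non_dominated_sort(pts)
```

Front 0 here contains the three mutually non-dominated trade-offs between misfit and roughness; a deterministic, weighted-sum inversion would collapse this set to a single point.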


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike, and potentially its stability. A comprehensive numerical simulation is computationally too expensive to be used for the near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set comprising features that can be observed from the dike surface, with the calculated factor of safety (FoS) as the target variable. The data before 2018 are split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data in the test set (before 2018). However, the trained model performs worse on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can determine dike stability for conditions similar to the training data and could be used to identify vulnerable locations in a dike network for further examination.
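The surrogate idea (bagged trees predicting FoS from surface-observable features) can be caricatured with a tiny bootstrap-aggregated ensemble of one-split regression trees; the feature, target relationship, and hyperparameters below are invented and far simpler than the paper's RF:

```python
import random

def fit_stump(xs, ys):
    """Fit a one-split regression tree: pick the threshold minimising
    squared error and predict the mean of each side."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def fit_forest(xs, ys, n_trees=25, seed=1):
    """Bagging: fit each stump on a bootstrap resample, average predictions."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(t(x) for t in trees) / len(trees)

# toy surrogate: FoS decreases with a surface "crack indicator" feature
xs = [i / 10.0 for i in range(20)]
fos = [1.8 - 0.3 * x for x in xs]
predict = fit_forest(xs, fos)
```

As in the paper, such a surrogate interpolates well inside the training distribution but has no mechanism to extrapolate to unseen regimes such as post-2018 cracking.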


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly.
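The region-adaptive idea (choose a compressor per data region based on an importance metric) might be sketched as follows, using variance as a stand-in importance metric and two zlib levels as the "multiple compression methods"; the threshold, encoding, and toy data are illustrative only:

```python
import zlib

def importance(block):
    """Toy importance metric: variance of the block's values."""
    m = sum(block) / len(block)
    return sum((v - m) ** 2 for v in block) / len(block)

def compress_block(block, threshold=0.5):
    """Compress high-importance blocks lightly (fast, level 1) and
    low-importance blocks aggressively (slow, level 9), mimicking the
    idea of assigning a different lossless compressor per region."""
    raw = b"".join(int(v * 1000).to_bytes(4, "big", signed=True) for v in block)
    level = 1 if importance(block) > threshold else 9
    return level, zlib.compress(raw, level)

turbulent = [((i * 7919) % 101) / 10.0 for i in range(256)]   # high variance
quiet = [1.0] * 256                                           # low variance

lvl_t, data_t = compress_block(turbulent)
lvl_q, data_q = compress_block(quiet)
```

Spending less compression effort on regions of interest keeps them quickly available to the in-transit visualization pipeline, while unimportant regions trade CPU time for smaller transfers.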


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ruizhu Huang ◽  
Charlotte Soneson ◽  
Pierre-Luc Germain ◽  
Thomas S.B. Schmidt ◽  
Christian Von Mering ◽  
...  

Abstract. treeclimbR is a method for analyzing hierarchical trees of entities, such as phylogenies or cell types, at different resolutions. It proposes multiple candidates that capture the latent signal and, in a data-driven way, pinpoints branches or leaves that contain features of interest. It outperforms currently available methods on synthetic data, and we highlight the approach on various applications, including microbiome and microRNA surveys as well as single-cell cytometry and RNA-seq datasets. With the emergence of various multi-resolution genomic datasets, treeclimbR provides a thorough inspection of entities across resolutions and gives additional flexibility to uncover biological associations.
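The resolution-picking idea (report an internal node when its leaves carry a coherent signal, otherwise descend toward the leaves) can be caricatured on a toy tree; the tree, scores, and same-direction rule below are invented and much simpler than treeclimbR's actual candidate-selection procedure:

```python
# Toy tree: internal nodes map to children; leaves carry a signed change score.
tree = {"root": ["A", "B"], "A": ["a1", "a2"], "B": ["b1", "b2"]}
leaf_change = {"a1": 1.2, "a2": 0.8, "b1": -0.5, "b2": 0.9}

def leaves(node):
    """Collect all leaves under a node."""
    if node not in tree:
        return [node]
    return [leaf for child in tree[node] for leaf in leaves(child)]

def candidates(node):
    """Return the coarsest nodes whose leaves all change in the same
    direction; descend one level whenever the signal is mixed."""
    signs = {leaf_change[l] > 0 for l in leaves(node)}
    if len(signs) == 1:                  # coherent signal: keep the aggregate
        return [node]
    return [c for child in tree[node] for c in candidates(child)]

picked = candidates("root")
```

Here branch A is reported as a whole (both leaves increase), while branch B is split back into its leaves because they disagree, illustrating how one analysis can mix resolutions across the tree.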


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 540
Author(s):  
Soodabeh Asadi ◽  
Janez Povh

This article uses the projected gradient (PG) method for a non-negative matrix factorization (NMF) problem in which one or both matrix factors must have orthonormal columns or rows. We penalize the orthonormality constraints and apply the PG method via a block coordinate descent approach. This means that at any given time one matrix factor is fixed and the other is updated by moving along the steepest descent direction computed from the penalized objective function and projecting onto the space of non-negative matrices. Our method is tested on two sets of synthetic data for various values of the penalty parameters. The performance is compared with the well-known multiplicative update (MU) method of Ding (2006) and with a modified globally convergent variant of the MU algorithm recently proposed by Mirzal (2014). We provide extensive numerical results coupled with appropriate visualizations, which demonstrate that our method is very competitive and usually outperforms the other two methods.
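A minimal sketch of the penalized block-coordinate projected-gradient scheme described above, on toy data: one factor is held fixed while the other takes a gradient step on the penalized objective and is projected onto the non-negative orthant. Dimensions, step size, and penalty weight are invented, and the authors' method includes refinements this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data built from a non-negative ground truth
W_true = np.abs(rng.normal(size=(8, 2)))
H_true = np.abs(rng.normal(size=(2, 6)))
X = W_true @ H_true

def penalized_objective(W, H, lam):
    # ||X - WH||_F^2 plus a penalty pushing H's rows toward orthonormality
    return (np.linalg.norm(X - W @ H) ** 2
            + lam * np.linalg.norm(H @ H.T - np.eye(H.shape[0])) ** 2)

def pg_step(W, H, lam, step=1e-3):
    """One block-coordinate sweep: gradient step in W, project onto the
    non-negative orthant, then the same for H (whose gradient includes
    the orthonormality-penalty term)."""
    gW = 2 * (W @ H - X) @ H.T
    W = np.maximum(W - step * gW, 0.0)
    gH = 2 * W.T @ (W @ H - X) + 4 * lam * (H @ H.T - np.eye(H.shape[0])) @ H
    H = np.maximum(H - step * gH, 0.0)
    return W, H

W = np.abs(rng.normal(size=(8, 2)))
H = np.abs(rng.normal(size=(2, 6)))
lam = 0.1
f0 = penalized_objective(W, H, lam)
for _ in range(200):
    W, H = pg_step(W, H, lam)
f1 = penalized_objective(W, H, lam)
```

The projection `np.maximum(. , 0.0)` is what keeps both factors non-negative throughout, while the penalty term replaces the hard orthonormality constraint.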


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying Multilinear Engine executable (ME-2) contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on the characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
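The classical bootstrap (BS) component can be illustrated on a simple statistic; EPA PMF applies the same resample-and-refit idea to entire factor-analytic fits, which the toy below (invented data, mean as the statistic) does not attempt:

```python
import random

def bootstrap_ci(samples, stat, n_boot=1000, seed=7):
    """Classical bootstrap: resample the data with replacement, recompute
    the statistic each time, and report the 2.5th/97.5th percentiles of
    the resampled values as an uncertainty interval."""
    rng = random.Random(seed)
    vals = sorted(
        stat([samples[rng.randrange(len(samples))] for _ in samples])
        for _ in range(n_boot)
    )
    return vals[int(0.025 * n_boot)], vals[int(0.975 * n_boot)]

mean = lambda xs: sum(xs) / len(xs)
data = [2.1, 2.4, 1.9, 2.2, 2.6, 2.0, 2.3, 2.5]   # toy measurements
lo, hi = bootstrap_ci(data, mean)
```

BS alone captures random-error uncertainty; DISP-style displacement is needed on top of it to probe rotational ambiguity, which resampling cannot reach.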

