Marlim R3D phase 3: The marine magnetotelluric regional model and associated data set

2021
Vol 40 (9)
pp. 686-692
Author(s):
Jorlivan L. Correa
Paulo T. L. Menezes

Synthetic data provided by earth models are essential to investigate several geologic problems. Marlim R3D (MR3D) is an open-source realistic earth modeling project for electromagnetic simulations of the postsalt reservoirs of the Brazilian offshore margin. In phase 3, we have conducted a 3D marine magnetotelluric (MMT) study with the finite-difference method to generate the synthetic magnetotelluric (MT) data set for the MR3D earth model. To that end, we upscaled the original controlled-source electromagnetic model to preserve all local-scale features, such as the thin-layer turbidite reservoirs, and to include several regional geologic features, such as the coastline, land topography, basement rocks representing the continental crust, and mantle rocks. Then, we simulated an MMT survey with 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To accurately represent the MMT model, with a 329 × 329 × 200 km volume, we produced a mesh of 161 × 136 × 242 cells (approximately 5.3 million cells). We computed the full MT impedance tensor and the tipper at 25 periods in the range of 1–10,000 s. The data set, the model, and companion material are freely distributed for research or commercial use under the Creative Commons License on the Zenodo platform.
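For readers working with the distributed impedances, the standard conversion from an off-diagonal impedance-tensor element to apparent resistivity and phase can be sketched as follows (a minimal numpy sketch; the uniform half-space and its 100 ohm-m resistivity are made-up check values, not part of the MR3D model):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def apparent_resistivity(Z, period):
    """Apparent resistivity (ohm-m) and phase (deg) from an off-diagonal
    impedance-tensor element Z (SI units) at the given period (s)."""
    omega = 2.0 * np.pi / period
    rho_a = np.abs(Z) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase

# 25 logarithmically spaced periods spanning 1-10,000 s, as in the survey
periods = np.logspace(0, 4, 25)

# Self-consistency check: over a uniform half-space of resistivity rho,
# |Z| = sqrt(omega * mu0 * rho) and the phase is 45 degrees.
rho_true = 100.0  # hypothetical half-space resistivity (ohm-m)
omega = 2.0 * np.pi / periods
Z_halfspace = np.sqrt(omega * MU0 * rho_true) * np.exp(1j * np.pi / 4)
rho_a, phase = apparent_resistivity(Z_halfspace, periods)
```

The half-space check recovers the input resistivity at every period, which is a quick sanity test before applying the conversion to the released data.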

Geophysics
2019
Vol 84 (5)
pp. E293-E299
Author(s):
Jorlivan L. Correa
Paulo T. L. Menezes

Synthetic data provided by geoelectric earth models are a powerful tool to evaluate, a priori, the effectiveness of a controlled-source electromagnetic (CSEM) workflow. Marlim R3D (MR3D) is an open-source, complex, and realistic geoelectric model for CSEM simulations of the postsalt turbiditic reservoirs at the Brazilian offshore margin. We have developed a 3D CSEM finite-difference time-domain forward study to generate the full-azimuth CSEM data set for the MR3D earth model. To that end, we designed a full-azimuth survey with 45 towlines striking in the north–south and east–west directions over a total of 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To correctly represent the thin, disconnected, and complex geometries of the studied reservoirs, we built a finely discretized mesh of [Formula: see text] cells, leading to a large mesh with a total of approximately 90 million cells. We computed the six electromagnetic field components (Ex, Ey, Ez, Hx, Hy, and Hz) at six frequencies in the range of 0.125–1.25 Hz. To mimic noise in real CSEM data, we added multiplicative noise with a 1% standard deviation to the data. Both CSEM data sets (noise free and noise added), with inline and broadside geometries, are distributed for research or commercial use, under the Creative Commons License, on the Zenodo platform.
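The multiplicative-noise step can be sketched in a few lines (a hedged numpy illustration; the decaying Ex curve and the random seed are invented for the example, so the authors' actual noise realization will differ):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def add_multiplicative_noise(field, rel_std=0.01):
    """Scale each sample by (1 + e), e ~ N(0, rel_std): multiplicative
    noise with the stated 1% standard deviation."""
    return field * (1.0 + rng.normal(0.0, rel_std, size=field.shape))

# Hypothetical noise-free inline Ex amplitudes (V/m) decaying with offset
ex_clean = 1e-12 * np.exp(-np.linspace(0.0, 5.0, 200))
ex_noisy = add_multiplicative_noise(ex_clean)
```

Because the perturbation is relative, the noise floor scales with the signal, which preserves the dynamic range of CSEM amplitudes across offsets.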


Geophysics
2012
Vol 77 (5)
pp. WC81-WC93
Author(s):
Michal Malinowski
Ernst Schetselaar
Donald J. White

We applied seismic modeling to a detailed 3D geologic model of the Flin Flon mining camp (Canada) to address imaging and interpretation issues related to a [Formula: see text] 3D survey acquired in the camp and described in a complementary paper (part 1). A 3D geologic volumetric model of the camp was created from a compilation of geologic data constraints (drillholes, surface geologic mapping, and interpretation of 2D seismic profiles) using 3D surface and grid geostatistical modeling techniques. The 3D modeling methodology was based on a hierarchical approach to account for the heterogeneous spatial distribution of geologic constraints. Elastic parameters were assigned within the model based on core sample measurements and correlation with the different lithologies. The phase-screen algorithm used for seismic modeling was validated against analytic and finite-difference solutions to ensure that it provided accurate amplitude-variation-with-offset behavior for dipping strata. Synthetic data were generated to form a zero-offset (stack) volume and also a complete prestack data set using the geometry of the real 3D survey. We found that the ability to detect a clear signature of the volcanogenic massive sulfide (VMS) ore deposits depends on the mineralization type (pyrite- versus pyrrhotite-rich ore), especially when ore-host rock interaction is considered. In the presence of an increasing fraction of the host rhyolite rock within the model volume, the response from the lower-impedance pyrrhotite ore is masked by that of the rhyolite. Migration tests showed that poststack migration effectively enhances noisy 3D DMO data and provides results comparable to those of more computationally expensive prestack time migration. Amplitude anomalies identified in the original 3D data, which were not predicted by our modeling, could represent potential exploration targets in an undeveloped part of the camp, assuming that our a priori earth model is sufficiently accurate.
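The detectability argument above ultimately rests on impedance contrasts, which reduce to the normal-incidence reflection coefficient. A minimal sketch, with purely hypothetical rock properties rather than the camp's measured core values:

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence P-wave reflection coefficient from density (kg/m^3)
    and P-velocity (m/s) on either side of an interface."""
    z1, z2 = rho1 * v1, rho2 * v2  # acoustic impedances
    return (z2 - z1) / (z2 + z1)

# Illustrative (hypothetical) properties, not the measured Flin Flon values:
host = (2700.0, 5500.0)            # felsic host rock
pyrite_ore = (4500.0, 7000.0)      # dense, fast pyrite-rich ore
pyrrhotite_ore = (4300.0, 4700.0)  # dense but slower pyrrhotite-rich ore

r_pyrite = reflection_coefficient(*host, *pyrite_ore)
r_pyrrhotite = reflection_coefficient(*host, *pyrrhotite_ore)
```

With numbers of this kind, the pyrite-rich ore produces a much stronger reflection than the pyrrhotite-rich ore, consistent with the masking behavior described above.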


Geophysics
1996
Vol 61 (2)
pp. 484-495
Author(s):
James L. Simmons
Milo M. Backus

Stacked seismic data are modeled as a superposition of simple-interface and thin-layer reflections. This parameterization permits a parsimonious, blocky model of the impedance. The method is an alternative to the classical least-mean-squared-error approach and is similar in spirit to minimum-entropy deconvolution and sparse-spike inversion, although quite different, and simpler, in implementation. A specified number of events on a seismic trace are modeled (inverted) independently. The selected set of basis functions used to represent the data includes a simple interface and a suite of high- and low-impedance layers covering a range of layer thicknesses. The simple-interface basis function is the seismic wavelet, which is presumed to be known. Each event is classified using a normalized zero-lag crosscorrelation of the basis functions with the seismic trace. Modeled events are prevented from overlapping, thereby ensuring a sparse earth model. Real-data results show that a portion of a shallow-marine data set can be well modeled in the context of a sparse earth model. A maximum of 30 simple-interface and thin-layer reflections (per trace) models 65 stacked traces over the time range of 0.8–1.9 s. We use a time- and space-invariant, statistically derived, autoregressive seismic wavelet estimate. Wavelet polarity is chosen such that the inversion correctly models the fluid-anomaly signals as low-impedance layers. For wavelet A, we make the common assumption of white reflectivity and achieve a data misfit that is 7.8 dB down. For wavelet B, we assume a blue reflectivity that increases at 3 dB/octave with frequency and achieve an improved fit to the data. Wavelet B also produces a more accurate estimate of the layer thickness of a known gas reservoir (10–12 ms average thickness) than does wavelet A (15–17 ms average thickness). Our results are competitive with other approaches to impedance estimation and are obtained in a much simpler fashion.
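The event-classification step can be illustrated with a toy basis set (the Ricker-like wavelet and its derivative as a thin-layer proxy below are assumptions made for the sketch, not the paper's statistically derived autoregressive wavelet):

```python
import numpy as np

def normalized_zero_lag_xcorr(basis, segment):
    """Normalized zero-lag crosscorrelation between a basis function and a
    trace segment of the same length; 1.0 means a perfect shape match."""
    num = np.dot(basis, segment)
    den = np.linalg.norm(basis) * np.linalg.norm(segment)
    return num / den

def classify_event(segment, basis_set):
    """Pick the basis function (simple interface or thin layer) whose
    shape best matches the segment."""
    scores = [normalized_zero_lag_xcorr(b, segment) for b in basis_set]
    return int(np.argmax(scores)), max(scores)

# Toy basis set: a Ricker-like "wavelet" for the simple interface and its
# time derivative as a stand-in for a thin-layer response
t = np.linspace(-1.0, 1.0, 51)
wavelet = (1 - 2 * (np.pi * t) ** 2) * np.exp(-(np.pi * t) ** 2)
thin_layer = np.gradient(wavelet, t)

segment = 0.7 * wavelet  # the event is a scaled simple-interface reflection
idx, score = classify_event(segment, [wavelet, thin_layer])
```

Because the crosscorrelation is normalized, amplitude scaling does not affect the classification, only the shape does.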


Author(s):
Raul E. Avelar
Karen Dixon
Boniphace Kutela
Sam Klump
Beth Wemple
...

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust the SPFs in the HSM for use in the intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before the calibrated SPFs are used. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years since the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the results of calibrating multiple intersection SPFs against a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
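Two of the quantities discussed above, the HSM-style calibration factor and the mean absolute deviation, can be sketched as follows (the crash counts and SPF predictions are invented; the HSM's site-selection and weighting details are omitted):

```python
def calibration_factor(observed, predicted):
    """HSM-style calibration factor: ratio of total observed crashes to
    total crashes predicted by the uncalibrated SPF at the same sites."""
    return sum(observed) / sum(predicted)

def mean_absolute_deviation(observed, predicted, factor):
    """Mean absolute deviation between observed counts and calibrated
    predictions, one of the GOF metrics discussed above."""
    n = len(observed)
    return sum(abs(o - factor * p) for o, p in zip(observed, predicted)) / n

# Hypothetical intersection crash counts (observed) and SPF predictions
obs = [3, 1, 4, 0, 2, 5, 1, 2]
pred = [2.5, 1.2, 3.1, 0.8, 1.9, 3.6, 1.4, 2.0]

C = calibration_factor(obs, pred)
mad = mean_absolute_deviation(obs, pred, C)
```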


Water
2021
Vol 13 (1)
pp. 107
Author(s):
Elahe Jamalinia
Faraz S. Tehrani
Susan C. Steele-Dunne
Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike, and potentially its stability. A comprehensive numerical simulation is computationally too expensive for near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from the dike surface, with the calculated factor of safety (FoS) as the target variable. The data before 2018 are split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for the test set (before 2018). However, the trained model performs less well on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, and it could be used to identify vulnerable locations in a dike network for further examination.
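A minimal version of such a surrogate can be sketched with scikit-learn (the features, the factor-of-safety relation, and the chronological split below are synthetic stand-ins invented for the sketch, not the study's coupled-simulation data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the dike data set: three surface-observable
# features and a made-up factor-of-safety (FoS) target with small noise.
n = 1000
X = rng.uniform(0.0, 1.0, size=(n, 3))
fos = 1.5 - 0.4 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=n)

# Chronological split, mimicking training only on earlier (pre-2018) data
X_train, X_test = X[:800], X[800:]
y_train, y_test = fos[:800], fos[800:]

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)
r2 = surrogate.score(X_test, y_test)  # R^2 on the held-out data
```

As in the study, a surrogate trained this way can only be trusted for conditions resembling the training data; once the underlying process changes (e.g. surface cracking), the held-out score degrades.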


Geophysics
2006
Vol 71 (5)
pp. U67-U76
Author(s):
Robert J. Ferguson

The possibility of improving the regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, albeit in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates the application to real data. The data are highly irregularly sampled along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
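The weighted, damped least-squares step can be sketched generically (a Tikhonov-damped solve on a toy irregular-sampling problem; the polynomial forward operator is an assumption of the sketch, and the full Hessian is formed here, whereas the paper's approximation keeps only a few of its diagonals):

```python
import numpy as np

def damped_least_squares(G, d, W=None, mu=0.1):
    """Weighted, damped least-squares solution of G m = d:
    minimize ||W(Gm - d)||^2 + mu^2 ||m||^2.
    The full Hessian G^T W^T W G is formed explicitly here."""
    if W is None:
        W = np.eye(len(d))
    A = W @ G
    b = W @ d
    hessian = A.T @ A + mu**2 * np.eye(G.shape[1])
    return np.linalg.solve(hessian, A.T @ b)

# Toy problem: recover a smooth signal from noisy, unevenly spaced samples
# (a stand-in for the highly irregular recording array).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 40))
G = np.vander(x, 6, increasing=True)  # polynomial forward operator
m_true = np.array([0.0, 1.0, -2.0, 0.5, 0.0, 0.0])
d = G @ m_true + 0.01 * rng.normal(size=40)

m_est = damped_least_squares(G, d, mu=0.01)
```

The damping term `mu` controls the trade-off between fitting the irregular data and keeping the model well behaved, which is what makes large extrapolation distances tractable.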


2014
Vol 7 (3)
pp. 781-797
Author(s):
P. Paatero
S. Eberly
S. G. Brown
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear-engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on the characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including the interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
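The classical bootstrap (BS) idea can be sketched generically (a toy statistic on invented data, not an actual PMF run; in EPA PMF the resampled objects are blocks of observations and the statistic is a factor element):

```python
import numpy as np

def bootstrap_std(data, statistic, n_boot=2000, seed=0):
    """Classical bootstrap (BS): resample with replacement and report the
    spread of the statistic across resamples as its uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([statistic(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

# Toy stand-in: uncertainty of the mean of a noisy sample. Theory says the
# standard error of the mean is sigma / sqrt(n), here about 2/10 = 0.2.
rng = np.random.default_rng(123)
sample = rng.normal(10.0, 2.0, size=100)
se_boot = bootstrap_std(sample, np.mean)
```

BS captures uncertainty due to random errors; DISP-style displacement probes rotational ambiguity instead, which is why the two approaches complement each other.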


Geophysics
2006
Vol 71 (5)
pp. C81-C92
Author(s):
Helene Hafslund Veire
Hilde Grude Borgos
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. We quantify the uncertainty in these estimates so that information about pressure- and saturation-related changes can be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF), providing estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for the estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking the reservoir variables and the time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through a sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
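The Bayesian building block, a Gaussian prior updated through a linearized forward operator, can be sketched as follows (the 2 × 2 sensitivities and covariances are invented numbers standing in for the paper's rock-physics and likelihood models):

```python
import numpy as np

def gaussian_posterior(m0, C0, G, d, Cd):
    """Posterior mean and covariance for a linear Gaussian model
    d = G m + e: prior N(m0, C0), noise N(0, Cd)."""
    K = C0 @ G.T @ np.linalg.inv(G @ C0 @ G.T + Cd)  # Kalman-style gain
    m_post = m0 + K @ (d - G @ m0)
    C_post = (np.eye(len(m0)) - K @ G) @ C0
    return m_post, C_post

# Hypothetical 2-parameter state: [pressure change, saturation change]
m0 = np.zeros(2)              # prior mean (no change)
C0 = np.diag([1.0, 1.0])      # prior covariance
G = np.array([[0.8, 0.3],     # made-up linearized AVO sensitivities
              [0.2, 0.9]])
Cd = 0.05 * np.eye(2)         # data covariance
d = G @ np.array([0.5, -0.3]) # noise-free synthetic observation

m_post, C_post = gaussian_posterior(m0, C0, G, d, Cd)
```

The posterior covariance shows directly which combinations of pressure and saturation change remain poorly constrained, which is the kind of bottleneck the sensitivity analysis above is after.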


2019
Vol 217 (3)
pp. 1727-1741
Author(s):
D W Vasco
Seiji Nakagawa
Petr Petrov
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers and by velocity model errors. The waveform-based method outperforms one based on the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing methods.
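The shifted, time-reversed traveltime idea can be reproduced in a 1D toy setting (constant velocity and an invented station geometry; the real method uses full-waveform simulations and phase picking to build the traveltime fields):

```python
import numpy as np

# 1D toy version of the grid search: uniform velocity, known stations,
# and synthetic arrival times for an event at x_true.
v = 3.0                                       # km/s (toy assumption)
stations = np.array([0.0, 4.0, 7.0, 12.0, 20.0])
grid = np.linspace(0.0, 20.0, 401)
x_true, t0_true = 9.5, 2.0

# Traveltime field from each station to every grid point (by reciprocity,
# source-at-station simulations give station-to-grid traveltimes).
tt = np.abs(grid[None, :] - stations[:, None]) / v
arrivals = t0_true + np.abs(x_true - stations) / v

# Shifted, time-reversed fields: arrival time minus traveltime. At the
# true source location every row equals the origin time, so the spread
# (dispersion across stations) is minimized there.
origin_estimates = arrivals[:, None] - tt
spread = origin_estimates.std(axis=0)
best = np.argmin(spread)
x_est = grid[best]
t0_est = np.median(origin_estimates[:, best])
```

With noisy picks, replacing the standard deviation by a robust dispersion measure (e.g. the average deviation about the median) preserves the minimum at the source even with outliers, as reported above.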


2010
Vol 14 (3)
pp. 545-556
Author(s):
J. Rings
J. A. Huisman
H. Vereecken

Abstract. Coupled hydrogeophysical methods infer hydrological and petrophysical parameters directly from geophysical measurements. Widely used methods, however, do not explicitly account for uncertainty in the parameter estimates. We therefore apply a sequential Bayesian framework that provides updates of the state, the parameters, and their uncertainty whenever measurements become available. We have coupled a hydrological and an electrical resistivity tomography (ERT) forward code in a particle-filtering framework. First, we analyze a synthetic data set of lysimeter infiltration monitored with ERT. In a second step, we apply the approach to field data measured during an infiltration event on a full-scale dike model. For the synthetic data, the water content distribution and the hydraulic conductivity are accurately estimated after a few time steps. For the field data, hydraulic parameters are successfully estimated from water content measurements made with spatial time-domain reflectometry and ERT, and the development of their posterior distributions is shown.
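One particle-filter update, reweighting by the likelihood of a new measurement and then resampling, can be sketched as follows (the linear forward model `h` and all numbers are invented for the sketch; the study couples a hydrological model with an ERT forward code instead):

```python
import numpy as np

def particle_filter_step(particles, weights, observation, h, obs_std, rng):
    """One sequential Bayesian update: reweight particles by the Gaussian
    likelihood of the new measurement, then resample to avoid degeneracy."""
    likelihood = np.exp(-0.5 * ((observation - h(particles)) / obs_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy problem: estimate a hydraulic-conductivity-like parameter k from a
# noisy observation of h(k) = 2k (a stand-in for the forward model).
rng = np.random.default_rng(7)
particles = rng.uniform(0.0, 10.0, size=5000)  # prior ensemble
weights = np.full(5000, 1.0 / 5000)
h = lambda k: 2.0 * k
k_true = 3.0
obs = h(k_true)  # noise-free observation for the sketch

particles, weights = particle_filter_step(particles, weights, obs, h, 0.5, rng)
k_est = particles.mean()
```

Repeating this step as each new measurement arrives is what drives the posterior distributions toward the true parameters over a few time steps, as described above.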

