Localizing microseismic events on field data using a U-Net based convolutional neural network trained on synthetic data

Geophysics ◽  
2021 ◽  
pp. 1-47
Author(s):  
N. A. Vinard ◽  
G. G. Drijkoningen ◽  
D. J. Verschuur

Hydraulic fracturing plays an important role in the extraction of resources from unconventional reservoirs. The microseismic activity arising during hydraulic fracturing operations needs to be monitored both to improve productivity and to make decisions about mitigation measures. Recently, deep learning methods have been investigated to localize earthquakes given field-data waveforms as input. For optimal results, these methods require large field data sets that cover the entire region of interest. In practice, such data sets are often scarce. To overcome this shortcoming, we propose initially to use a (large) synthetic data set with full waveforms to train a U-Net that reconstructs the source location as a 3D Gaussian distribution. As the field data set for our study, we use data recorded during hydraulic fracturing operations in Texas. Synthetic waveforms were modelled using a velocity model from the site that was also used for a conventional diffraction-stacking (DS) approach. To increase the U-Net’s ability to localize seismic events, we augmented the synthetic data with different techniques, including the addition of field noise. We select the best-performing U-Net using 22 events that were previously identified as confidently localized by DS and apply that U-Net to all 1245 events. We compare our predicted locations to the DS locations and to the DS locations refined by a relative location (DSRL) method. The U-Net based locations are better constrained in depth than the DS locations, and the mean hypocenter difference with respect to the DSRL locations is 163 meters. This shows potential for the use of synthetic data to complement or replace field data for training. Furthermore, after training, the method returns source locations in near real-time given the full waveforms, alleviating the need to pick arrival times.
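The Gaussian-label construction described in the abstract can be sketched as follows; the grid shape, source indices, and `sigma` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_label(grid_shape, source_idx, sigma=2.0):
    """Encode a hypocenter as a 3D Gaussian heatmap for U-Net regression.

    grid_shape : (nx, ny, nz) of the label volume
    source_idx : (ix, iy, iz) grid indices of the true source
    sigma      : Gaussian half-width in grid cells (assumed value)
    """
    ix, iy, iz = source_idx
    x, y, z = np.ogrid[:grid_shape[0], :grid_shape[1], :grid_shape[2]]
    d2 = (x - ix) ** 2 + (y - iy) ** 2 + (z - iz) ** 2
    label = np.exp(-d2 / (2.0 * sigma ** 2))
    return label / label.max()  # peak value 1 at the source cell

label = gaussian_label((32, 32, 32), (10, 20, 15))
# at inference, the predicted location is the argmax of the output volume
loc = np.unravel_index(np.argmax(label), label.shape)
```

Training the network against such a smooth target, rather than a single one-hot voxel, gives a non-vanishing gradient away from the true location.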

Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V213-V225 ◽  
Author(s):  
Shaohuan Zu ◽  
Hui Zhou ◽  
Yangkang Chen ◽  
Shan Qu ◽  
Xiaofeng Zou ◽  
...  

We have designed a periodically varying code that can avoid the problem of local coherency and make the interference distribute uniformly in a given range; hence, it was better at suppressing incoherent interference (blending noise) and preserving coherent useful signals compared with a random dithering code. We have also devised a new form of the iterative method to remove interference generated from simultaneous-source acquisition. In each iteration, we have estimated the interference using the blending operator following the proposed formula and then subtracted the interference from the pseudodeblended data. To further eliminate the incoherent interference and constrain the inversion, the data were then transformed to an auxiliary sparse domain for applying a thresholding operator. During the iterations, the threshold was decreased from the largest value to zero following an exponential function. The exponentially decreasing threshold aimed to gradually pass the deblended data to a more acceptable model subspace. Two numerically blended synthetic data sets and one numerically blended practical field data set from an ocean-bottom cable were used to demonstrate the usefulness of our proposed method and the better performance of the periodically varying code over the traditional random dithering code.
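A minimal sketch of such an iteration, with a generic stand-in for the blending operator and a 2D FFT as the auxiliary sparse domain (both assumptions; the paper's actual operators and transform differ):

```python
import numpy as np

def exp_schedule(tau_max, n_iter, floor=1e-3):
    """Threshold decaying exponentially from tau_max toward (near) zero."""
    k = np.arange(n_iter)
    return tau_max * np.exp(np.log(floor) * k / (n_iter - 1))

def deblend(pseudodeblended, blend, n_iter=20):
    """Schematic iterative deblending with an exponential threshold.

    blend : callable estimating the interference generated by the current
            data estimate (stands in for the blending operator)
    """
    d = np.zeros_like(pseudodeblended)
    tau0 = np.abs(np.fft.fft2(pseudodeblended)).max()
    for tau in exp_schedule(tau0, n_iter):
        update = pseudodeblended - blend(d)    # subtract estimated interference
        coeffs = np.fft.fft2(update)           # auxiliary sparse domain
        coeffs[np.abs(coeffs) < tau] = 0.0     # thresholding operator
        d = np.real(np.fft.ifft2(coeffs))
    return d

# toy check: with no interference the coherent signal is passed through
sig = np.ones((8, 8))
recovered = deblend(sig, lambda x: np.zeros_like(x))
```

The decreasing threshold first admits only the strongest (coherent) components and gradually relaxes, which is the "model subspace" relaxation described above.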


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S87-S100 ◽  
Author(s):  
Hao Hu ◽  
Yike Liu ◽  
Yingcai Zheng ◽  
Xuejian Liu ◽  
Huiyi Lu

Least-squares migration (LSM) can be effective in mitigating the limitations of finite seismic acquisition, balancing the subsurface illumination, and improving the spatial resolution of the image, but it requires iterations of migration and demigration to obtain the desired subsurface reflectivity model. The computational efficiency and accuracy of the migration and demigration operators are crucial for applying the algorithm. We have developed a test of the feasibility of using the Gaussian beam as the wavefield-extrapolating operator for LSM, denoted as least-squares Gaussian beam migration. Our method combines the advantages of LSM with the efficiency of the Gaussian beam propagator. Our numerical evaluations, including two synthetic data sets and one marine field data set, illustrate that the proposed approach could be used to obtain amplitude-balanced images and to broaden the bandwidth of the migrated images, in particular for the low-wavenumber components.


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. S197-S205 ◽  
Author(s):  
Zhaolun Liu ◽  
Abdullah AlTheyab ◽  
Sherif M. Hanafy ◽  
Gerard Schuster

We have developed a methodology for detecting the presence of near-surface heterogeneities by naturally migrating backscattered surface waves in controlled-source data. The near-surface heterogeneities must be located within a depth of approximately one-third the dominant wavelength [Formula: see text] of the strong surface-wave arrivals. This natural migration method does not require knowledge of the near-surface phase-velocity distribution because it uses the recorded data to approximate the Green’s functions for migration. Prior to migration, the backscattered data are separated from the original records, and the band-pass filtered data are migrated to give an estimate of the migration image at a depth of approximately one-third [Formula: see text]. Each band-passed data set gives a migration image at a different depth. Results with synthetic data and field data recorded over known faults validate the effectiveness of this method. Migrating the surface waves in recorded 2D and 3D data sets accurately reveals the locations of known faults. The limitation of this method is that it requires a dense array of receivers with a geophone interval less than approximately one-half [Formula: see text].
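The quoted rules of thumb (imaging depth of roughly one-third the dominant wavelength, geophone interval below roughly one-half of it) can be wrapped in a small helper; the phase velocity and frequencies below are assumed example values, not from the paper:

```python
def natural_migration_rules(dominant_wavelength):
    """Rule-of-thumb design numbers for the natural migration method.

    Imaging depth ~ one-third of the dominant surface-wave wavelength;
    the geophone interval should stay below ~ one-half of it.
    """
    return dominant_wavelength / 3.0, dominant_wavelength / 2.0

depth, dx_max = natural_migration_rules(90.0)   # e.g. 90 m wavelength

# each band-passed data set images a different depth via lambda = c / f
c = 500.0                                       # assumed phase velocity [m/s]
depths = [natural_migration_rules(c / f)[0] for f in (5.0, 10.0, 20.0)]
```

Lower frequency bands thus probe deeper, which is why each band-passed data set yields an image at a different depth.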


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. R31-R42 ◽  
Author(s):  
Changsoo Shin ◽  
Dong-Joo Min

Although waveform inversion has been studied extensively since its beginning [Formula: see text] ago, applications to seismic field data have been limited, and most of those applications have been for global-seismology- or engineering-seismology-scale problems, not for exploration-scale data. As an alternative to classical waveform inversion, we propose the use of a new objective function constructed by taking the logarithm of wavefields, allowing consideration of three types of objective function, namely, amplitude only, phase only, or both. In our waveform inversion, we estimate the source signature as well as the velocity structure by including functions of amplitudes and phases of the source signature in the objective function. We compute the steepest-descent directions by using a matrix formalism derived from a frequency-domain, finite-element/finite-difference modeling technique. Our numerical algorithms are similar to those of reverse-time migration and waveform inversion based on the adjoint state of the wave equation. In order to demonstrate the practical applicability of our algorithm, we use a synthetic data set from the Marmousi model and seismic data collected from the Korean continental shelf. For noise-free synthetic data, the velocity structure produced by our inversion algorithm is closer to the true velocity structure than that obtained with conventional waveform inversion. When random noise is added, the inverted velocity model is also close to the true Marmousi model, but when frequencies below [Formula: see text] are removed from the data, the velocity structure is not as good as those for the noise-free and noisy data. For field data, we compare the time-domain synthetic seismograms generated for the velocity model inverted by our algorithm with real seismograms and find that our inversion algorithm reveals short-period features of the subsurface. Although we use wrapped phases in our examples, we still obtain reasonable results. We expect that if we were to use correctly unwrapped phases in the inversion algorithm, we would obtain better results.
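The split of the logarithmic objective into amplitude and phase parts can be sketched directly; the residual function and test values below are illustrative, and the full objective (a sum of squared residuals over frequencies and receivers) is omitted:

```python
import numpy as np

def log_residual(u_model, u_obs, kind="both"):
    """Residual of the logarithmic objective (schematic).

    ln(u_m / u_d) = ln|u_m / u_d| + i (phi_m - phi_d), so the
    amplitude-only, phase-only, and combined objectives use the real
    part, the imaginary part, and the full complex logarithm,
    respectively. np.log returns the principal branch, i.e. wrapped
    phases, matching the examples in the abstract.
    """
    r = np.log(u_model / u_obs)
    if kind == "amplitude":
        return r.real
    if kind == "phase":
        return r.imag
    return r

u_obs = np.array([1.0 + 1.0j])
u_mod = np.array([2.0 + 2.0j])   # same phase, twice the amplitude
amp_r = log_residual(u_mod, u_obs, "amplitude")
phs_r = log_residual(u_mod, u_obs, "phase")
```

Because the logarithm separates amplitude and phase additively, either component can be inverted on its own without re-deriving the gradient machinery.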


2019 ◽  
Vol 220 (3) ◽  
pp. 2089-2104
Author(s):  
Òscar Calderón Agudo ◽  
Nuno Vieira da Silva ◽  
George Stronge ◽  
Michael Warner

SUMMARY The potential of full-waveform inversion (FWI) to recover high-resolution velocity models of the subsurface has been demonstrated over the last few decades with its application to field data. But in certain geological scenarios, conventional FWI using the acoustic wave equation fails to recover accurate models due to the presence of strong elastic effects, as the acoustic wave equation only accounts for compressional waves. This becomes more critical when dealing with land data sets, in which elastic effects are generated at the source and recorded directly by the receivers. In marine settings, in which sources and receivers are typically within the water layer, elastic effects are weaker but can be observed most easily as double mode conversions and through their effect on P-wave amplitudes. Ignoring these elastic effects can have a detrimental impact on the accuracy of the recovered velocity models, even in marine data sets. Ideally, the elastic wave equation should be used to model wave propagation, and FWI should aim to recover anisotropic models of velocity for P waves (vp) and S waves (vs). However, routine three-dimensional elastic FWI is still commercially impractical due to the elevated computational cost of modelling elastic wave propagation in regions with low S-wave velocity near the seabed. Moreover, elastic FWI using local optimization methods suffers from cross-talk between different inverted parameters. This generally leads to incorrect estimation of subsurface models, requiring an estimate of vp/vs that is rarely known beforehand. Here we illustrate how neglecting elasticity during FWI for a marine field data set that contains especially strong elastic heterogeneities can lead to an incorrect estimation of the P-wave velocity model. We then demonstrate a practical approach to mitigate elastic effects in 3-D, yielding improved estimates: we use a global inversion algorithm to estimate a model of vp/vs, employ matching filters to remove elastic effects from the field data, and perform acoustic FWI of the resulting data set. The quality of the recovered models is assessed by exploring the continuity of the events in the migrated sections and their fit with the recovered velocity model.
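The matching-filter step can be illustrated with a simple least-squares (Wiener-type) filter; the filter length and toy traces are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def matching_filter(reference, target, nf=5):
    """Least-squares matching filter f such that f * reference ~ target.

    Builds a matrix of delayed copies of the reference trace and solves
    for the filter in the least-squares sense; nf is an assumed length.
    """
    n = len(reference)
    A = np.zeros((n, nf))
    for k in range(nf):
        A[k:, k] = reference[:n - k]   # column k = reference delayed by k
    f, *_ = np.linalg.lstsq(A, target, rcond=None)
    return f, A @ f

rng = np.random.default_rng(0)
ref = rng.standard_normal(50)
tgt = np.zeros_like(ref)
tgt[2:] = 0.5 * ref[:-2]               # target: reference delayed and scaled
f, pred = matching_filter(ref, tgt)
```

In the workflow described above, such filters shape acoustically modelled data toward the field data so the residual elastic-effect estimate can be subtracted before acoustic FWI.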


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. S1-S9 ◽  
Author(s):  
Yibo Wang ◽  
Xu Chang ◽  
Hao Hu

Prestack reverse time migration (RTM) is usually regarded as an accurate imaging tool and has been widely used in exploration. Conventional RTM only uses primaries and treats free-surface related multiples as noise; however, free-surface related multiples can sometimes provide extra illumination of the subsurface, and this information could be used in migration procedures. There are many migration methods using free-surface related multiples, but most approaches need to predict multiples, which is time consuming and prone to error. We have developed a new RTM approach that uses the primaries and the free-surface related multiples simultaneously. Compared with migration methods that only use free-surface related multiples, the proposed approach can provide comparable migration results and does not need multiple prediction. In our approach, the source function in conventional RTM was replaced with recorded field data including primaries and free-surface related multiples, together with a synthetic wavelet; the back-propagated primaries in the conventional RTM were replaced with the complete recorded field data. The imaging condition of the proposed approach was the same as the crosscorrelation imaging condition of conventional RTM. A three-layer velocity model with scatterers and the Sigsbee 2B synthetic data set were used for numerical experiments. The numerical results showed that the proposed approach can cover a wider range of the subsurface and provide better illumination compared with conventional RTM. The proposed approach was easy to implement and avoided tedious multiple prediction; it might be significant for general complex subsurface imaging.
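The crosscorrelation imaging condition shared with conventional RTM can be sketched in isolation; wavefield propagation is omitted here, and the toy wavefields are assumptions used only to exercise the imaging step:

```python
import numpy as np

def crosscorr_image(source_wf, receiver_wf):
    """Zero-lag crosscorrelation imaging condition I(x) = sum_t S(x,t) R(x,t).

    source_wf, receiver_wf : (nt, nz, nx) forward-propagated source and
    back-propagated receiver wavefields, supplied directly (the
    propagation itself is not modelled in this sketch).
    """
    return np.einsum('tzx,tzx->zx', source_wf, receiver_wf)

nt, nz, nx = 8, 4, 5
S = np.zeros((nt, nz, nx))
R = np.zeros((nt, nz, nx))
S[3, 1, 2] = 1.0        # the two wavefields coincide in time and space
R[3, 1, 2] = 1.0        # only at the reflector position
img = crosscorr_image(S, R)
```

The image is nonzero only where the two wavefields overlap in time, which is what lets the replaced source and receiver wavefields of the proposed approach image multiple-illuminated regions with the same condition.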


Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. T265-T284 ◽  
Author(s):  
Joost van der Neut ◽  
Kees Wapenaar

Iterative substitution of the multidimensional Marchenko equation has been introduced recently to integrate internal multiple reflections in the seismic imaging process. In so-called Marchenko imaging, a macro velocity model of the subsurface is required to meet this objective. The model is used to back-propagate the data during the first iteration and to truncate integrals in time during all successive iterations. In the case of an erroneous model, the image will be blurred (akin to conventional imaging) and artifacts may arise from inaccurate integral truncations. However, the scheme is still successful in removing artifacts from internal multiple reflections. Inspired by these observations, we rewrote the Marchenko equation such that it can be applied early in a processing flow, without the need of a macro velocity model. Instead, we have required an estimate of the two-way traveltime surface of a selected horizon in the subsurface. We have introduced an approximation such that adaptive subtraction can be applied. As a solution, we obtained a new data set in which all interactions (primaries and multiples) with the part of the medium above the picked horizon had been eliminated. Unlike various other internal multiple elimination algorithms, the method can be applied at any specified target horizon, without first having to resolve internal multiples from shallower horizons. We successfully applied the method on synthetic data, where limitations were reported due to thin layers, diffraction-like discontinuities, and a finite acquisition aperture. A field data test was also performed, in which the kinematics of the predicted updates were demonstrated to match internal multiples in the recorded data, but it appeared difficult to subtract them.


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using 1%–5% of the data.
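One way to sketch anomaly-preserving adaptive downsampling is a gradient-driven keep-mask; the selection rule below is an illustrative stand-in, not the published algorithm, and the anomaly field and thresholds are assumptions:

```python
import numpy as np

def adaptive_downsample(data, coarse=4, grad_frac=0.2):
    """Schematic adaptive downsampling of gridded potential-field data.

    Keeps every point whose local gradient magnitude exceeds grad_frac
    of its maximum (the signal anomalies) plus a coarse regular
    subsample of the quiet background; returns a boolean keep-mask.
    """
    gy, gx = np.gradient(data)
    gmag = np.hypot(gx, gy)
    keep = gmag > grad_frac * gmag.max()
    keep[::coarse, ::coarse] = True    # sparse coverage of quiet regions
    return keep

y, x = np.mgrid[:64, :64]
field = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 20.0)  # one anomaly
mask = adaptive_downsample(field)
```

The quiet background is reduced to a coarse grid while the anomaly is sampled densely, which is the compression/preservation trade-off the abstract describes.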


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full waveform simulations are conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase picking algorithm to the numerical wavefields produced from each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers and by velocity model errors. It is found that the waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both a laboratory and field setting demonstrates that the technique performs at least as well as existing techniques.
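The dispersion-minimizing grid search can be sketched for a homogeneous toy medium; the station geometry, velocity, straight-ray traveltimes, and grid are assumptions standing in for the precomputed waveform-derived traveltime fields:

```python
import numpy as np

def locate_event(traveltime_fields, arrival_times):
    """Grid-search location by minimizing origin-time dispersion.

    traveltime_fields : (n_sta, nz, nx) traveltimes from each station to
                        every grid point (computed once per station)
    arrival_times     : (n_sta,) picked arrivals for one event

    Subtracting each traveltime field from its arrival time gives
    shifted, time-reversed fields; their spread over stations is
    smallest at the source, where their mean estimates the origin time.
    """
    origin_fields = arrival_times[:, None, None] - traveltime_fields
    spread = origin_fields.std(axis=0)
    src = np.unravel_index(np.argmin(spread), spread.shape)
    return src, origin_fields.mean(axis=0)[src]

# homogeneous-medium toy example (straight-ray traveltimes, assumed v)
nz, nx, v = 20, 20, 2.0
stations = [(0, 0), (0, 19), (19, 0), (19, 10)]
zz, xx = np.mgrid[:nz, :nx]
T = np.array([np.hypot(zz - sz, xx - sx) / v for sz, sx in stations])
true_src, t0_true = (7, 12), 1.5
arrivals = t0_true + T[:, true_src[0], true_src[1]]
src, t0 = locate_event(T, arrivals)
```

Because only one set of traveltime fields is stored per station, the cost scales with the number of stations rather than the number of candidate events, as noted above.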

