First-arrival picking with a U-net convolutional network

Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. U45-U57 ◽  
Author(s):  
Lianlian Hu ◽  
Xiaodong Zheng ◽  
Yanting Duan ◽  
Xinfei Yan ◽  
Ying Hu ◽  
...  

In exploration geophysics, the first arrivals on data acquired under complicated near-surface conditions are often characterized by significant static corrections, weak energy, low signal-to-noise ratio, and dramatic phase change, and they are difficult to pick accurately with traditional automatic procedures. We have approached this problem by applying a U-shaped fully convolutional network (U-net) to first-arrival picking, which is formulated as a binary segmentation problem. U-net has the ability to recognize inherent patterns of the first arrivals by combining attributes of arrivals in space and time on data of varying quality. An effective workflow based on U-net is presented for fast and accurate picking. A set of seismic waveform data and their corresponding first-arrival times are used to train the network in a supervised learning approach; the trained model is then used to detect the first arrivals for other seismic data. Our method is applied to one synthetic data set and three field data sets of low quality to identify the first arrivals. Results indicate that U-net needs only a few annotated samples for learning and is able to efficiently detect first-arrival times with high precision on complicated seismic data from a large survey. As training data covering more varieties of first arrivals accumulate, a trained U-net has the potential to directly identify the first arrivals on new seismic data.
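The binary-segmentation formulation above maps naturally onto an encoder-decoder network. Below is a minimal PyTorch sketch of such a U-net, assuming single-channel gathers in and a per-pixel probability map out; the depth, channel widths, and training snippet are illustrative choices, not the configuration used in the paper.

```python
# Minimal U-net sketch for binary segmentation of seismic gathers (PyTorch).
# Depth and channel widths are illustrative, not taken from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.bottom = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))  # probability of "after first arrival"

# One training step with binary cross-entropy on a dummy gather.
net = UNet()
x = torch.randn(1, 1, 64, 64)          # (batch, channel, time, trace)
y = (torch.rand(1, 1, 64, 64) > 0.5).float()
loss = nn.functional.binary_cross_entropy(net(x), y)
loss.backward()
```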

Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. V415-V423 ◽  
Author(s):  
Yuanyuan Ma ◽  
Siyuan Cao ◽  
James W. Rector ◽  
Zhishuai Zhang

Arrival-time picking is an essential step in seismic processing and imaging. The explosion of seismic data volume requires automated arrival-time picking in a faster and more reliable way than existing methods. We have treated arrival-time picking as a binary image-segmentation problem and used an improved pixel-wise convolutional network to pick arrival times automatically. Incorporating continuous spatial information in training enables us to preserve the arrival-time correlation between nearby traces, thus helping to reduce the risk of picking outliers that are common in traditional trace-by-trace picking methods. To train the network, we first convert seismic traces into gray-scale images. Image pixels before manually picked arrival times are labeled with zeros, and those after are tagged with ones. After training and validation, the network automatically learns representative features and generates a probability map to predict the arrival time. We apply the network to a field microseismic data set that was not used for training or validation to test the performance of the method. Then, we analyze the effects of training data volume and signal-to-noise ratio on our autopicking method. We also examine the difference between 1D and 2D training data using borehole seismic data. Results on microseismic and borehole seismic data indicate that the proposed network can improve efficiency and accuracy over traditional automated picking methods.
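The pixel-labeling convention and the probability-map picking step are easy to express directly. The NumPy sketch below builds the zero/one label image from manual picks and recovers a pick from a probability map as the first sample exceeding a threshold; the (time samples x traces) layout and the 0.5 threshold are assumptions for illustration.

```python
# Label construction and pick extraction for pixel-wise arrival picking (NumPy).
# Convention from the abstract: pixels before the pick are 0, after are 1.
import numpy as np

def picks_to_labels(picks, n_samples):
    # picks: (n_traces,) integer sample index of the manual arrival per trace.
    t = np.arange(n_samples)[:, None]                # (n_samples, 1) time axis
    return (t >= picks[None, :]).astype(np.float32)  # (n_samples, n_traces)

def labels_to_picks(prob, threshold=0.5):
    # prob: (n_samples, n_traces) predicted probability map.
    # The pick is the first sample whose probability exceeds the threshold.
    return np.argmax(prob > threshold, axis=0)

picks = np.array([20, 22, 25, 24])
labels = picks_to_labels(picks, n_samples=64)
recovered = labels_to_picks(labels)                  # exactly recovers the picks
assert np.array_equal(recovered, picks)
```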


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance compared to conventional datuming.
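The weighted, damped least-squares step has the standard closed form m = (GᵀWG + εI)⁻¹GᵀWd. The dense NumPy toy below illustrates it with a random stand-in for the extrapolation operator G; the paper instead works with large operators and approximates the Hessian GᵀWG by a few of its diagonals.

```python
# Weighted, damped least-squares estimate m = (G^T W G + eps I)^(-1) G^T W d.
# Dense toy version with a random stand-in operator G.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 120, 80
G = rng.standard_normal((n_data, n_model))   # stand-in extrapolation operator
m_true = rng.standard_normal(n_model)
d = G @ m_true + 0.05 * rng.standard_normal(n_data)

W = np.eye(n_data)          # data weights (identity here for simplicity)
eps = 1e-2                  # damping, trades resolution against stability

hessian = G.T @ W @ G + eps * np.eye(n_model)
m_est = np.linalg.solve(hessian, G.T @ W @ d)
print(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```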


Geophysics ◽  
2017 ◽  
Vol 82 (4) ◽  
pp. V257-V274 ◽  
Author(s):  
Necati Gülünay

The diminishing residual matrices (DRM) method can be used to surface-consistently decompose individual trace statics into source and receiver components. The statics to be decomposed may either be first-arrival times after the application of linear moveout associated with a consistent refractor, as used in refraction statics, or residual statics obtained by crosscorrelating individual traces with corresponding model traces (known as pilot traces) at the same common-midpoint (CMP) location. The DRM method is an iterative process like the well-known Gauss-Seidel (GS) method, but it uses only source and receiver terms. The DRM method differs from the GS method in that half of the average common-shot and common-receiver terms are subtracted simultaneously from the observations at each iteration. DRM makes the underconstrained statics problem a constrained one by implicitly adding a new constraint: the equality of the contributions of shots and receivers to the solution. The average of the shot statics and the average of the receiver statics are equal in the DRM solution. The solution has the smallest difference between shot and receiver statics profiles when the number of shots and the number of receivers in the data are equal; in this case, it is also the smallest-norm solution. The DRM method can be derived from the well-known simultaneous iterative reconstruction technique. Simple numerical tests, as well as results obtained with a synthetic data set containing only the field statics, verify that the DRM solution is the same as the linear inverse theory solution. Both algorithms can solve for the long-wavelength component of the statics if the individual picks contain it, yet the DRM method is much faster. Application of the method to normal-moveout-corrected CMP gathers from a 3D land survey for residual-statics calculation showed that the pick-decompose-apply-stack stages of the DRM method need to be iterated. These iterations are needed because of time and waveform distortions of the pilot traces caused by the individual trace statics; the distortions lessen at every external DRM iteration.
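The DRM recipe quoted above (subtract half of the common-shot and common-receiver averages simultaneously at each iteration) can be sketched in a few lines. The NumPy toy below assumes a dense, noise-free shot-by-receiver matrix of trace statics; real surveys are sparse and noisy, so this only illustrates the iteration and its built-in constraint.

```python
# Diminishing residual matrices (DRM) toy: decompose trace statics t[s, r]
# into surface-consistent shot terms a[s] and receiver terms b[r].
import numpy as np

rng = np.random.default_rng(1)
n_shot, n_recv = 30, 30
a_true = rng.standard_normal(n_shot)
b_true = rng.standard_normal(n_recv)
t = a_true[:, None] + b_true[None, :]       # observed trace statics (no noise)

a = np.zeros(n_shot)
b = np.zeros(n_recv)
residual = t.copy()
for _ in range(200):
    shot_avg = residual.mean(axis=1)        # common-shot averages
    recv_avg = residual.mean(axis=0)        # common-receiver averages
    # Subtract half of each average simultaneously, per the DRM recipe.
    residual -= 0.5 * shot_avg[:, None] + 0.5 * recv_avg[None, :]
    a += 0.5 * shot_avg
    b += 0.5 * recv_avg

# DRM constraint: average shot static equals average receiver static.
print(a.mean(), b.mean())
print(np.abs(residual).max())               # residual matrix has diminished
```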


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCB1-WCB10 ◽  
Author(s):  
Cédric Taillandier ◽  
Mark Noble ◽  
Hervé Chauris ◽  
Henri Calandra

Classical algorithms used for traveltime tomography are not necessarily well suited for handling very large seismic data sets or for taking advantage of current supercomputers. The classical approach of first-arrival traveltime tomography was revisited with the proposal of a simple gradient-based scheme that avoids ray tracing and estimation of the Fréchet derivative matrix. The key point becomes the derivation of the gradient of the misfit function, obtained by the adjoint-state technique. The adjoint-state method is very attractive from a numerical point of view because the associated cost is equivalent to the solution of the forward-modeling problem, whatever the size of the input data and the number of unknown velocity parameters. An application to a 2D synthetic data set demonstrated the ability of the algorithm to image near-surface velocities with strong vertical and lateral variations and revealed the potential of the method.
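As a deliberately tiny illustration of gradient-based traveltime inversion without an explicit Fréchet matrix, the NumPy toy below inverts a 1D layered model from a single vertical traveltime using the analytic misfit gradient; the paper's adjoint-state machinery for 2D eikonal tomography is far more involved, so treat this only as the flavor of the update loop.

```python
# Toy 1D gradient-based traveltime inversion (NumPy). A single vertical ray
# through n layers of thickness dz: t(v) = sum(dz / v_i). Misfit J = 0.5 r^2
# with r = t(v) - t_obs, so dJ/dv_i = -r * dz / v_i**2 analytically.
import numpy as np

dz = 10.0
v_true = np.array([800.0, 1200.0, 1500.0, 2000.0])
t_obs = np.sum(dz / v_true)                 # "observed" traveltime

v = np.full_like(v_true, 1000.0)            # starting model
step = 2e9                                  # step size tuned for this toy
for _ in range(200):
    r = np.sum(dz / v) - t_obs              # forward modeling + residual
    grad = -r * dz / v**2                   # analytic (adjoint-style) gradient
    v -= step * grad                        # steepest-descent update

# Traveltimes now match; the model itself is non-unique with one datum.
print(np.sum(dz / v), t_obs)
```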


Geophysics ◽  
2007 ◽  
Vol 72 (4) ◽  
pp. J31-J41 ◽  
Author(s):  
James D. Irving ◽  
Michael D. Knoll ◽  
Rosemary J. Knight

To obtain the highest-resolution ray-based tomographic images from crosshole ground-penetrating radar (GPR) data, wide angular ray coverage of the region between the two boreholes is required. Unfortunately, at borehole spacings on the order of a few meters, high-angle traveltime data (i.e., traveltime data corresponding to transmitter-receiver angles greater than approximately 50° from the horizontal) are notoriously difficult to incorporate into crosshole GPR inversions. This is because (1) low signal-to-noise ratios make the accurate picking of first-arrival times at high angles extremely difficult, and (2) significant tomographic artifacts commonly appear when high- and low-angle ray data are inverted together. We address and overcome these two issues for a crosshole GPR data example collected at the Boise Hydrogeophysical Research Site (BHRS). To estimate first-arrival times on noisy, high-angle gathers, we develop a robust and automatic picking strategy based on crosscorrelations, where reference waveforms are determined from the data through the stacking of common-ray-angle gathers. To overcome incompatibility issues between high- and low-angle data, we modify the standard tomographic inversion strategy to estimate, in addition to subsurface velocities, parameters that describe a traveltime 'correction curve' as a function of angle. Application of our modified inversion strategy to both synthetic data and the BHRS data set shows that it allows the successful incorporation of all available traveltime data to obtain significantly improved subsurface velocity images.
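The picking strategy described above (stack a common-ray-angle gather into a reference waveform, then crosscorrelate) reduces to a short routine. In the NumPy sketch below, the bin handling, reference pick, and synthetic traces are illustrative assumptions.

```python
# Crosscorrelation picking with a stacked reference waveform (NumPy toy).
# Traces in one common-ray-angle bin are stacked to form the reference;
# each trace's pick is the reference pick shifted by the best-fit lag.
import numpy as np

def pick_by_xcorr(traces, ref_pick):
    # traces: (n_traces, n_samples), all from one common-ray-angle bin.
    ref = traces.mean(axis=0)                     # stacked reference waveform
    n = traces.shape[1]
    picks = []
    for tr in traces:
        xc = np.correlate(tr, ref, mode="full")   # lags from -(n-1) to n-1
        lag = np.argmax(xc) - (n - 1)             # best-fit time shift
        picks.append(ref_pick + lag)
    return np.array(picks)

# Synthetic check: identical wavelet shifted by known lags.
n = 200
wavelet = np.exp(-0.5 * ((np.arange(n) - 60) / 4.0) ** 2)
shifts = np.array([0, 3, -2, 5])
traces = np.stack([np.roll(wavelet, s) for s in shifts])
print(pick_by_xcorr(traces, ref_pick=60))         # ~ 60 + shifts
```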


2021 ◽  
Vol 9 (3) ◽  
pp. 259 ◽  
Author(s):  
Jizhong Wu ◽  
Bo Liu ◽  
Hao Zhang ◽  
Shumei He ◽  
Qianqian Yang

It is of great significance to detect faults correctly in the continental sandstone reservoirs of eastern China, in order to understand the distribution of remaining structural reservoirs for more efficient development operations. However, the majority of the faults are characterized by small displacements and unclear components, which makes them hard to recognize in seismic data via traditional methods. We consider fault detection as an end-to-end binary image-segmentation problem of labeling a 3D seismic image with ones on faults and zeros elsewhere. Thus, we developed a fully convolutional network (FCN) based method for fault segmentation and used synthetic seismic data to generate an accurate and sufficient training data set. The architecture of the FCN is a modified version of VGGNet (a convolutional neural network named after the Visual Geometry Group). Transforming fully connected layers into convolution layers enables a classification net to output a heatmap, and adding deconvolution layers produces an efficient network for end-to-end dense learning. Herein, we took advantage of the fact that a fault binary image is highly imbalanced, with mostly zeros and only very few ones on the faults: a balanced cross-entropy loss function was defined to compensate for this imbalance when optimizing the parameters of our FCN model. Ultimately, the FCN model was applied to real field data, showing that it can predict faults from seismic images more accurately and efficiently than conventional methods.
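A balanced cross-entropy of the kind described weights the two classes by their relative frequencies so the rare fault pixels still contribute. Below is a PyTorch sketch that takes beta as the per-batch fraction of non-fault pixels; the paper's exact weighting may differ.

```python
# Balanced cross-entropy for heavily imbalanced fault masks (PyTorch sketch).
# beta = fraction of zero (non-fault) pixels, so the rare fault pixels are
# up-weighted: L = -beta * y*log(p) - (1 - beta) * (1-y)*log(1-p).
import torch

def balanced_cross_entropy(prob, target, eps=1e-7):
    # prob, target: tensors of the same shape; target is 0/1.
    beta = 1.0 - target.mean()              # fraction of zeros in the batch
    prob = prob.clamp(eps, 1.0 - eps)       # numerical safety for log()
    loss = -(beta * target * torch.log(prob)
             + (1.0 - beta) * (1.0 - target) * torch.log(1.0 - prob))
    return loss.mean()

# Imbalanced toy mask: ~2% fault pixels.
target = (torch.rand(1, 1, 64, 64) < 0.02).float()
prob = torch.rand(1, 1, 64, 64)
print(balanced_cross_entropy(prob, target))
```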


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCC79-WCC89 ◽  
Author(s):  
Hansruedi Maurer ◽  
Stewart Greenhalgh ◽  
Sabine Latzel

Analyses of synthetic frequency-domain acoustic waveform data provide new insights into the design and imaging capability of crosshole surveys. The full complex Fourier spectral data offer significantly more information than other data representations such as the amplitude, phase, or Hartley spectrum. Extensive eigenvalue analyses are used for further inspection of the information content offered by the seismic data. The goodness of different experimental configurations is investigated by varying the choice of (1) the frequencies, (2) the source and receiver spacings along the boreholes, and (3) the borehole separation. With only a few carefully chosen frequencies, a similar amount of information can be extracted from the seismic data as can be extracted with a much larger suite of equally spaced frequencies. Optimized data sets should include at least one very low frequency component. The remaining frequencies should be chosen from the upper end of the spectrum available. This strategy proved to be applicable to a simple homogeneous and a very complex velocity model. Further tests are required, but it appears on the available evidence to be model independent. Source and receiver spacings also have an effect on the goodness of an experimental setup, but there are only minor benefits to denser sampling when the increment is much smaller than the shortest wavelength included in a data set. If the borehole separation becomes unfavorably large, the information content of the data is degraded, even when many frequencies and small source and receiver spacings are considered. The findings are based on eigenvalue analyses using the true velocity models. Because under realistic conditions the true model is not known, it is shown that the optimized data sets are sufficiently robust to allow the iterative inversion schemes to converge to the global minimum. This is demonstrated by means of tomographic inversions of several optimized data sets.
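One concrete way to compare the "information content" of competing acquisition designs, in the spirit of the eigenvalue analyses above, is to count significant singular values of each design's sensitivity matrix. In the NumPy sketch below, the stand-in Jacobians and the 1e-3 relative threshold are assumptions; note how redundant rows add data volume but no new information.

```python
# Counting significant singular values of a sensitivity matrix (NumPy sketch).
# More significant values = more independently constrained model parameters.
import numpy as np

def n_significant(J, rel_tol=1e-3):
    s = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    return int(np.sum(s > rel_tol * s[0]))

rng = np.random.default_rng(2)
n_model = 50
# Stand-in Jacobians: "design A" with few rows, "design B" with many
# redundant rows. Redundancy adds data but little independent information.
J_a = rng.standard_normal((40, n_model))
J_b = np.repeat(J_a, 4, axis=0)              # 160 rows, same row space
print(n_significant(J_a), n_significant(J_b))  # same count despite 4x data
```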


Geophysics ◽  
1992 ◽  
Vol 57 (3) ◽  
pp. 378-385 ◽  
Author(s):  
David F. Aldridge ◽  
Douglas W. Oldenburg

The classical wavefront method for interpreting seismic refraction arrival times is implemented on a digital computer. Modern finite-difference propagation algorithms are used to downward continue recorded refraction arrival times through a near-surface heterogeneous velocity structure. Two such subsurface traveltime fields need to be reconstructed from the arrivals observed on a forward and a reverse geophone spread. The locus of a shallow refracting horizon is then defined by a simple imaging condition involving the reciprocal time (the traveltime between source positions at either end of the spread). Refractor velocity is estimated in a subsequent step by calculating the directional derivative of the reconstructed subsurface wavefronts along the imaged interface. The principal limitation of the technique arises from imprecise knowledge of the overburden velocity distribution. This velocity information must be obtained from uphole times, direct and reflected arrivals, shallow refractions, and borehole data. Analysis of synthetic data examples indicates that the technique can accurately image both synclinal and anticlinal structures. Finally, the method is tested, apparently successfully, on a shallow refraction data set acquired at an archeological site in western Crete.
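The imaging condition quoted above places the refractor where the forward and reverse downward-continued traveltime fields sum to the reciprocal time, which reduces to a per-column zero search. In the NumPy toy below, the grid, the dipping interface, and both traveltime fields are fabricated purely to exercise that condition.

```python
# Wavefront-method imaging condition (NumPy toy): the refracting horizon is
# the locus where T_forward + T_reverse equals the reciprocal time.
import numpy as np

nx, nz = 100, 60
dz = 2.0
z = np.arange(nz)[None, :] * dz              # depth axis, shape (1, nz)
x = np.arange(nx)[:, None]                   # distance axis, shape (nx, 1)

# Fabricated downward-continued traveltime fields (ms) whose sum hits the
# reciprocal time t_rec exactly on a dipping interface z = 40 + 0.2 x.
t_rec = 200.0
z_true = 40.0 + 0.2 * x                      # (nx, 1) true refractor depth
t_fwd = 100.0 + 0.8 * (z - z_true)           # increases with depth
t_rev = 100.0 + 0.8 * (z - z_true)

misfit = t_fwd + t_rev - t_rec               # zero on the refractor
iz = np.argmin(np.abs(misfit), axis=1)       # per-column zero crossing
z_img = iz * dz
print(np.max(np.abs(z_img - z_true[:, 0])))  # within half a grid cell (dz/2)
```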


2018 ◽  
Vol 16 (5) ◽  
pp. 507-526 ◽  
Author(s):  
Amin Khalaf ◽  
Christian Camerlynck ◽  
Nicolas Florsch ◽  
Ana Schneider

Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107 ◽  
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike, and potentially the dike stability. A comprehensive numerical simulation is computationally too expensive to be used for the near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor to build a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from a dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data that belong to the test set (before 2018). However, the trained model shows lower performance on the evaluation set (after 2018), when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, which could be used to identify vulnerable locations in a dike network for further examination.
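The surrogate workflow above maps onto a few lines of scikit-learn. In the sketch below, the three surface-observable features, the fabricated FoS target, and the chronological split fraction are illustrative stand-ins for the paper's coupled-simulation data set.

```python
# Random-forest surrogate for a factor-of-safety time series (scikit-learn).
# Fabricated daily features standing in for surface observables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n_days = 3650                                # ~ten years of daily records
X = np.column_stack([
    rng.uniform(0, 30, n_days),              # e.g. precipitation proxy
    rng.uniform(0.1, 0.9, n_days),           # e.g. vegetation index proxy
    rng.uniform(-5, 35, n_days),             # e.g. temperature proxy
])
fos = (1.5 - 0.01 * X[:, 0] + 0.3 * X[:, 1] + 0.002 * X[:, 2]
       + 0.02 * rng.standard_normal(n_days))  # fabricated FoS target

split = int(n_days * 0.9)                    # chronological split, as in paper
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], fos[:split])
print(rf.score(X[split:], fos[split:]))      # R^2 on the held-out period
```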

