Rapid estimation of earthquake locations using waveform traveltimes

2019, Vol 217 (3), pp. 1727-1741
Author(s): D W Vasco, Seiji Nakagawa, Petr Petrov, Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers, and by velocity model errors. We find that the waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing methods.
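As a minimal sketch of the location step described above, the following fragment assumes the per-station traveltime fields have already been computed (in the paper they come from full waveform simulations and a phase picker; here they are simply hypothetical arrays) and performs the shift, time reversal, and dispersion scan over the grid.

```python
import numpy as np

def locate_event(traveltime_fields, arrival_times):
    """traveltime_fields: (n_stations, nz, nx) traveltimes from each
    station to every grid point (source-receiver reciprocity).
    arrival_times: (n_stations,) picked arrival times for one event."""
    # Shifted, time-reversed fields: each approaches the event origin
    # time at the true source position.
    shifted = arrival_times[:, None, None] - traveltime_fields
    # Robust central value and dispersion across stations per grid point.
    origin = np.median(shifted, axis=0)
    stderr = np.std(shifted, axis=0, ddof=1) / np.sqrt(len(arrival_times))
    iz, ix = np.unravel_index(np.argmin(stderr), stderr.shape)
    # Contours of stderr over the grid delineate location uncertainty.
    return (iz, ix), origin[iz, ix], stderr
```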

Geophysics, 2014, Vol 79 (4), pp. R121-R131
Author(s): Hu Jin, George A. McMechan

A 2D velocity model was estimated by tomographic imaging of overlapping focusing operators that contain one-way traveltimes, from common-focus points to receivers in an aperture along the earth's surface. The stability and efficiency of convergence and the quality of the resulting models were improved by a sequence of ideas. We used a hybrid parameterization that has an underlying grid, upon which is superimposed a flexible, pseudolayer model. We first solved for the low-wavenumber parts of the model (approximating it as constant-velocity pseudolayers), then admitted intermediate wavenumbers (letting the layers have linear velocity gradients), and finally performed unconstrained iterations to add the highest-wavenumber details. Layer boundaries were implicitly defined by focus points that align along virtual marker (reflector) horizons. Each focus point sampled an area bounded by the first and last rays in the data aperture at the surface; this reduced the amount of computation and the size of the effective null space of the solution. Model updates were performed simultaneously for the velocities and the local focus-point positions in two steps: local estimates were made independently, by amplitude semblance, for each focusing operator within its area of dependence, followed by a tomographic weighting of the local estimates into a global solution for each grid point, subject to the constraints of the parameterization used at that iteration. The system of tomographic equations was solved by simultaneous iterative reconstruction, which is equivalent to a least-squares solution but does not involve a matrix inversion. The algorithm was successfully applied to synthetic data for a salt dome model using a constant-velocity starting model; after a total of 25 iterations, the velocity error was [Formula: see text] and the final mean focal-point position error was [Formula: see text] wavelength.
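The simultaneous-iterative-reconstruction solver mentioned in the last step can be sketched in a few lines. The fragment below is a generic SIRT update for a linearized system G m = d (hypothetical ray-path matrix G and traveltime vector d; not the authors' code).

```python
import numpy as np

def sirt(G, d, m0, n_iter=25, relax=1.0):
    """Generic SIRT: G is the (n_rays, n_cells) matrix of path lengths,
    d the observed traveltimes, m0 the starting slowness model."""
    m = m0.astype(float).copy()
    row_sum = G.sum(axis=1)               # total path length per ray
    col_cnt = (G != 0).sum(axis=0)        # rays crossing each cell
    row_sum[row_sum == 0] = 1.0
    col_cnt[col_cnt == 0] = 1
    for _ in range(n_iter):
        r = d - G @ m                     # traveltime residuals
        # Distribute each residual along its ray, then average the
        # contributions arriving at every cell: no matrix inversion.
        m += relax * (G.T @ (r / row_sum)) / col_cnt
    return m
```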


Geophysics, 2019, Vol 84 (3), pp. R411-R427
Author(s): Gang Yao, Nuno V. da Silva, Michael Warner, Di Wu, Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function, defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, we linearly scaled down the time differences between the two first breaks of each shot into a series of time shifts, one for each trace in the shot, the maximum of which was less than half a cycle. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. It thus produces a background velocity model accurate enough for carrying out conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness when applied to first arrivals.
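The three data-construction steps lend themselves to a compact sketch. The fragment below uses a crude threshold picker and a wrap-around time shift for brevity (both simplifying assumptions; the dominant frequency f_dom supplies the half-cycle bound).

```python
import numpy as np

def first_break(trace, dt, frac=0.05):
    """Crude picker: time of the first sample above frac * max(|trace|)."""
    return np.argmax(np.abs(trace) >= frac * np.abs(trace).max()) * dt

def intermediate_data(pred, rec, dt, f_dom):
    """pred, rec: (n_traces, n_t) predicted/recorded gathers for one shot."""
    half_cycle = 0.5 / f_dom
    diffs = np.array([first_break(rec[i], dt) - first_break(pred[i], dt)
                      for i in range(pred.shape[0])])
    # Linearly scale the whole shot's shifts so the largest one stays
    # below half a cycle (leave them alone if already small enough).
    scale = min(1.0, half_cycle / (np.abs(diffs).max() + 1e-12))
    out = np.empty_like(pred)
    for i, shift in enumerate(diffs * scale):
        out[i] = np.roll(pred[i], int(round(shift / dt)))  # wraps at edges
    return out  # inverted in place of the cycle-skipped recorded data
```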


2017, Vol 5 (3), pp. SJ81-SJ90
Author(s): Kainan Wang, Jesse Lomask, Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We have described a modification of DTW to create a blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints; consequently, it results in updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval velocity model that still retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic data example and a field data set.
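For reference, plain DTW, the starting point that BDW constrains, can be written compactly. The sketch below aligns a synthetic trace to a seismic trace and returns the warping path (the time correspondence from which a time-depth curve and interval velocities would be updated); the blockwise velocity constraints that distinguish BDW are deliberately omitted.

```python
import numpy as np

def dtw_path(synthetic, seismic):
    """synthetic, seismic: 1D numpy arrays of trace samples."""
    ns, nt = len(synthetic), len(seismic)
    cost = (synthetic[:, None] - seismic[None, :]) ** 2
    acc = np.full((ns, nt), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(ns):
        for j in range(nt):
            if i == 0 and j == 0:
                continue
            best = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = cost[i, j] + best
    # Backtrack to recover the warping path, i.e. the sample-to-sample
    # correspondence between the synthetic and the seismic trace.
    path, i, j = [(ns - 1, nt - 1)], ns - 1, nt - 1
    while i > 0 or j > 0:
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((m for m in moves if m[0] >= 0 and m[1] >= 0),
                   key=lambda m: acc[m])
        path.append((i, j))
    return path[::-1], acc[-1, -1]
```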


Geophysics, 1993, Vol 58 (1), pp. 91-100
Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
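The final back-projection step can be illustrated schematically: residual corrections attributed to individual rays are smeared back along their raypaths and averaged per cell, in the spirit of tomographic reconstruction (hypothetical inputs; the ray tracing, layer-stripping migration, and wavefront analysis happen upstream of this sketch).

```python
import numpy as np

def backproject(rays, residuals, model_shape):
    """rays: per ray, an (n_cells_hit, 2) array of (cell index, path
    length); residuals: residual correction attributed to each ray."""
    num = np.zeros(model_shape).ravel()
    den = np.zeros(model_shape).ravel()
    for cells, r in zip(rays, residuals):
        idx = cells[:, 0].astype(int)
        seg = cells[:, 1]
        # Spread the ray's residual over the cells it crosses,
        # weighted by the path length spent in each cell.
        num[idx] += r * seg / seg.sum()
        den[idx] += seg
    update = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return update.reshape(model_shape)  # per-cell velocity correction
```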


Geophysics, 2005, Vol 70 (1), pp. S1-S17
Author(s): Alison E. Malcolm, Maarten V. de Hoop, Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.
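The composition of the three operators maps cleanly onto code. The sketch below chains placeholder callables standing in for the imaging, modeling, and restriction operators (the paper's actual operators are Fourier integral operators; these lambdas are only illustrative).

```python
import numpy as np

class Continuation:
    """Continuation = restriction ∘ modeling ∘ imaging."""
    def __init__(self, imaging, modeling, restriction):
        self.imaging = imaging          # input data -> image
        self.modeling = modeling        # image -> data, full geometry
        self.restriction = restriction  # -> desired output geometry

    def __call__(self, data_in):
        # The image is only an intermediate quantity; no explicit
        # knowledge of reflector positions enters the mapping.
        return self.restriction(self.modeling(self.imaging(data_in)))

# Toy usage with placeholder linear maps: stack over input offsets
# (suppressing artifacts), remodel, then restrict to a subset.
cont = Continuation(imaging=lambda d: d.mean(axis=0),
                    modeling=lambda img: np.tile(img, (8, 1)),
                    restriction=lambda d: d[::2])
```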


2020, Vol 221 (1), pp. 352-370
Author(s): N Karamzadeh, S Heimann, T Dahm, F Krüger

SUMMARY A collection of earthquake sources recorded at a single station can, under specific conditions, be considered as a source array (SA): the sources are interpreted as if they originated at the station location and were recorded at the source locations. Array processing methods, that is, array beamforming, then become applicable to the analysis of the recorded signals. A possible application is to use source-array multiple-event techniques to locate and characterize near-source scatterers and structural interfaces. In this work, the aim is to facilitate the use of earthquake source arrays by presenting an automatic search algorithm to configure the source-array elements. We developed a procedure to search for an optimal distribution of source-array elements given an earthquake catalogue that includes accurate origin times and hypocentre locations. The objective function of the optimization process can be flexibly defined for each application to ensure the prerequisites (criteria) of making a source array. We formulated four quantitative criteria as subfunctions and used the weighted-sum technique to combine them into a single scalar function. The criteria are: (1) to control the accuracy of the slowness vector estimation using the time-domain beamforming method, (2) to measure the waveform coherency of the array elements, (3) to select events with lower location errors and (4) to select traces with high energy of specific phases, that is, sp- or ps-phases. The proposed procedure is verified using synthetic data as well as real examples from the Vogtland region in Northwest Bohemia. We discuss the possible application of the optimized source arrays to identifying the location of scatterers in the velocity model, by presenting a synthetic test and an example using real waveforms.
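A minimal sketch of the weighted-sum scalarization is shown below; the four criterion subfunctions are stubbed as callables that score a candidate element set on [0, 1], since their internals are application-specific.

```python
import numpy as np

def source_array_objective(candidate, weights, criteria):
    """candidate: indices of catalogue events in the trial array;
    weights: four nonnegative weights (summing to 1);
    criteria: four callables scoring (1) slowness-vector accuracy,
    (2) waveform coherency, (3) location error, (4) phase energy."""
    scores = np.array([c(candidate) for c in criteria])
    return float(np.dot(weights, scores))  # single scalar to maximize
```

Any search strategy (grid, greedy, or evolutionary) can then maximize this scalar over candidate event subsets drawn from the catalogue.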


Geophysics, 2011, Vol 76 (5), pp. WB191-WB207
Author(s): Yaxun Tang, Biondo Biondi

We present a new strategy for efficient wave-equation migration-velocity analysis in complex geological settings. The proposed strategy has two main steps: simulating a new data set using an initial unfocused image, and performing wavefield-based tomography using this data set. We demonstrate that the new data set can be synthesized by using generalized Born wavefield modeling for a specific target region where the velocities are inaccurate. We also show that the new data set can be much smaller than the original one because of the target-oriented modeling strategy, yet it contains the velocity information necessary for successful velocity analysis. These features make the new data set suitable for target-oriented, fast, and interactive velocity model building. We demonstrate the performance of our method on both a synthetic data set and a field data set acquired from the Gulf of Mexico, where we update the subsalt velocity in a target-oriented fashion and obtain a subsalt image with improved continuity, higher signal-to-noise ratio, and flatter angle-domain common-image gathers.
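A frequency-domain Born-modeling sketch conveys why the new data set stays small: scattered data are synthesized only from perturbations inside the target region. The homogeneous-background 2D Green's function used here is an illustrative simplification of the generalized Born modeling in the paper.

```python
import numpy as np
from scipy.special import hankel1

def green2d(xa, xb, omega, c0):
    """2D homogeneous Helmholtz Green's function (xa != xb assumed)."""
    r = np.linalg.norm(np.asarray(xa, float) - np.asarray(xb, float))
    return 0.25j * hankel1(0, omega * r / c0)

def born_data(src, receivers, target_pts, dm, omega, c0, cell_area):
    """dm: slowness-squared perturbation at each target grid point."""
    data = np.zeros(len(receivers), dtype=complex)
    for x, m in zip(target_pts, dm):
        u_inc = green2d(src, x, omega, c0)     # source -> scatterer
        for k, rec in enumerate(receivers):
            # Single (Born) scattering from the target region only.
            data[k] += (omega**2 * m * u_inc *
                        green2d(x, rec, omega, c0) * cell_area)
    return data
```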


Geophysics, 2017, Vol 82 (4), pp. S307-S314
Author(s): Yibo Wang, Yikang Zheng, Qingfeng Xue, Xu Chang, Tong W. Fei, ...

In the implementation of migration of multiples, reverse time migration (RTM) is superior to other migration algorithms because it can handle steeply dipping structures and offer high-resolution images of the complex subsurface. However, RTM results obtained using the two-way wave equation contain high-amplitude, low-frequency noise and false images generated by improper wave paths in migration velocity models with sharp velocity interfaces or strong velocity gradients. To improve the imaging quality of RTM of multiples, we separate the upgoing and downgoing waves in the propagation of the source and receiver wavefields. A complex function involving the Hilbert transform is used in the wavefield decomposition. Our approach is cost effective and avoids the large storage of wavefield snapshots required by the conventional wavefield separation technique. We applied migration of multiples with wavefield decomposition to a simple two-layer model and the Sigsbee 2B synthetic data set. Our results demonstrate that the proposed approach can significantly improve the image generated by migration of multiples.
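One generic way to realize such a decomposition is via the analytic signal: make the wavefield analytic in time with a Hilbert transform, then split it by the sign of the vertical wavenumber, so no extra snapshot storage is needed. This is a textbook construction offered as a sketch; the paper's exact complex function may differ.

```python
import numpy as np
from scipy.signal import hilbert

def updown_split(u):
    """u: (nz, nt) real wavefield slice. Returns (up, down) such that
    up + down reconstructs u (up to edge effects)."""
    ua = hilbert(u, axis=1)          # analytic in time: positive freqs only
    U = np.fft.fft(ua, axis=0)       # spectrum over depth
    kz = np.fft.fftfreq(u.shape[0])
    up, down = U.copy(), U.copy()
    up[kz < 0, :] = 0                # one sign of kz -> upgoing
    down[kz >= 0, :] = 0             # the other -> downgoing (convention-
    up = np.real(np.fft.ifft(up, axis=0))    # dependent choice of signs)
    down = np.real(np.fft.ifft(down, axis=0))
    return up, down
```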


Geophysics, 2006, Vol 71 (3), pp. R31-R42
Author(s): Changsoo Shin, Dong-Joo Min

Although waveform inversion has been studied extensively since its beginning [Formula: see text] ago, applications to seismic field data have been limited, and most of those applications have been for global-seismology- or engineering-seismology-scale problems, not for exploration-scale data. As an alternative to classical waveform inversion, we propose the use of a new objective function constructed by taking the logarithm of wavefields, allowing consideration of three types of objective function, namely, amplitude only, phase only, or both. In our waveform inversion, we estimate the source signature as well as the velocity structure by including functions of the amplitudes and phases of the source signature in the objective function. We compute the steepest-descent directions by using a matrix formalism derived from a frequency-domain, finite-element/finite-difference modeling technique. Our numerical algorithms are similar to those of reverse-time migration and waveform inversion based on the adjoint state of the wave equation. To demonstrate the practical applicability of our algorithm, we use a synthetic data set from the Marmousi model and seismic data collected from the Korean continental shelf. For noise-free synthetic data, the velocity structure produced by our inversion algorithm is closer to the true velocity structure than that obtained with conventional waveform inversion. When random noise is added, the inverted velocity model is also close to the true Marmousi model, but when frequencies below [Formula: see text] are removed from the data, the velocity structure is not as good as those for the noise-free and noisy data. For field data, we compare the time-domain synthetic seismograms generated for the velocity model inverted by our algorithm with the real seismograms and find that our inversion algorithm reveals short-period features of the subsurface. Although we use wrapped phases in our examples, we still obtain reasonable results; we expect that correctly unwrapped phases would yield better results.
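The logarithmic objective can be made concrete at a single frequency: the log of the ratio of predicted to observed wavefields separates into a log-amplitude part and a wrapped-phase part, which is how the amplitude-only, phase-only, and combined variants arise. Below is a schematic residual (a hypothetical helper, with a small eps guarding against division by zero).

```python
import numpy as np

def log_residual(u_pred, d_obs, mode="both", eps=1e-12):
    """u_pred, d_obs: complex receiver wavefields at one frequency.
    Returns the residual whose L2 norm the inversion minimizes."""
    ratio = (u_pred + eps) / (d_obs + eps)
    r_amp = np.log(np.abs(ratio))   # log-amplitude mismatch
    r_pha = np.angle(ratio)         # wrapped phase mismatch, (-pi, pi]
    if mode == "amplitude":
        return r_amp
    if mode == "phase":
        return r_pha
    return r_amp + 1j * r_pha       # full logarithmic residual
```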


Geophysics, 1988, Vol 53 (3), pp. 334-345
Author(s): Ernest R. Kanasewich, Suhas M. Phadke

In routine seismic processing, normal moveout (NMO) corrections are performed to enhance the reflected signals on common‐depth‐point or common‐midpoint stacked sections. However, when faults are present, reflection interference from the two blocks and the diffractions from their edges hinder fault location determination. Destruction of diffraction patterns by poststack migration further inhibits proper imaging of diffracting centers. This paper presents a new technique that helps in the interpretation of diffracting edges by concentrating the signal amplitudes from discontinuous diffracting points on seismic sections. It involves applying to the data moveout and amplitude corrections appropriate to an assumed diffractor location. The maximum diffraction amplitude occurs at the location of the receiver for which the diffracting discontinuity is beneath the source‐receiver midpoint. Since the amplitudes of these diffracted signals drop very rapidly on either side of the midpoint, an appropriate amplitude correction must be applied. Also, because the diffracted signals are present on all traces, one can use all of them to obtain a stacked trace for one possible diffractor location. Repeating this procedure for diffractors assumed to be located beneath each surface point results in the common‐fault‐point (CFP) stacked section, which shows diffractor locations as high amplitudes. The method was tested on synthetic data with and without noise. It proves to be quite effective, but is sensitive to the velocity model used for the moveout corrections; therefore, the velocity model obtained from NMO stacking is generally used for enhancing diffractor locations by stacking. Finally, the technique was applied to a field reflection data set from an area south of the Princess well in Alberta.
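The moveout-and-stack kernel admits a compact sketch: for one assumed diffractor, sum amplitude-corrected samples along the diffraction traveltime curve over all traces. Constant-velocity straight-ray moveout and a simple midpoint taper are illustrative stand-ins for the paper's corrections.

```python
import numpy as np

def cfp_stack(data, dt, src_x, rec_x, diff_x, diff_z, v):
    """data: (n_traces, n_t); src_x, rec_x: (n_traces,) coordinates;
    (diff_x, diff_z): assumed diffractor; v: stacking velocity."""
    n_traces, n_t = data.shape
    out = 0.0
    for i in range(n_traces):
        # Two-leg diffraction traveltime: source -> diffractor -> receiver.
        t = (np.hypot(src_x[i] - diff_x, diff_z) +
             np.hypot(rec_x[i] - diff_x, diff_z)) / v
        # Taper decaying away from the trace whose midpoint sits above
        # the diffractor, mimicking the amplitude correction.
        mid = 0.5 * (src_x[i] + rec_x[i])
        w = 1.0 / (1.0 + abs(mid - diff_x))
        k = int(round(t / dt))
        if 0 <= k < n_t:
            out += w * data[i, k]
    return out  # one stacked sample for this assumed diffractor location
```

Repeating this over diffractors assumed beneath each surface point builds the CFP stacked section trace by trace.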

