Earthquake source arrays: optimal configuration and applications in crustal structure studies

2020, Vol 221 (1), pp. 352-370
Author(s): N Karamzadeh, S Heimann, T Dahm, F Krüger

SUMMARY A collection of earthquake sources recorded at a single station can, under specific conditions, be considered a source array (SA): the recordings are interpreted as if the earthquake sources originate at the station location and are recorded at the source locations. Array processing methods, that is, array beamforming, then become applicable to analyse the recorded signals. A possible application is to use source-array multiple-event techniques to locate and characterize near-source scatterers and structural interfaces. In this work the aim is to facilitate the use of earthquake source arrays by presenting an automatic search algorithm to configure the source-array elements. We developed a procedure to search for an optimal distribution of source-array elements, given an earthquake catalogue with accurate origin times and hypocentre locations. The objective function of the optimization can be defined flexibly for each application to ensure that the prerequisites (criteria) for forming a source array are met. We formulated four quantitative criteria as subfunctions and used the weighted-sum technique to combine them into a single scalar function. The criteria are: (1) to control the accuracy of the slowness vector estimation using time-domain beamforming, (2) to measure the waveform coherency of the array elements, (3) to select events with lower location error and (4) to select traces with high energy in specific phases, that is, sp- or ps-phases. The proposed procedure is verified using synthetic data as well as real examples from the Vogtland region in Northwest Bohemia. We discuss the possible application of the optimized source arrays to identify the location of scatterers in the velocity model, presenting a synthetic test and an example using real waveforms.
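
The following is a minimal Python sketch (not the authors' code) of the weighted-sum combination of criterion subfunctions described above; the criterion values and weights used here are hypothetical placeholders standing in for the paper's four criteria.

```python
import numpy as np

def source_array_score(criteria, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted-sum combination of normalized criterion subfunction values.

    criteria : sequence of four values in [0, 1], e.g.
        (slowness_accuracy, waveform_coherency, location_quality, phase_energy),
        each evaluated for one candidate set of source-array events.
    """
    c = np.asarray(criteria, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), c))

# Hypothetical usage: score two candidate event subsets and keep the better one.
candidate_a = (0.8, 0.6, 0.9, 0.5)   # placeholder criterion values
candidate_b = (0.7, 0.9, 0.7, 0.8)
best = max([candidate_a, candidate_b], key=source_array_score)
```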

2019, Vol 217 (3), pp. 1727-1741
Author(s): D W Vasco, Seiji Nakagawa, Petr Petrov, Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales with the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers, and by velocity-model errors. The waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
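
The location principle lends itself to a short illustration. The sketch below assumes straight-ray traveltimes in a constant-velocity medium in place of the waveform-derived traveltime fields used in the paper, and shows the shifted, time-reversed fields and the dispersion measure that is minimized at the source.

```python
import numpy as np

v = 3000.0                                    # assumed constant velocity (m/s)
stations = np.array([[0., 0.], [4000., 0.], [0., 4000.], [4000., 4000.]])
true_src, true_t0 = np.array([1500., 2500.]), 2.0

# Synthetic arrival times: distance / velocity + origin time
arrivals = true_t0 + np.linalg.norm(stations - true_src, axis=1) / v

# Grid of candidate source positions
xs, ys = np.meshgrid(np.linspace(0, 4000, 81), np.linspace(0, 4000, 81))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Shifted, time-reversed fields: one origin-time estimate per station per node
tt = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2) / v
origin_estimates = arrivals[None, :] - tt          # (n_nodes, n_stations)

# Dispersion across stations is minimized at the correct source position
spread = origin_estimates.std(axis=1)
best = np.argmin(spread)
print("estimated location:", grid[best],
      "origin time:", origin_estimates[best].mean())
```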


2016, Vol 58 (6)
Author(s): V. G. Krishna

Vertical-component record sections of local earthquake seismograms from a state-of-the-art Koyna-Warna digital seismograph network are assembled in the reduced time versus epicentral distance frame, similar to those obtained in seismic refraction profiling. The record sections obtained for an average source depth display processed seismograms from nearly equal source depths with similar source mechanisms, recorded in a narrow azimuth range, illuminating the upper crustal P and S velocity structure in the region. Further, the seismogram characteristics of the local earthquake sources are found to vary significantly for different source mechanisms, and the amplitude variations exceed those due to velocity-model stratification. In the present study a large number of reflectivity synthetic seismograms are computed at near offsets for a stratified upper crustal model having sharp discontinuities with 7%-10% velocity contrasts. The synthetics are obtained for different source regimes (e.g., strike-slip, normal, reverse) and different sets of source parameters (strike, dip, and rake) within each regime. Seismogram sections with a dominantly strike-slip mechanism are found to be clearly favorable for revealing the velocity stratification for both P and S waves. In contrast, the seismogram sections for earthquakes with other source mechanisms display the upper crustal P phases poorly, with low amplitudes, even in the presence of sharp discontinuities with high velocity contrasts. The observed seismogram sections illustrated here for earthquake sources with strike-slip and normal mechanisms from the Koyna-Warna seismic region substantiate these findings. Travel times and reflectivity synthetic seismograms are used for 1-D modeling of the observed virtual-source local earthquake seismogram sections and for inferring the upper crustal velocity structure in the Koyna-Warna region. Significantly, the inferred upper crustal velocity model in the region reproduces synthetic seismograms comparable to the observed sections for earthquake sources with differing mechanisms in the Koyna and Warna regions.
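
As an aside, a reduced-time record section of the kind described above can be assembled in a few lines; in this sketch the reduction velocity, epicentral distances and Gaussian "arrivals" are illustrative placeholders, not Koyna-Warna data.

```python
import numpy as np
import matplotlib.pyplot as plt

v_red = 6.0                      # assumed reduction velocity, km/s
dt, nt = 0.01, 1000
t = np.arange(nt) * dt           # time axis, s

fig, ax = plt.subplots()
for x in np.linspace(5.0, 40.0, 8):                 # epicentral distances, km
    trace = np.exp(-((t - x / 5.8) / 0.05) ** 2)    # placeholder P arrival
    ax.plot(t - x / v_red, x + 2.0 * trace, color="k", lw=0.5)
ax.set_xlabel("reduced time t - x/%.1f (s)" % v_red)
ax.set_ylabel("epicentral distance (km)")
plt.show()
```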


Geophysics, 2012, Vol 77 (2), pp. V41-V59
Author(s): Olena Tiapkina, Martin Landrø, Yuriy Tyapkin, Brian Link

The advent of single-receiver-point, multicomponent geophones has necessitated that ground roll be removed in the processing flow rather than through acquisition design. A wide class of processing methods for ground-roll elimination is polarization filtering. A number of these methods use singular value decomposition (SVD) or related transformations. We focus on a single-station SVD-based polarization filter that we consider to be one of the best in the industry. The method comprises two stages: (1) ground-roll detection and (2) ground-roll estimation and filtering. To detect the ground roll, a special attribute dependent on the singular values of a three-column matrix formed within a sliding time window is used. The ground roll is approximated and subtracted using the first two eigenimages of this matrix. To limit possible damage to the signal, the filter operates only within the record intervals where ground roll is detected and within the ground-roll frequency bandwidth. We improve the ground-roll detector to make it theoretically insensitive to ambient noise and more sensitive to the presence of ground roll. The advantage of the new detector is demonstrated on synthetic and field data sets. We estimate, theoretically and with synthetic data, the attenuation of the underlying reflections that can be caused by the polarization filter. We show that the underlying signal always loses almost all of its energy on the vertical component and on the horizontal component in the ground-roll propagation plane, within the ground-roll frequency bandwidth. The only signal component, if it exists, that can retain a significant part of its energy is the horizontal component orthogonal to this plane. When 2D 3C field operations are conducted, the signal particle motion can deviate from the ground-roll propagation plane and can therefore retain some of its energy owing to offline reflections. In 3D 3C seismic surveys, the reflected signal always deviates from the ground-roll propagation plane on receiver lines that do not contain the source. This is confirmed with a 2.5D 3C synthetic data set. We discuss when the ability of the filter to effectively subtract the ground roll may, or may not, allow us to ignore the inevitable harm done to the underlying reflected waves.
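
A simplified sketch of the two-stage filter is given below; the detection attribute and threshold are placeholders rather than the improved detector of the paper, and the restriction to the ground-roll frequency bandwidth is omitted.

```python
import numpy as np

def polarization_filter(z, r, t, win=50, threshold=0.8):
    """Subtract the rank-2 approximation (first two eigenimages) in windows
    where a singular-value attribute flags polarized ground roll."""
    data = np.stack([z, r, t], axis=1).astype(float)   # three-column matrix
    out = data.copy()
    for i in range(0, data.shape[0] - win + 1, win):
        M = data[i:i + win]
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        # Placeholder attribute: energy fraction carried by the first two
        # singular values (close to 1 for elliptically polarized ground roll).
        attr = (s[0]**2 + s[1]**2) / ((s**2).sum() + 1e-12)
        if attr > threshold:
            ground_roll = (U[:, :2] * s[:2]) @ Vt[:2]   # first two eigenimages
            out[i:i + win] -= ground_roll
    return out[:, 0], out[:, 1], out[:, 2]
```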


Geophysics, 2021, pp. 1-35
Author(s): M. Javad Khoshnavaz

Building an accurate velocity model plays a vital role in routine seismic imaging workflows. Normal-moveout-based seismic velocity analysis is a popular method for building velocity models. However, traditional velocity analysis methodologies are generally unable to handle amplitude variations across moveout curves, specifically polarity reversals caused by amplitude-versus-offset anomalies. I present a normal-moveout-based velocity analysis approach that circumvents this shortcoming by modifying the conventional semblance function to include polarity and amplitude correction terms, computed from the correlation coefficients between the seismic traces in the velocity-analysis scanning window and a reference trace. Thus, the proposed workflow is suitable for any class of amplitude-versus-offset effects. The approach is demonstrated on four synthetic data examples under different conditions and on a field data set consisting of a common-midpoint gather. The lateral resolution enhancement provided by the proposed workflow is evaluated by comparing its results with those of conventional semblance and of three semblance-based velocity analysis algorithms developed to address amplitude variations across moveout curves caused by seismic attenuation and class II amplitude-versus-offset anomalies. According to the obtained results, the proposed workflow is superior to all the compared workflows in handling such anomalies.
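
A hedged sketch of a correlation-weighted semblance in the spirit of this modification follows; the function names and the exact form of the correction terms are illustrative assumptions, not the published formulation.

```python
import numpy as np

def nmo_samples(data, dt, offsets, t0, v, halfwin=5):
    """Gather samples along the hyperbola t(x) = sqrt(t0^2 + (x/v)^2)
    within a small vertical window; data has shape (nsamples, ntraces)."""
    t = np.sqrt(t0**2 + (offsets / v) ** 2)
    base = np.clip((t / dt).astype(int), halfwin, data.shape[0] - halfwin - 1)
    rows = base[None, :] + np.arange(-halfwin, halfwin + 1)[:, None]
    return data[rows, np.arange(data.shape[1])[None, :]]   # (win, ntraces)

def correlation_weighted_semblance(data, dt, offsets, t0, v, halfwin=5):
    a = nmo_samples(data, dt, offsets, t0, v, halfwin)
    ref = a[:, 0]                            # reference trace (near offset)
    w = np.zeros(a.shape[1])
    for j in range(a.shape[1]):              # signed correlation weights
        if ref.std() > 0 and a[:, j].std() > 0:
            w[j] = np.corrcoef(ref, a[:, j])[0, 1]
    aw = a * w[None, :]                      # polarity/amplitude correction
    num = (aw.sum(axis=1) ** 2).sum()
    den = a.shape[1] * (aw ** 2).sum()
    return num / den if den > 0 else 0.0
```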


Geophysics, 2019, Vol 84 (3), pp. KS59-KS69
Author(s): Chao Song, Zedong Wu, Tariq Alkhalifah

Passive seismic monitoring has become an effective method for understanding underground processes. Time-reversal-based methods are often used to locate passive seismic events directly. However, these methods depend strongly on the accuracy of the velocity model. Full-waveform inversion (FWI) has been used on passive seismic data to invert for the velocity model and the source image simultaneously. However, waveform inversion of passive seismic data uses mainly transmission energy, which results in poor illumination and low resolution. We developed a waveform inversion using multiscattered energy for passive seismic data to extract more information from the data than conventional FWI. Using transmission wavepath information from single and double scattering, computed from a predicted scatterer field acting as secondary sources, our method provides better illumination of the velocity model than conventional FWI. Using a new objective function, we optimized the source image and velocity model, including the multiscattered energy, simultaneously. Because the method operates in the frequency domain with a complex source function that includes spatial and wavelet information, we mitigate the uncertainties of the source wavelet and source origin time. Inversion results for the Marmousi model indicate that, by taking advantage of multiscattered energy and starting from a reasonably acceptable frequency (a single source at 3 Hz and multiple sources at 5 Hz), our method yields better inverted velocity models and source images than conventional FWI.
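
For orientation, a schematic frequency-domain FWI misfit with a complex source function is written below in its standard form; the paper's new objective additionally includes the single- and double-scattered wavefields generated by the predicted scatterer field, which is not reproduced here.

$$
J(m, s) = \frac{1}{2} \sum_{\omega} \sum_{r} \bigl| P_r\, u(\mathbf{x}, \omega; m, s) - d_r(\omega) \bigr|^2,
\qquad \bigl( \nabla^2 + \omega^2 m(\mathbf{x}) \bigr)\, u = s(\mathbf{x}, \omega),
$$

where $m$ is the squared slowness, $s(\mathbf{x}, \omega)$ is the complex source function carrying both spatial and wavelet information (so that the source wavelet and origin time need not be known separately), $P_r$ samples the wavefield at receiver $r$, and $d_r(\omega)$ are the observed passive data.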


Geophysics, 2014, Vol 79 (4), pp. R121-R131
Author(s): Hu Jin, George A. McMechan

A 2D velocity model was estimated by tomographic imaging of overlapping focusing operators that contain one-way traveltimes, from common-focus points to receivers in an aperture along the earth’s surface. The stability and efficiency of convergence and the quality of the resulting models were improved by a sequence of ideas. We used a hybrid parameterization that has an underlying grid, upon which is superimposed a flexible pseudolayer model. We first solved for the low-wavenumber parts of the model (approximating it as constant-velocity pseudolayers), then we allowed intermediate wavenumbers (allowing the layers to have linear velocity gradients), and finally performed unconstrained iterations to add the highest-wavenumber details. Layer boundaries were implicitly defined by focus points that align along virtual marker (reflector) horizons. Each focus point sampled an area bounded by the first and last rays in the data aperture at the surface; this reduced the amount of computation and the size of the effective null space of the solution. Model updates were performed simultaneously for the velocities and the local focus point positions in two steps; local estimates were performed independently by amplitude semblance for each focusing operator within its area of dependence, followed by a tomographic weighting of the local estimates into a global solution for each grid point, subject to the constraints of the parameterization used at that iteration. The system of tomographic equations was solved by simultaneous iterative reconstruction, which is equivalent to a least-squares solution, but it does not involve a matrix inversion. The algorithm was successfully applied to synthetic data for a salt dome model using a constant-velocity starting model; after a total of 25 iterations, the velocity error was [Formula: see text] and the final mean focal point position error was [Formula: see text] wavelength.
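
The simultaneous iterative reconstruction step mentioned above can be illustrated with a small, inversion-free sketch; the weighting used here is one common choice for the SIRT/SART family, and the random system is purely illustrative, not a real tomography matrix.

```python
import numpy as np

def sirt(A, b, n_iter=5000, relax=1.0):
    """SIRT/SART-style simultaneous update: no explicit matrix inversion,
    residuals are back-projected with row- and column-sum normalization."""
    x = np.zeros(A.shape[1])
    row_sums = np.abs(A).sum(axis=1)
    col_sums = np.abs(A).sum(axis=0)
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    for _ in range(n_iter):
        residual = b - A @ x
        x += relax * (A.T @ (residual / row_sums)) / col_sums
    return x

# Illustrative consistent system
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
x_true = rng.random(40)
x_est = sirt(A, A @ x_true)
print("relative model error:",
      np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```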


Geophysics, 2019, Vol 84 (3), pp. R411-R427
Author(s): Gang Yao, Nuno V. da Silva, Michael Warner, Di Wu, Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens when the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, for each trace in a shot, we linearly scaled down the time difference between the two first breaks into a time shift, such that the maximum shift within the shot was less than half a cycle. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. It thus produces a background velocity model accurate enough for conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness when applied to first arrivals.
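
A sketch of the intermediate-data construction for one shot is given below, assuming a simple amplitude-threshold first-break picker and a user-supplied half period; both stand in for the picker and cycle estimate an actual implementation would use.

```python
import numpy as np

def first_break(trace, dt, frac=0.05):
    """Time of the first sample exceeding a fraction of the peak amplitude
    (a crude stand-in for a real first-break picker)."""
    above = np.abs(trace) > frac * np.abs(trace).max()
    return np.argmax(above) * dt

def intermediate_data(predicted, recorded, dt, half_period):
    """Shift the predicted traces of one shot toward the recorded first breaks,
    with the largest shift scaled down to stay below half a cycle.
    predicted, recorded: arrays of shape (nsamples, ntraces)."""
    ntr = predicted.shape[1]
    lags = np.array([first_break(recorded[:, j], dt) - first_break(predicted[:, j], dt)
                     for j in range(ntr)])
    max_lag = np.abs(lags).max()
    scale = min(1.0, half_period / max_lag) if max_lag > 0 else 1.0
    out = np.empty_like(predicted)
    for j in range(ntr):
        shift = int(round(lags[j] * scale / dt))
        out[:, j] = np.roll(predicted[:, j], shift)  # wrap-around ignored here
    return out
```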


2017, Vol 5 (3), pp. SJ81-SJ90
Author(s): Kainan Wang, Jesse Lomask, Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We describe a modification of DTW that creates a blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints. Consequently, it results in updated velocities that are more realistic than those from other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting interval velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic data example and a field data set.
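
For reference, a bare-bones dynamic time warping core of the kind BDW builds on is sketched below; the blocking constraint that keeps interval-velocity updates constant or linear within each geologic layer is not reproduced here.

```python
import numpy as np

def dtw_path(synthetic, seismic):
    """Return the optimal alignment path between two 1-D traces
    under a squared-difference local cost (plain DTW, no blocking)."""
    n, m = len(synthetic), len(seismic)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (synthetic[i - 1] - seismic[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal alignment
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]       # pairs of (synthetic index, seismic index)
```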

