Lithospheric structure models applied for locating the Romanian seismic events

1994 · Vol 37 (3) · Author(s): M. Rizescu, E. Popescu, V. Oancea, D. Enescu

The paper presents our attempts to improve the locations obtained for local seismic events by using refined lithospheric structure models. The location program (based on the Geiger method) assumes a known velocity model. The program is run for several seismic sequences which occurred in different regions of the Romanian territory, using three velocity models for each sequence: 1) 7 layers of constant seismic-wave velocity, representing an average lithospheric structure for the whole territory; 2) a site-dependent structure (below each station), based on geophysical and geological information on the crust; 3) curves describing the variation of propagation velocity with depth in the lithosphere, characterizing the 7 structural units delineated on the Romanian territory. The results obtained with the different velocity models are compared, and station corrections are computed for each data set. Finally, the locations determined for some quarry blasts are compared with the real ones.
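The Geiger method referenced above is an iterative, linearized least-squares scheme: predict travel times for a trial hypocenter, form residuals, and solve for a correction to location and origin time. The sketch below is a minimal illustration assuming a homogeneous half-space of constant velocity (a deliberate simplification; the paper uses layered and region-dependent models), and all station coordinates and values are hypothetical.

```python
import numpy as np

def locate_geiger(stations, t_obs, v=6.0, n_iter=10):
    """Geiger-style hypocenter location in a constant-velocity half-space.

    stations : (N, 3) station coordinates in km (z positive down)
    t_obs    : (N,) observed P arrival times in s
    v        : assumed uniform P velocity in km/s (simplification)
    Returns the model vector (x, y, z, t0).
    """
    # Initial guess: below the network centroid at 10 km depth, t0 = 0.
    m = np.append(stations.mean(axis=0), 0.0)
    m[2] = 10.0
    for _ in range(n_iter):
        d = stations - m[:3]                   # hypocenter-to-station vectors
        r = np.linalg.norm(d, axis=1)          # source-station distances
        res = t_obs - (m[3] + r / v)           # travel-time residuals
        # Jacobian of predicted time w.r.t. (x, y, z, t0): dt/dx_src = -d/(v r)
        G = np.hstack([-d / (v * r[:, None]), np.ones((len(r), 1))])
        dm, *_ = np.linalg.lstsq(G, res, rcond=None)
        m += dm                                # linearized least-squares update
    return m

# Hypothetical test: synthetic event at (5, 5, 12) km with t0 = 1 s.
rng = np.random.default_rng(0)
sta = rng.uniform(-20, 20, size=(8, 3)); sta[:, 2] = 0.0   # surface stations
true = np.array([5.0, 5.0, 12.0, 1.0])
t = true[3] + np.linalg.norm(sta - true[:3], axis=1) / 6.0
print(locate_geiger(sta, t))   # converges near `true` for noise-free data
```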

2018 · Vol 40 (3) · pp. 1091 · Author(s): Ch. K. Karamanos, G. V. Karakostas, E. E. Papadimitriou, M. Sachpazi

The area of the North Aegean Trough exhibits complex tectonic characteristics as a consequence of the presence of complicated active structures. Accurately determined earthquake data contribute considerably to the investigation of these structures, and such accuracy is sought through certain procedures. The focal parameters of earthquakes that occurred in this area during 1964-2003 were determined by collecting all the available P- and S-arrival data. After selecting the best solutions from an initial hypocentral location, 739 earthquakes that fulfilled certain accuracy criteria were retained for further processing. The study area was divided into 16 subregions, and with the HYPOINVERSE computer program the travel-time curves were constructed and used to define a velocity model for each subregion. For each subregion the time delays were calculated and applied as corrections to the arrival times of the seismic waves. The Vp/Vs ratio, necessary for the S-wave velocity models, was calculated with two different methods and found equal to 1.76. The velocity models and time delays were then used to relocate the events of the whole data set. The relocation resulted in a significant improvement in the accuracy of the focal parameter determination.
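The abstract does not name the two Vp/Vs estimation methods; one standard choice is the Wadati diagram, in which the S-P interval grows linearly with P travel time and the slope equals Vp/Vs - 1. A minimal sketch with hypothetical arrival times:

```python
import numpy as np

def vp_vs_wadati(tp, ts):
    """Estimate Vp/Vs from paired P and S arrival times (Wadati diagram).

    Since ts - tp = (Vp/Vs - 1) * (tp - t0), a straight-line fit of the
    S-P interval against the P arrival time has slope Vp/Vs - 1.
    """
    slope, _ = np.polyfit(tp, ts - tp, 1)
    return 1.0 + slope

# Hypothetical arrivals consistent with Vp/Vs = 1.76 (the study's value):
t0, dist = 2.0, np.array([10.0, 25.0, 40.0, 60.0, 85.0])   # origin time, km
tp = t0 + dist / 6.0            # P arrivals for Vp = 6 km/s
ts = t0 + dist / (6.0 / 1.76)   # S arrivals for Vs = Vp / 1.76
print(round(vp_vs_wadati(tp, ts), 2))   # -> 1.76
```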


Geophysics · 2019 · Vol 84 (1) · pp. R1-R10 · Author(s): Zhendong Zhang, Tariq Alkhalifah, Zedong Wu, Yike Liu, Bin He, ...

Full-waveform inversion (FWI) is an attractive technique due to its ability to build high-resolution velocity models. Conventional amplitude-matching FWI approaches remain challenging because the simplified computational physics used does not fully represent all wave phenomena in the earth. Because the earth is attenuating, a sample-by-sample fitting of the amplitude may not be feasible in practice. We have developed a normalized nonzero-lag crosscorrelation-based elastic FWI algorithm to maximize the similarity of the calculated and observed data. We use the first-order elastic-wave equation to simulate the propagation of seismic waves in the earth. Our proposed objective function emphasizes the matching of the phases of the events in the calculated and observed data, and thus, it is more immune to inaccuracies in the initial model and to the difference between the true and modeled physics. The normalization term can compensate for the energy loss at far offsets caused by geometric spreading and avoid a bias in the estimation toward extreme values in the observed data. We develop a polynomial-type weighting function and evaluate an approach to determine the optimal time lag. We use a synthetic elastic Marmousi model and the BigSky field data set to verify the effectiveness of the proposed method. To suppress short-wavelength artifacts in the estimated S-wave velocity and noise in the field data, we apply a Laplacian regularization and a total variation constraint on the synthetic and field data examples, respectively.
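The flavor of such an objective can be illustrated with a single trace pair: normalize both traces so amplitude differences matter less, crosscorrelate over a range of lags, and apply a polynomial taper that favors small lags. This sketch uses our own normalization and weighting choices, not the paper's exact functional, and all parameter names are illustrative.

```python
import numpy as np

def ncc_objective(d_obs, d_cal, max_lag=50, p=2):
    """Normalized nonzero-lag crosscorrelation similarity (illustrative).

    For each lag tau in [-max_lag, max_lag], compute the normalized
    crosscorrelation of the two traces and weight it by a polynomial
    taper (1 - |tau|/max_lag)**p, so phase alignment is rewarded while
    amplitude mismatch is largely ignored. Returns the (to-be-maximized)
    similarity and the lag at which it is attained.
    """
    a = d_obs / (np.linalg.norm(d_obs) + 1e-12)   # normalization removes the
    b = d_cal / (np.linalg.norm(d_cal) + 1e-12)   # overall amplitude/energy
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(a, np.roll(b, k)) for k in lags])
    w = (1.0 - np.abs(lags) / max_lag) ** p       # polynomial-type weighting
    score = w * cc
    return score.max(), lags[score.argmax()]

# A weaker, delayed copy still scores highly on phase similarity:
t = np.linspace(0, 1, 500)
obs = np.sin(2 * np.pi * 15 * t) * np.exp(-5 * t)
cal = 0.3 * np.roll(obs, 12)
print(ncc_objective(obs, cal))   # high score at lag -12
```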


Geophysics · 2018 · Vol 83 (4) · pp. M41-M48 · Author(s): Hongwei Liu, Mustafa Naser Al-Ali

The ideal approach for continuous reservoir monitoring allows the generation of fast and accurate images to cope with the massive data sets acquired for such a task. Conventionally, rigorous depth-oriented velocity-estimation methods are performed to produce sufficiently accurate velocity models. Unlike the traditional way, target-oriented imaging technology based on the common-focus-point (CFP) theory can be an alternative for continuous reservoir monitoring. The solution is based on a robust, data-driven, iterative operator-updating strategy that does not require a detailed velocity model. The same focusing operator is applied to successive 3D seismic data sets for the first time to generate efficient and accurate 4D target-oriented seismic stacked images from time-lapse field seismic data sets acquired in a CO2 injection project in Saudi Arabia. Using the focusing operator, target-oriented prestack angle-domain common-image gathers (ADCIGs) could be derived to perform amplitude-versus-angle analysis. To preserve the amplitude information in the ADCIGs, an amplitude-balancing factor is applied by embedding a synthetic data set using the real acquisition geometry to remove the geometry imprint artifact. Applying the CFP-based target-oriented imaging to the time-lapse data sets revealed changes at the reservoir level in the poststack and prestack time-lapse signals, consistent with the CO2 injection history and rock physics.


Author(s): Robert B. Herrmann, Charles J. Ammon, Harley M. Benz, Asiye Aziz-Zanjani, Joshua Boschelli

Abstract The variation of phase- and group-velocity dispersion of Love and Rayleigh waves was determined for the continental United States and adjacent Canada. By processing ambient noise from the broadband channels of the Transportable Array (TA) of USArray and several Program for Array Seismic Studies of the Continental Lithosphere experiments, and by using some earthquake recordings, the effort was focused on determining dispersion down to periods as short as 2 s. The relatively short distances between TA stations permitted the use of a 25 km × 25 km grid for the four independent tomographic inversions (Love and Rayleigh, phase and group velocity). One reason for trying to obtain short-period dispersion was to have a data set capable of constraining upper-crust velocity models for use in determining regional moment tensors. The benefit of focusing on short-period dispersion is apparent in the tomography maps: shallow geologic structures such as the Mid-Continent Rift and the Michigan, Illinois, Anadarko, Arkoma, and Appalachian basins are imaged. In our processing, we noted that the phase velocities were more robustly determined than the group velocities. We also noted that the inability to obtain dispersion at short periods shows distinct regional patterns that may be related to the local upper-crust structure.
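The core ambient-noise measurement can be sketched compactly: crosscorrelate simultaneous noise records from two stations to approximate the inter-station surface-wave Green's function, band-pass around a target period, and pick the envelope maximum to obtain a group traveltime and hence a group velocity. Real workflows add spectral whitening, month-long stacking, and multiple-filter analysis, all omitted here; the function and its parameters are our own illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def group_velocity(noise_a, noise_b, dist_km, dt, period_s):
    """Single-period group-velocity estimate from ambient noise (sketch)."""
    # Frequency-domain crosscorrelation of the two noise records.
    n = len(noise_a)
    cc = np.fft.irfft(np.fft.rfft(noise_a) * np.conj(np.fft.rfft(noise_b)), n)
    cc = np.fft.fftshift(cc)
    lags = (np.arange(n) - n // 2) * dt
    # Narrow band-pass around the target period (assumes dt resolves f0).
    f0 = 1.0 / period_s
    b, a = butter(4, [0.8 * f0 * 2 * dt, 1.2 * f0 * 2 * dt], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, cc)))   # group arrival = envelope peak
    # Restrict the pick to plausible group velocities of 1-5 km/s.
    window = (lags > dist_km / 5.0) & (lags < dist_km / 1.0)
    t_group = lags[window][env[window].argmax()]
    return dist_km / t_group
```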


Geophysics · 1994 · Vol 59 (4) · pp. 577-590 · Author(s): Side Jin, Raul Madariaga

Seismic reflection data contain information on small-scale impedance variations and a smooth reference velocity model. Given a reference velocity model, the reflectors can be obtained by linearized migration-inversion. If the reference velocity is incorrect, the reflectors obtained by inverting different subsets of the data will be incoherent. We propose to use the coherency of these images to invert for the background velocity distribution. We have developed a two-step iterative inversion method in which we separate the retrieval of small-scale variations of the seismic velocity from that of the longer-wavelength reference velocity model. Given an initial background velocity model, we use a waveform misfit functional for the inversion of small-scale velocity variations. For this linear step we use the linearized migration-inversion method based on ray theory that we recently developed with Lambaré and Virieux. The reference velocity model is then updated by a Monte Carlo inversion method. For the nonlinear inversion of the velocity background, we introduce an objective functional that measures the coherency of the short-wavelength components obtained by inverting different common-shot gathers at the same locations. The nonlinear functional is calculated directly in migrated data space to avoid expensive numerical forward modeling by finite differences or ray theory. Our method is somewhat similar to an iterative migration velocity analysis, but we perform an automatic search for relatively large-scale 1D reference velocity models. We apply the nonlinear inversion method to a marine data set from the North Sea and show that nonlinear inversion can be applied to data sets of realistic size to obtain a laterally heterogeneous velocity model with a reasonable amount of computer time.
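The coherency idea can be illustrated with a semblance-like measure: when the background velocity is right, images migrated from different shot gathers agree at the same locations and the stack carries most of the energy. The semblance form below is our choice for illustration, not the paper's exact functional, and `migrate_shots` is a hypothetical placeholder for the linearized migration-inversion step.

```python
import numpy as np

def image_coherency(images):
    """Semblance-like coherency of a stack of single-shot migrated images.

    images : (n_shots, nz, nx) array, one migrated image per common-shot
             gather. Returns a scalar in (0, 1]; it approaches 1 when the
             images agree, i.e. when the background velocity is correct.
    """
    num = np.sum(images.sum(axis=0) ** 2)         # energy of the stack
    den = images.shape[0] * np.sum(images ** 2)   # total single-image energy
    return num / (den + 1e-12)

# In a Monte Carlo search, the candidate background maximizing coherency
# is kept (migrate_shots is hypothetical):
# best = max(candidates, key=lambda m: image_coherency(migrate_shots(m)))
```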


2020 · Vol 8 (4) · pp. SQ39-SQ45 · Author(s): Rahul Gogia, Raman Singh, Paul de Groot, Harshit Gupta, Seshan Srirangarajan, ...

We have developed a new algorithm for tracking 3D seismic horizons. The algorithm combines an inversion-based, seismic-dip flattening technique with conventional, similarity-based autotracking. The inversion part of the algorithm minimizes the error between horizon dips and computed seismic dips. After each cycle of the inversion loop, more seeds are added to the horizon by the similarity-based autotracker. In the example data set, the algorithm is first used to quickly track a set of framework horizons, each guided by a small set of user-picked seed positions. Next, the intervals bounded by the framework horizons are infilled in a similar manner, under the supervision of a human interpreter, to generate a dense set of horizons, also known as a HorizonCube. The results show that the algorithm performs better than unconstrained flattening techniques in intervals with trackable events. Inversion-based algorithms generate continuous horizons with no holes to be filled posttracking with a gridding algorithm and no loop skips (jumps to the wrong event) that need to be edited, as is standard practice with autotrackers. Because editing is a time-consuming process, creating horizons with inversion-based algorithms tends to be faster than conventional autotracking. Horizons created with the adopted algorithm follow seismic events more closely than horizons generated with the inversion-only algorithm, and the fault crossings are sharper.
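The inversion part, fitting a horizon whose dip matches the computed seismic dip while honoring seed picks, reduces in 2D to a small linear least-squares problem. The discretization below is our own sketch (a first-difference dip operator plus weighted seed constraints); the published algorithm also interleaves the similarity-based autotracking, which is omitted here.

```python
import numpy as np

def invert_horizon(seismic_dip, seeds, seed_weight=100.0):
    """Fit a 2D horizon z(x) whose dip matches the computed seismic dip.

    seismic_dip : (nx-1,) local dip estimates dz/dx (samples per trace)
    seeds       : dict {trace_index: depth} of user-picked seed positions
    Minimizes ||D z - dip||^2 + w^2 ||z[i] - seed_i||^2, where D is a
    first-difference operator, so the horizon integrates the dip field
    while being pinned at the seeds; the result is continuous and hole-free.
    """
    nx = len(seismic_dip) + 1
    D = np.zeros((nx - 1, nx))
    idx = np.arange(nx - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0       # dz/dx ~ z[i+1] - z[i]
    rows, rhs = [D], [np.asarray(seismic_dip, float)]
    for i, depth in seeds.items():                 # weighted seed constraints
        e = np.zeros(nx); e[i] = seed_weight
        rows.append(e[None, :]); rhs.append([seed_weight * depth])
    A = np.vstack(rows)
    b = np.concatenate([np.ravel(r) for r in rhs])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Constant dip of 0.5 sample/trace, pinned at trace 0 at depth 100:
print(invert_horizon(np.full(9, 0.5), {0: 100.0})[:3])  # ~[100, 100.5, 101]
```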


Geophysics · 2014 · Vol 79 (4) · pp. EN77-EN90 · Author(s): Paolo Bergamo, Laura Valentina Socco

Surface-wave (SW) techniques are mainly used to retrieve 1D velocity models and are therefore characterized by a 1D approach, which might prove unsatisfactory when relevant 2D effects are present in the investigated subsurface. In the case of sharp and sudden lateral heterogeneities in the subsurface, a strategy to tackle this limitation is to estimate the location of the discontinuities and to separately process seismic traces belonging to quasi-1D subsurface portions. We direct our attention to methods aimed at locating discontinuities by identifying anomalies in SW propagation and attenuation. The considered methods are autospectrum computation and the attenuation analysis of Rayleigh waves (AARW). These methods were developed for purposes and/or scales of analysis different from those of this work, which aims at detecting and characterizing sharp subvertical discontinuities in the shallow subsurface. We applied both methods to two data sets: synthetic data from a finite-element simulation and a field data set acquired over a fault system, both presenting an abrupt lateral variation perpendicularly crossing the acquisition line. We also extended the AARW method to the detection of sharp discontinuities from large multifold data sets, and we tested these novel procedures on the field case. The two methods proved effective for the detection of the discontinuity, portraying propagation phenomena linked to the presence of the heterogeneity, such as the interference between incident and reflected wavetrains, and energy concentration as well as its subsequent decay at the fault location. The procedures we developed for processing multifold seismic data sets proved to be reliable tools for locating and characterizing subvertical sharp heterogeneities.
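The energy-based side of the analysis can be illustrated simply: compute trace energy as a function of offset, remove the smooth decay expected from geometric spreading and attenuation, and flag where the residual concentrates and then suddenly drops, the signature described above at the fault. The decay model and jump detector below are our simplifications, not the full AARW formulation.

```python
import numpy as np

def energy_anomaly(traces, offsets):
    """Locate a sharp lateral discontinuity from trace-energy decay.

    traces  : (n_traces, n_samples) surface-wave records along the line
    offsets : (n_traces,) positive source-receiver offsets (m), increasing
    Fits a smooth reference decay E0 * x**c * exp(k*x) (spreading plus
    attenuation) to the trace energies in log space and returns the
    residual curve plus the offset of its largest trace-to-trace jump,
    taken here as the discontinuity indicator.
    """
    E = np.sum(traces ** 2, axis=1)
    A = np.column_stack([np.ones_like(offsets), np.log(offsets), offsets])
    coef, *_ = np.linalg.lstsq(A, np.log(E), rcond=None)   # log-linear fit
    resid = np.log(E) - A @ coef                            # decay anomaly
    return resid, offsets[np.argmax(np.abs(np.diff(resid)))]
```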


2018 · Author(s): Floriane Provost, Jean-Philippe Malet, Clément Hibert, Agnès Helmstetter, Mathilde Radiguet, ...

Abstract. In the last decade, numerous studies have focused on the analysis of seismic waves generated by Earth-surface processes such as landslides. The installation of seismometers on unstable slopes has revealed a variety of seismic signals suspected to be generated by slope deformation, weathering of the slope material, or fluid circulation. A standard classification of the seismic sources generated by unstable slopes is needed in order to compare the seismic activity of several unstable slopes and to identify possible correlations of the seismic activity rate with triggering factors. The objective of this work is to discuss the typology and source mechanisms of seismic events detected at close distances.


2019 · Vol 38 (11) · pp. 872a1-872a9 · Author(s): Mauricio Araya-Polo, Stuart Farris, Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
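As a framework-level sketch of the augmentation idea: a generator learns to produce velocity-model-like arrays from noise while a discriminator tries to distinguish them from training models; once trained, the generator supplies extra labeled models for the tomography network. The PyTorch choice, the tiny architectures, and the layered toy "real" models are all our assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

Z, NXZ = 64, 16 * 16   # latent size; toy velocity models on a 16x16 grid

G = nn.Sequential(nn.Linear(Z, 128), nn.ReLU(),
                  nn.Linear(128, NXZ), nn.Sigmoid())     # generator
D = nn.Sequential(nn.Linear(NXZ, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                     # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_models(n):
    """Toy stand-in for the labeled set: random layered models with
    velocity increasing with depth, normalized to [0, 1]."""
    depth = torch.linspace(0, 1, 16).repeat_interleave(16)
    return (depth + 0.1 * torch.rand(n, NXZ)).clamp(0, 1)

for step in range(2000):
    real = real_models(32)
    fake = G(torch.randn(32, Z))
    # Discriminator step: label real models 1, generated models 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: fool the discriminator into calling fakes real.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Generated models would then supplement the tomography training set:
augmented = G(torch.randn(500, Z)).detach().reshape(-1, 16, 16)
```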


Geophysics · 1993 · Vol 58 (1) · pp. 91-100 · Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after-migration, before-stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer-stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back-projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
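The back-projection step, spreading a measured residual along the cells a ray traverses as in tomographic reconstruction, can be sketched compactly. Straight raypath segments and an unweighted uniform correction are our simplifications; in practice updates from many rays are accumulated and normalized by cell hit counts.

```python
import numpy as np

def backproject_residual(slowness, cells, dt_residual):
    """Spread one residual-moveout time error along its raypath.

    slowness    : (nz, nx) current slowness model (s/km), updated in place
    cells       : list of ((iz, ix), length_km) pairs the ray traverses
    dt_residual : residual time (s) from the local wavefront analysis;
                  positive means the current model is too fast there.
    A uniform slowness correction ds = dt / L over the path length L
    satisfies sum(l_i * ds) = dt, removing the residual for this ray.
    """
    L = sum(length for _, length in cells)   # total ray length in the model
    ds = dt_residual / L                     # uniform slowness correction
    for (iz, ix), _ in cells:
        slowness[iz, ix] += ds

# Hypothetical two-cell ray: a +0.01 s residual spread over 2 km of path.
s = np.full((10, 10), 1 / 3.0)               # 3 km/s background as slowness
backproject_residual(s, [((5, 3), 1.2), ((5, 4), 0.8)], 0.01)
```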

