Fully automated near-surface analysis by surface-consistent refraction method

Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U39-U49 ◽  
Author(s):  
Daniele Colombo ◽  
Federico Miorelli ◽  
Ernesto Sandoval ◽  
Kevin Erickson

Industry practices for near-surface analysis indicate difficulties in coping with the increased number of channels in seismic acquisition systems, and new approaches are needed to fully exploit the resolution embedded in modern seismic data sets. To achieve this goal, we have developed a novel surface-consistent refraction analysis method for low-relief geology to automatically derive near-surface corrections for seismic data processing. The method uses concepts from surface-consistent analysis applied to refracted arrivals. The key aspects of the method consist of the use of common midpoint (CMP)-offset-azimuth binning, evaluation of mean traveltime and standard deviation for each bin, rejection of anomalous first-break (FB) picks, derivation of CMP-based traveltime-offset functions, conversion to velocity-depth functions, evaluation of long-wavelength statics, and calculation of surface-consistent residual statics through waveform crosscorrelation. Residual time lags are evaluated in multiple CMP-offset-azimuth bins by crosscorrelating a pilot trace with all the other traces in the gather, in which the correlation window is centered at the refracted arrival. The residuals are then used to build a system of linear equations that is simultaneously inverted for surface-consistent shot and receiver time shift corrections plus a possible subsurface residual term. All the steps are completely automated and require a fraction of the time needed for conventional near-surface analysis. The developed methodology was successfully applied to a complex 3D land data set from Central Saudi Arabia, where it was benchmarked against a conventional tomographic work flow. The results indicate that the new surface-consistent refraction statics method enhances seismic imaging, especially in portions of the survey dominated by noise.
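The inversion step described above — modeling each residual lag as the sum of a shot term and a receiver term, then solving the resulting linear system — can be sketched in a few lines of numpy. This is a toy illustration with synthetic lags and invented sizes; the paper's system additionally carries a subsurface residual term, omitted here.

```python
import numpy as np

# Synthetic shot and receiver statics (the "truth" we hope to recover).
rng = np.random.default_rng(0)
n_shots, n_recs = 4, 5
true_s = rng.normal(0.0, 0.01, n_shots)   # shot statics (s)
true_r = rng.normal(0.0, 0.01, n_recs)    # receiver statics (s)

# One residual lag per (shot, receiver) trace; in practice these come from
# crosscorrelating a pilot trace with each trace in a CMP-offset-azimuth bin.
rows = [(i, j) for i in range(n_shots) for j in range(n_recs)]
lags = np.array([true_s[i] + true_r[j] for i, j in rows])

# Linear system lag_k = s_i + r_j  ->  A x = lags, with x = [s; r].
A = np.zeros((len(rows), n_shots + n_recs))
for k, (i, j) in enumerate(rows):
    A[k, i] = 1.0
    A[k, n_shots + j] = 1.0

x, *_ = np.linalg.lstsq(A, lags, rcond=None)
s_est, r_est = x[:n_shots], x[n_shots:]
# A constant can trade between shot and receiver terms (null space of A),
# so only the de-meaned statics are determined.
```

Because of the null space, lstsq returns the minimum-norm solution; the estimated statics match the true ones up to a common constant.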

2020 ◽  
Vol 39 (5) ◽  
pp. 324-331
Author(s):  
Gary Murphy ◽  
Vanessa Brown ◽  
Denes Vigh

As part of a wide-reaching full-waveform inversion (FWI) research program, FWI is applied to an onshore seismic data set collected in the Delaware Basin, west Texas. FWI is routinely applied to typical marine data sets with high signal-to-noise ratio (S/N), relatively good low-frequency content, and reasonably long offsets. Land seismic data sets, in comparison, present significant challenges for FWI due to low S/N, a dearth of low frequencies, and limited offsets. Recent advancements in FWI overcome the limitations due to poor S/N and low frequencies, making land FWI feasible for updating the shallow velocities. The chosen area has contrasting and variable near-surface conditions, providing an excellent test data set on which to demonstrate the workflow and its challenges. An acoustic FWI workflow is used to update the near-surface velocity model in order to improve the deeper image and simultaneously help highlight potential shallow drilling hazards.


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M1-M13 ◽  
Author(s):  
Yichuan Wang ◽  
Igor B. Morozov

For seismic monitoring of injected fluids during enhanced oil recovery or geologic CO2 sequestration, it is useful to measure time-lapse (TL) variations of acoustic impedance (AI). AI gives direct connections to the mechanical and fluid-related properties of the reservoir or CO2 storage site; however, evaluation of its subtle TL variations is complicated by the low-frequency and scaling uncertainties of this attribute. We have developed three enhancements of TL AI analysis to resolve these issues. First, following waveform calibration (cross-equalization) of the monitor seismic data sets to the baseline one, the reflectivity difference was evaluated from the attributes measured during the calibration. Second, a robust approach to AI inversion was applied to the baseline data set, based on calibration of the records by using the well-log data and spatially variant stacking and interval velocities derived during seismic data processing. This inversion method is straightforward and does not require subjective selections of parameterization and regularization schemes. Unlike joint or statistical inverse approaches, this method does not require prior models and produces accurate fitting of the observed reflectivity. Third, the TL AI difference is obtained directly from the baseline AI and reflectivity difference, without the uncertainty-prone subtraction of AI volumes from different seismic vintages. The above approaches are applied to TL data sets from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High-quality baseline and TL AI-difference volumes are obtained. TL variations within the reservoir zone are observed in the calibration time-shift, reflectivity-difference, and AI-difference images, which are interpreted as being related to the CO2 injection.
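The third step — forming the TL AI difference directly from the baseline AI and the reflectivity difference — can be illustrated with the standard small-contrast relation r ≈ ½ Δ ln AI. The numbers below are invented for the sketch and do not reproduce the paper's calibration:

```python
import numpy as np

# Illustrative layered baseline impedance model (made-up values).
ai_base = np.array([3000.0, 3200.0, 3400.0, 3600.0])
# Reflectivity from AI via r = 0.5 * d(ln AI), shown for reference.
r_base = 0.5 * np.diff(np.log(ai_base))

# Suppose calibration yields a small reflectivity difference at interface 2.
dr = np.array([0.0, 0.01, 0.0])

# Monitor AI reconstructed from baseline AI plus the reflectivity
# difference (ln AI picks up 2*cumsum(dr) below the perturbed interface),
# avoiding the subtraction of two independently inverted AI volumes.
ai_mon = ai_base.copy()
ai_mon[1:] *= np.exp(2.0 * np.cumsum(dr))
d_ai = ai_mon - ai_base
```

Only layers below the perturbed interface acquire a nonzero AI difference, mirroring how a localized reflectivity change maps into the impedance column.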


Geophysics ◽  
2014 ◽  
Vol 79 (6) ◽  
pp. B243-B252 ◽  
Author(s):  
Peter Bergmann ◽  
Artem Kashubin ◽  
Monika Ivandic ◽  
Stefan Lüth ◽  
Christopher Juhlin

A method for static correction of time-lapse differences in reflection arrival times of time-lapse prestack seismic data is presented. These arrival-time differences are typically caused by changes in the near-surface velocities between the acquisitions and have a detrimental impact on time-lapse seismic imaging. Trace-to-trace time shifts of the data sets from different vintages are determined by crosscorrelations. The time shifts are decomposed in a surface-consistent manner, which yields static corrections that tie the repeat data to the baseline data. Hence, this approach implies that new refraction static corrections for the repeat data sets are unnecessary. The approach is demonstrated on a 4D seismic data set from the Ketzin CO2 pilot storage site, Germany, and is compared with the result of an initial processing that was based on separate refraction static corrections. It is shown that the time-lapse difference static correction approach reduces 4D noise more effectively than separate refraction static corrections and is significantly less labor intensive.
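The trace-to-trace time shifts that feed the surface-consistent decomposition are picked from crosscorrelations. A minimal numpy sketch with synthetic traces (wavelet, shift, and sampling are invented) looks like this:

```python
import numpy as np

# Two synthetic traces: the "repeat" is the "baseline" delayed by 3 samples,
# mimicking a near-surface velocity change between vintages.
dt = 0.002                                   # sample interval (s)
n = 256
t = np.arange(n) * dt
base = np.exp(-((t - 0.25) / 0.02) ** 2)     # toy zero-phase pulse
shift = 3
rept = np.roll(base, shift)

# Full crosscorrelation; the lag of its maximum is the time shift that
# ties the repeat trace back to the baseline.
xc = np.correlate(rept, base, mode="full")
lag_samples = np.argmax(xc) - (n - 1)
time_shift = lag_samples * dt
```

In the actual workflow these per-trace shifts become the right-hand side of a surface-consistent system solved for source and receiver static terms.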


2021 ◽  
Author(s):  
Daniel Blank ◽  
Annette Eicker ◽  
Laura Jensen ◽  
Andreas Güntner

Information on water storage changes in the soil can be obtained on a global scale from different types of satellite observations. While active or passive microwave remote sensing is limited to investigating the upper few centimeters of the soil, satellite gravimetry is sensitive to variations in the full column of terrestrial water storage (TWS) but cannot distinguish between storage variations occurring at different soil depths. Jointly analyzing both data types promises interesting insights into the underlying hydrological dynamics and may enable a better process understanding of water storage change in the subsurface.

In this study, we aim to investigate the global relationship of (1) several satellite soil moisture (SM) products and (2) non-standard daily TWS data from the GRACE and GRACE-FO satellite gravimetry missions on different time scales. We decompose the data sets into different temporal frequencies, from seasonal to sub-monthly signals, and carry out the comparison with respect to spatial patterns and temporal variability. Level-3 (surface SM, up to 5 cm depth) and Level-4 (root-zone SM, up to 1 m depth) data sets of the SMOS and SMAP missions, as well as the ESA CCI data set, are used in this investigation.

Since a direct comparison of the absolute values is not possible due to the different integration depths of the two data sets (SM and TWS), we analyze their relationship using Pearson's pairwise correlation coefficient. Furthermore, a time-shift analysis is carried out by means of cross-correlation to identify time lags between the SM and TWS data sets that indicate differences in the temporal dynamics of SM storage change in varying depth layers.
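The time-shift analysis described above amounts to finding the lag that maximizes the Pearson correlation between the two series. A toy sketch with synthetic daily series (real SM/TWS grids would be processed per pixel after the frequency decomposition):

```python
import numpy as np

# Toy daily anomalies in which TWS lags SM by 10 days, mimicking deeper
# storage responding later than surface soil moisture. Purely synthetic.
days = np.arange(365)
sm = np.sin(2 * np.pi * days / 365)   # soil-moisture anomaly
lag_true = 10
tws = np.roll(sm, lag_true)           # delayed copy stands in for TWS

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag), lag >= 0."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

lags = np.arange(31)
corrs = np.array([lagged_corr(sm, tws, l) for l in lags])
best_lag = int(lags[np.argmax(corrs)])   # recovered time shift (days)
```

The lag at the correlation maximum is the estimated delay between SM and TWS dynamics; negative lags would be scanned the same way with the roles of the series swapped.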


2021 ◽  
Vol 13 (3) ◽  
pp. 530
Author(s):  
Junjun Yin ◽  
Jian Yang

Pseudo quad-polarimetric (quad-pol) image reconstruction from hybrid dual-pol (or compact polarimetric (CP)) synthetic aperture radar (SAR) imagery is an important class of techniques for radar polarimetric applications. Three key aspects are addressed in the literature for reconstruction methods: the scattering symmetry assumption, the reconstruction model, and the approach for solving the unknowns. Because CP measurements depend on the CP mode configuration, different reconstruction procedures have been designed for different transmit waves, meaning the procedures were not unified. In this study, we propose a unified reconstruction framework for the general CP mode, which is applicable to a mode with an arbitrary transmitted elliptical wave. The unified reconstruction procedure is based on the formalized CP descriptors. The general CP symmetric-scattering-model-based three-component decomposition method is also employed to fit the reconstruction model parameter. Finally, a least squares (LS) estimation method, originally proposed for linear π/4 CP data, is extended to the arbitrary CP mode to estimate the solution of the system of non-linear equations. Validation is carried out on polarimetric data sets from both RADARSAT-2 (C-band) and ALOS-2/PALSAR (L-band) to compare the performance of the reconstruction models, methods, and CP modes.
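The final estimation step — solving a small system of non-linear equations in the least-squares sense — can be illustrated generically. The forward model below is a toy stand-in (not the paper's CP covariance relations), solved with a hand-rolled Gauss-Newton iteration:

```python
import numpy as np

# Toy non-linear system f(x) = d with a unique solution; it stands in for
# the non-linear relations between CP covariance elements and unknowns.
def f(x):
    return np.array([x[0] ** 2 + x[1], x[0] * x[1], np.exp(x[1])])

def jac(x):
    # Analytic Jacobian of f.
    return np.array([[2.0 * x[0], 1.0],
                     [x[1], x[0]],
                     [0.0, np.exp(x[1])]])

x_true = np.array([0.8, 0.3])
d = f(x_true)                     # synthetic "observations"

# Gauss-Newton: repeatedly solve the normal equations of the linearization,
# minimizing the sum of squared residuals of the overdetermined system.
x = np.array([1.0, 0.0])          # initial guess
for _ in range(20):
    r = f(x) - d
    J = jac(x)
    x = x - np.linalg.solve(J.T @ J, J.T @ r)
```

For a zero-residual problem like this synthetic one, the iteration converges quadratically to the true parameters; noisy data would leave a non-zero LS residual.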


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
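The algebra of damped least squares and the cheaper diagonal-Hessian shortcut can be sketched with a toy linear operator. The random matrix below only stands in for the extrapolation operator (the paper's operator is a phase-shift extrapolator, and it keeps a limited number of diagonals rather than just one):

```python
import numpy as np

# Toy operator G, model m_true, and consistent data d (all invented).
rng = np.random.default_rng(1)
n_out, n_in = 60, 40
G = rng.normal(size=(n_out, n_in))
m_true = rng.normal(size=n_in)
d = G @ m_true

eps = 1e-3                         # damping weight
H = G.T @ G                        # full Hessian of the LS objective

# Damped least squares: m = (G^T G + eps I)^{-1} G^T d.
m_full = np.linalg.solve(H + eps * np.eye(n_in), G.T @ d)

# Cheap variant: keep only the Hessian's main diagonal, turning the solve
# into an elementwise division; accuracy is traded for cost.
m_diag = (G.T @ d) / (np.diag(H) + eps)
```

With light damping and consistent data, the full solve recovers the model almost exactly, while the diagonal shortcut gives a rougher but far cheaper estimate.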


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI, with the feedback model and NSRI theory, to the context of nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) extended, but multiples are also considered. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider Q-filtering effects explicitly. However, there are benefits to NSRI considering multiples: the periodicity and amplitude of the multiples imply the position of the reflectivities and the amplitude of the wavelet, and multiples assist in overcoming the scaling and shifting ambiguities of conventional problems in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
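The nonstationary forward model behind this kind of inversion can be sketched as each reflector's wavelet passing through an earth Q filter scaled by its traveltime. The sketch below uses a Ricker wavelet and the amplitude-only decay exp(-pi f t/Q), ignoring phase dispersion for brevity; all parameters are invented:

```python
import numpy as np

dt, n = 0.002, 512
t = np.arange(n) * dt
freqs = np.fft.rfftfreq(n, dt)

def ricker(fp, t0):
    # Zero-phase Ricker wavelet of peak frequency fp centered at t0.
    a = (np.pi * fp * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def q_filtered(w, t0, Q):
    # Apply the amplitude part of the earth Q filter for traveltime t0.
    W = np.fft.rfft(w)
    return np.fft.irfft(W * np.exp(-np.pi * freqs * t0 / Q), n)

refl = {0.2: 1.0, 0.6: -0.8}   # traveltime (s) -> reflection coefficient
Q = 60.0
trace = sum(r * q_filtered(ricker(30.0, t0), t0, Q)
            for t0, r in refl.items())
```

Because the filter strengthens with traveltime, the deeper event loses proportionally more amplitude and bandwidth than the shallow one, which is exactly the nonstationarity the inversion must undo.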


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. M41-M48 ◽  
Author(s):  
Hongwei Liu ◽  
Mustafa Naser Al-Ali

The ideal approach for continuous reservoir monitoring allows generation of fast and accurate images to cope with the massive data sets acquired for such a task. Conventionally, rigorous depth-oriented velocity-estimation methods are performed to produce sufficiently accurate velocity models. Unlike the traditional way, the target-oriented imaging technology based on the common-focus point (CFP) theory can be an alternative for continuous reservoir monitoring. The solution is based on a robust data-driven iterative operator updating strategy without deriving a detailed velocity model. The same focusing operator is applied to successive 3D seismic data sets for the first time to generate efficient and accurate 4D target-oriented seismic stacked images from time-lapse field seismic data sets acquired in a CO2 injection project in Saudi Arabia. Using the focusing operator, target-oriented prestack angle domain common-image gathers (ADCIGs) could be derived to perform amplitude-versus-angle analysis. To preserve the amplitude information in the ADCIGs, an amplitude-balancing factor is applied by embedding a synthetic data set using the real acquisition geometry to remove the geometry imprint artifact. Applying the CFP-based target-oriented imaging to time-lapse data sets revealed changes at the reservoir level in the poststack and prestack time-lapse signals, which is consistent with the CO2 injection history and rock physics.


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a work flow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. Code in this chapter is more complicated, and it can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1); and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided in 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive.
Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
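The preprocessing described above — unit conversion and interpolation of the 6-hourly records — can be sketched with numpy. The record values below are invented, and real HURDAT lines would need to be parsed first:

```python
import numpy as np

KT_TO_MS = 0.5144                              # knots to m/s (book's factor)

# Hypothetical 6-hourly best-track records for one storm segment.
hours = np.array([0.0, 6.0, 12.0, 18.0])       # record times (h)
wind_kt = np.array([45.0, 55.0, 65.0, 60.0])   # intensities (kt)
lon = np.array([-45.0, -46.2, -47.5, -49.0])   # center longitude (deg)
lat = np.array([14.0, 14.6, 15.3, 16.1])       # center latitude (deg)

# Convert intensities to m/s.
wind_ms = wind_kt * KT_TO_MS

# Interpolate the track to hourly positions for regional analyses.
hourly = np.arange(0.0, 18.0 + 1e-9, 1.0)
lon_h = np.interp(hourly, hours, lon)          # linear interpolation;
lat_h = np.interp(hourly, hours, lat)          # splines would be smoother
```

Linear interpolation keeps the sketch short; the book's work flow uses smoother interpolants before adding derived attributes.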

