Estimating frequency-dependent attenuation quality factor values from prestack surface seismic data

Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. O11-O22 ◽  
Author(s):  
James Beckwith ◽  
Roger Clark ◽  
Linda Hodgson

The intrinsic seismic quality factor Q is known from poroelastic rock-physics theory to be frequency dependent, even within the typical bandwidths of individual surface- and borehole-based surveys, for which measurement methods usually deliver a frequency-independent Q. Measuring frequency-dependent Q instead therefore offers better characterization of seismic properties and, moreover, a potential step toward estimating permeability directly from seismic data. We have introduced a method to measure frequency-dependent Q from pairs of reflections in prestack [Formula: see text]-[Formula: see text] domain surface seismic data, a data type that, unlike a vertical seismic profile, offers useful areal coverage. Although, in principle, any analytic form with a manageable number of parameters could be prescribed, the frequency dependence of Q is modeled as a power law, [Formula: see text]. Inversion is done with a simple grid search over the coefficient ([Formula: see text]) and exponent [Formula: see text], seeking a minimum [Formula: see text]-norm. Using a numerical experiment and a synthetic data set, we have found the method to be robust and accurate down to a signal-to-noise ratio of approximately 0.65. Frequency-dependent Q is then estimated for some 955 [Formula: see text] superbins of a 3D prestack ocean-bottom-cable data set over the Kinnoull field, central North Sea. All combinations of eight prominent reflections between Top Miocene and Base Cretaceous were treated together to give some 21,000 frequency-dependent and (for comparison) constant-Q results. The median coefficient ([Formula: see text]) and exponent [Formula: see text] were 0.0074 and 0.06, respectively, with sharply peaked distributions (excess kurtosis [Formula: see text]). Outlying, strongly frequency-dependent results, given by [Formula: see text], coincide with low-frequency “shadows” under amplitude anomalies, which adversely affect the spectra of the reflections. The inferred frequency-dependent Q at 32.5 Hz, the center of the available bandwidth, is not statistically different from the frequency-independent Q of 181 (with a standard error of one from the distribution) derived from the same data. Hence, for this data set, a constant-Q assumption would in fact be adequate. Our method, however, is able to deliver stable estimates of frequency-dependent Q.
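
As a rough illustration of the kind of inversion described above, the sketch below performs a grid search for a power-law attenuation model, assuming the parameterization Q(f) = q0 * f**alpha and a spectral-ratio misfit with an L1-style norm; the function name, parameter ranges, and the exact norm are illustrative choices and not the published implementation.

```python
import numpy as np

def grid_search_power_law_q(freqs, log_spec_ratio, dt_sec,
                            q0_range=(20.0, 400.0), alpha_range=(-0.5, 0.5), n=201):
    """Fit Q(f) = q0 * f**alpha to a measured log-spectral ratio by grid search.

    Assumes the attenuation model ln[S2(f)/S1(f)] = -pi * f * dt / Q(f) between
    two reflections separated by two-way traveltime dt_sec, and an L1-style
    misfit; the published parameterization and norm may differ.
    """
    best = (np.inf, None, None)
    for q0 in np.linspace(*q0_range, n):
        for alpha in np.linspace(*alpha_range, n):
            q_of_f = q0 * freqs ** alpha
            predicted = -np.pi * freqs * dt_sec / q_of_f
            misfit = np.sum(np.abs(predicted - log_spec_ratio))
            if misfit < best[0]:
                best = (misfit, q0, alpha)
    return best  # (misfit, coefficient q0, exponent alpha)

# Synthetic check: recover a mildly frequency-dependent Q from noisy data
freqs = np.linspace(10.0, 55.0, 46)
true_q = 150.0 * freqs ** 0.06
observed = -np.pi * freqs * 0.4 / true_q + 0.01 * np.random.randn(freqs.size)
print(grid_search_power_law_q(freqs, observed, dt_sec=0.4))
```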

Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. D27-D36 ◽  
Author(s):  
Andrey Bakulin ◽  
Marta Woodward ◽  
Dave Nichols ◽  
Konstantin Osypov ◽  
Olga Zdraveva

Tilted transverse isotropy (TTI) is increasingly recognized as a more geologically plausible description of anisotropy in sedimentary formations than vertical transverse isotropy (VTI). Although model-building approaches for VTI media are well understood, similar approaches for TTI media are in their infancy, even when the symmetry-axis direction is assumed known. We describe a tomographic approach that builds localized anisotropic models by jointly inverting surface-seismic and well data. We present a synthetic data example of anisotropic tomography applied to a layered TTI model with a symmetry-axis tilt of 45 degrees. We demonstrate three scenarios for constraining the solution. In the first scenario, velocity along the symmetry axis is known and tomography inverts for Thomsen’s [Formula: see text] and [Formula: see text] parameters. In the second scenario, tomography inverts for [Formula: see text], [Formula: see text], and velocity, using surface-seismic data and vertical check-shot traveltimes. In contrast to the VTI case, both these inversions are nonunique. To combat nonuniqueness, in the third scenario, we supplement check-shot and seismic data with the [Formula: see text] profile from an offset well. This allows recovery of the correct profiles for velocity along the symmetry axis and [Formula: see text]. We conclude that TTI is more ambiguous than VTI for model building. Additional well data or rock-physics assumptions may be required to constrain the tomography and arrive at geologically plausible TTI models. Furthermore, we demonstrate that VTI models with atypical Thomsen parameters can also fit the same joint seismic and check-shot data set. In this case, although imaging with VTI models can focus the TTI data and match vertical event depths, it leads to substantial lateral mispositioning of the reflections.
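
For readers unfamiliar with the parameters involved, the following sketch evaluates Thomsen's weak-anisotropy P-wave phase velocity in a tilted medium by measuring the phase angle from the rotated symmetry axis. It is a forward illustration only, with illustrative names and values, and does not reproduce the tomographic inversion.

```python
import numpy as np

def vp_tti_weak(theta_deg, v0, epsilon, delta, tilt_deg=45.0):
    """P-wave phase velocity in a weakly anisotropic TTI medium (2D sketch).

    Thomsen's weak-anisotropy approximation,
        vp(t) ~ v0 * (1 + delta*sin^2(t)*cos^2(t) + epsilon*sin^4(t)),
    evaluated with the phase angle t measured from the tilted symmetry axis.
    """
    t = np.radians(theta_deg - tilt_deg)   # angle from the symmetry axis
    s2, c2 = np.sin(t) ** 2, np.cos(t) ** 2
    return v0 * (1.0 + delta * s2 * c2 + epsilon * s2 * s2)

# Same propagation direction seen by a VTI medium and by a 45-degree TTI medium
print(vp_tti_weak(30.0, 3000.0, 0.20, 0.10, tilt_deg=0.0))   # VTI
print(vp_tti_weak(30.0, 3000.0, 0.20, 0.10, tilt_deg=45.0))  # TTI
```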


Geophysics ◽  
2007 ◽  
Vol 72 (4) ◽  
pp. V79-V86 ◽  
Author(s):  
Kurang Mehta ◽  
Andrey Bakulin ◽  
Jonathan Sheiman ◽  
Rodney Calvert ◽  
Roel Snieder

The virtual source method has recently been proposed to image and monitor below complex and time-varying overburden. The method requires surface shooting recorded at downhole receivers placed below the distorting or changing part of the overburden. Redatuming with the measured Green’s function allows the reconstruction of a complete downhole survey as if the sources were also buried at the receiver locations. Some challenges still need to be addressed in the virtual source method, such as the limited acquisition aperture and energy coming from the overburden. We demonstrate that up-down wavefield separation can substantially improve the quality of virtual source data. First, it allows us to eliminate artifacts associated with the limited acquisition aperture typically used in practice. Second, it allows us to reconstruct a new optimized response free of downgoing reflections and multiples from the overburden. These improvements are illustrated on a synthetic data set for a complex layered model based on the Fahud field in Oman, and on ocean-bottom seismic data acquired over the Mars field in the deepwater Gulf of Mexico.
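
The redatuming step can be pictured as a crosscorrelation of separated wavefields summed over surface shots. The sketch below, with illustrative array shapes and names, shows only that step and assumes up/down separation has already been applied; it is not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_source_trace(down_at_a, up_at_b):
    """Virtual-source response between receivers A and B.

    down_at_a, up_at_b: arrays of shape (n_shots, n_samples) holding the
    downgoing wavefield at receiver A and the upgoing wavefield at receiver B
    after up/down separation. The response is the sum over surface shots of
    their crosscorrelation (done as convolution with the time-reversed
    downgoing wavefield).
    """
    n_shots, n_samples = down_at_a.shape
    trace = np.zeros(2 * n_samples - 1)
    for s in range(n_shots):
        trace += fftconvolve(up_at_b[s], down_at_a[s][::-1], mode="full")
    return trace  # zero lag sits at index n_samples - 1
```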


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
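
The weighted, damped least-squares step amounts to solving the normal equations for the regularized wavefield. A minimal sketch follows, assuming a generic extrapolation operator A and diagonal data weights; the dense Hessian is formed explicitly here, whereas the paper's efficiency comes from approximating it with a few diagonals.

```python
import numpy as np

def regularize_datum(A, d, weights, damping=1e-3):
    """Weighted, damped least-squares estimate of a regularized wavefield.

    Solves x = (A^H W A + mu*I)^-1 A^H W d, where A maps the desired regular
    wavefield x to the irregularly sampled data d, W = diag(weights), and mu
    is the damping. The full Hessian A^H W A is formed explicitly here; the
    efficient variant keeps only a few of its diagonals.
    """
    W = np.diag(weights)
    hessian = A.conj().T @ W @ A
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(hessian + damping * np.eye(hessian.shape[0]), rhs)
```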


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To utilize information about pressure- and saturation-related changes in reservoir modeling and simulation, the uncertainty in the estimations must also be quantified. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF) that provides estimates of the uncertainties as well as of the properties themselves. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model, and PP reflection-coefficient differences are used to establish a likelihood model linking the reservoir variables and the time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each variable. In addition, possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
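
In the simplest linear-Gaussian setting, the Bayesian solution has a closed form. The sketch below computes that posterior for a generic linear forward operator; the variable names are illustrative, and the paper's actual model includes nonlinearities and spatial coupling that this sketch omits.

```python
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Cd):
    """Posterior mean and covariance for a linear-Gaussian inverse problem.

    Prior m ~ N(m0, Cm); likelihood d = G m + e with e ~ N(0, Cd). Here m
    could collect pressure and saturation changes and G a linearized
    reflection-coefficient-difference operator (illustrative choice).
    """
    Cm_inv = np.linalg.inv(Cm)
    Cd_inv = np.linalg.inv(Cd)
    post_cov = np.linalg.inv(Cm_inv + G.T @ Cd_inv @ G)
    post_mean = post_cov @ (Cm_inv @ m0 + G.T @ Cd_inv @ d)
    return post_mean, post_cov
```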


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth’s Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q-compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI to nonstationary seismic data with surface-related multiples by combining the feedback model with NSRI theory. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) extended, but multiples are also taken into account. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider Q-filtering effects explicitly. However, there are benefits to NSRI considering multiples. The periodicity and amplitude of the multiples imply the position of the reflectivities and the amplitude of the wavelet. Multiples thus assist in overcoming the scaling and shifting ambiguities of conventional formulations in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
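
To make the forward problem concrete, the sketch below builds a nonstationary convolution kernel in which each reflectivity sample is convolved with a wavelet attenuated by a constant-Q amplitude filter for its traveltime. Dispersion and the surface-multiple feedback loop of the actual method are omitted, and all names are illustrative.

```python
import numpy as np

def nonstationary_kernel(wavelet, nt, dt, q):
    """Columns of a nonstationary convolution matrix.

    Column j holds the source wavelet attenuated by a constant-Q amplitude
    filter exp(-pi*f*t_j/q) and delayed to time t_j = j*dt (cyclic shift,
    adequate for a short wavelet). A synthetic trace is then G @ reflectivity.
    Dispersion and multiples are omitted in this sketch.
    """
    nw = wavelet.size
    nfull = nt + nw
    freqs = np.fft.rfftfreq(nfull, d=dt)
    W = np.fft.rfft(wavelet, n=nfull)
    G = np.zeros((nfull, nt))
    for j in range(nt):
        attenuated = W * np.exp(-np.pi * freqs * (j * dt) / q)
        G[:, j] = np.roll(np.fft.irfft(attenuated, n=nfull), j)
    return G
```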


Author(s):  
A. Ogbamikhumi ◽  
T. Tralagba ◽  
E. E. Osagiede

Field ‘K’ is a mature field in the coastal swamp of the onshore Niger Delta that has been producing since 1960. Because it is a large producing field with some potential for further sustainable production, field monitoring is important for identifying areas of unproduced hydrocarbon. This can be achieved by comparing production data with the corresponding changes in acoustic impedance observed in maps generated from the base survey (initial 3D seismic) and the monitor survey (4D seismic) across the field, which enables the 4D seismic data set to be used for mapping reservoir details such as the advancing water front and unswept zones. The availability of good-quality onshore time-lapse seismic data for Field ‘K’, acquired in 1987 and 2002, provided the opportunity to evaluate the effect of changes in reservoir fluid saturations on time-lapse amplitudes. Rock physics modelling and fluid substitution studies on well logs were carried out, and the acoustic impedance change in the reservoir was estimated to be in the range of 0.25% to about 8%. Changes in reservoir fluid saturations were confirmed with time-lapse amplitudes within the crest area of the reservoir structure, where reservoir porosity is 0.25. In this paper, we demonstrate the use of repeat seismic to delineate swept zones and areas affected by water override in a producing onshore reservoir.
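
The fluid-substitution step mentioned above is conventionally done with Gassmann's equation; a minimal sketch follows, with illustrative input values rather than the field's actual log data.

```python
import numpy as np

def gassmann_ksat(k_dry, k_mineral, k_fluid, porosity):
    """Saturated-rock bulk modulus from Gassmann's equation (moduli in GPa)."""
    num = (1.0 - k_dry / k_mineral) ** 2
    den = (porosity / k_fluid
           + (1.0 - porosity) / k_mineral
           - k_dry / k_mineral ** 2)
    return k_dry + num / den

# Brine- vs. gas-saturated bulk modulus for a 25%-porosity sand (illustrative values)
print(gassmann_ksat(8.0, 37.0, 2.8, 0.25))   # brine, K_fluid ~ 2.8 GPa
print(gassmann_ksat(8.0, 37.0, 0.04, 0.25))  # gas,   K_fluid ~ 0.04 GPa
```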


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. M41-M48 ◽  
Author(s):  
Hongwei Liu ◽  
Mustafa Naser Al-Ali

The ideal approach for continuous reservoir monitoring allows generation of fast and accurate images to cope with the massive data sets acquired for such a task. Conventionally, rigorous depth-oriented velocity-estimation methods are performed to produce sufficiently accurate velocity models. Unlike the traditional way, target-oriented imaging technology based on the common-focus-point (CFP) theory can be an alternative for continuous reservoir monitoring. The solution is based on a robust, data-driven, iterative operator-updating strategy that does not require a detailed velocity model. The same focusing operator is applied to successive 3D seismic data sets for the first time to generate efficient and accurate 4D target-oriented seismic stacked images from time-lapse field seismic data sets acquired in a CO2 injection project in Saudi Arabia. Using the focusing operator, target-oriented prestack angle-domain common-image gathers (ADCIGs) can be derived to perform amplitude-versus-angle analysis. To preserve the amplitude information in the ADCIGs, an amplitude-balancing factor is applied by embedding a synthetic data set using the real acquisition geometry to remove the geometry imprint artifact. Applying the CFP-based target-oriented imaging to the time-lapse data sets revealed changes at the reservoir level in the poststack and prestack time-lapse signals that are consistent with the CO2 injection history and rock physics.
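
Conceptually, applying a focusing operator amounts to shifting each trace of a gather by the operator traveltime toward the focus point and stacking. The sketch below shows only that schematic step, with illustrative names; the data-driven operator updating, angle-gather generation, and amplitude balancing of the actual workflow are not represented.

```python
import numpy as np

def focus_with_operator(gather, operator_times, dt):
    """Shift each trace by its focusing-operator time and stack.

    gather: (n_traces, n_samples); operator_times: traveltime in seconds from
    the focus point to each trace position. Each trace is advanced by its
    operator time (removing that traveltime) and the results are summed into
    a single focused trace.
    """
    n_traces, n_samples = gather.shape
    image = np.zeros(n_samples)
    for i in range(n_traces):
        shift = int(round(operator_times[i] / dt))
        shifted = np.zeros(n_samples)
        if 0 <= shift < n_samples:
            shifted[:n_samples - shift] = gather[i, shift:]
        image += shifted
    return image
```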


2021 ◽  
Vol 40 (10) ◽  
pp. 751-758
Author(s):  
Fabien Allo ◽  
Jean-Philippe Coulon ◽  
Jean-Luc Formento ◽  
Romain Reboul ◽  
Laure Capar ◽  
...  

Deep neural networks (DNNs) have the potential to streamline the integration of seismic data for reservoir characterization by providing estimates of rock properties that are directly interpretable by geologists and reservoir engineers, rather than the elastic attributes delivered by most standard seismic inversion methods. However, they have yet to be applied widely in the energy industry because training DNNs requires a large amount of labeled data that is rarely available. Training-set augmentation, routinely used in other scientific fields such as image recognition, can address this issue and open the door to DNNs for geophysical applications. Although this approach has been explored in the past, creating realistic synthetic well and seismic data representative of the variable geology of a reservoir remains challenging. Recently introduced theory-guided techniques can help achieve this goal. A key step in these hybrid techniques is the use of theoretical rock-physics models to derive elastic pseudologs from variations of existing petrophysical logs. Rock-physics theories are already commonly relied on to generalize and extrapolate the relationship between rock and elastic properties; therefore, they are a useful tool for generating a large catalog of alternative pseudologs representing realistic geologic variations away from the existing well locations. While not directly driven by rock physics, neural networks trained on such synthetic catalogs extract the intrinsic rock-physics relationships and are therefore capable of directly estimating rock properties from seismic amplitudes. Neural networks trained on purely synthetic data are applied to a set of 2D poststack seismic lines to characterize a geothermal reservoir located in the Dogger Formation northeast of Paris, France. The goal of the study is to determine the extent of porous and permeable layers encountered at existing geothermal wells and ultimately to guide the location and design of future geothermal wells in the area.
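
A toy version of the augmentation-and-training loop might look like the sketch below: perturbed petrophysical values are mapped to a seismic attribute through a stand-in rock-physics relation, and a small network is trained only on that synthetic catalog. Every relation, value, and hyperparameter here is illustrative and unrelated to the published workflow.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic catalog: perturbed porosity "pseudologs" mapped to an acoustic
# impedance attribute through a toy linear rock-physics relation plus noise.
porosity = rng.uniform(0.05, 0.30, size=(5000, 1))
impedance = 9.0 - 15.0 * porosity + 0.2 * rng.standard_normal((5000, 1))

# Train only on the synthetic catalog, then predict from an "observed" attribute.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(impedance, porosity.ravel())
print(model.predict([[6.0]]))  # estimated porosity for an impedance of 6.0
```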


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. MR187-MR198 ◽  
Author(s):  
Yi Shen ◽  
Jack Dvorkin ◽  
Yunyue Li

Our goal is to accurately estimate attenuation from seismic data using model regularization in the seismic inversion workflow. One way to achieve this goal is to find an analytical relation linking Q to VP. We derive an approximate closed-form solution relating Q to VP using rock-physics modeling. This relation is tested on well data from a clean clastic gas reservoir, for which the Q values are computed from the log data. Next, we create a 2D synthetic gas-reservoir section populated with VP and Q and generate the corresponding synthetic seismograms. The goal is then to invert this synthetic seismic section for Q. If we use standard seismic inversion based solely on the seismic data, the inverted attenuation model has low resolution, incorrect positioning, and distortions. However, by adding our relation between velocity and attenuation, we obtain an attenuation model very close to the original section. The method is also tested on a 2D field seismic data set from the Gulf of Mexico. The resulting Q model matches the geologic shape of an absorption body interpreted from the seismic section. Using this Q model in seismic migration, we make the seismic events below the high-absorption layer clearly visible, with improved frequency content and coherency of the events.
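
One simple way to inject a velocity-attenuation relation into an inversion is as a quadratic model-regularization term. The sketch below solves the corresponding normal equations for a generic linearized operator G and a prior Q model evaluated from an assumed Q(VP) relation; it is a schematic stand-in, not the regularization used in the paper.

```python
import numpy as np

def regularized_q_inversion(G, d, q_prior, lam):
    """Solve min ||G q - d||^2 + lam * ||q - q_prior||^2 via normal equations.

    G: linearized operator mapping an attenuation model q to seismic-derived
    attenuation observations d; q_prior: Q predicted from velocity through an
    assumed Q(VP) relation; lam: regularization weight.
    """
    n = G.shape[1]
    lhs = G.T @ G + lam * np.eye(n)
    rhs = G.T @ d + lam * q_prior
    return np.linalg.solve(lhs, rhs)
```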


Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. MR13-MR23 ◽  
Author(s):  
Stefano Picotti ◽  
José M. Carcione ◽  
Jing Ba

We build rock-physics templates (RPTs) for reservoir rocks based on seismic quality factors. In these templates, the effects of partial saturation, porosity, and permeability on the seismic properties are described by generalizing the Johnson mesoscopic-loss model to a distribution of gas-patch sizes in brine- and oil-saturated rocks. This model addresses the wave-induced fluid-flow attenuation mechanism, by which part of the energy of the fast P-wave is converted into the slow P (Biot) diffusive mode. We consider patch sizes whose probability density function is defined by a normal (Gaussian) distribution. The complex bulk modulus of the composite medium is obtained with the Voigt-Reuss-Hill average, and we show that the results are close to those obtained with the Hashin-Shtrikman average. The templates represent the seismic dissipation factor (the reciprocal of the seismic quality factor) as a function of the P-wave velocity, acoustic impedance, and VP/VS (P- to S-wave velocity ratio), for isolines of saturation, porosity, and permeability. They differentiate between oil and brine on the basis of the quality factor, with the gas-brine case showing more dissipation than the gas-oil case. We obtain sensitivity maps of the seismic properties to gas saturation and porosity for brine and oil. Unlike the gas-brine case, which shows a higher sensitivity of attenuation to gas saturation, the gas-oil case shows a higher sensitivity to porosity, and higher acoustic impedance and VP/VS sensitivities versus saturation. The RPTs can be used for a robust sensitivity analysis, which provides insights on seismic attributes for hydrocarbon detection and reservoir delineation. The templates are also relevant for studies related to CO2-storage monitoring.
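
The mixing step mentioned above, the Voigt-Reuss-Hill average, is straightforward to compute for complex moduli. The sketch below shows it with illustrative values, together with the dissipation factor 1/Q taken as the ratio of the imaginary to the real part of the composite modulus; the patch moduli themselves would come from the mesoscopic-loss model, which is not reproduced here.

```python
import numpy as np

def voigt_reuss_hill(moduli, fractions):
    """Voigt-Reuss-Hill average of constituent (possibly complex) moduli.

    Voigt = arithmetic mean, Reuss = harmonic mean, Hill = their average;
    fractions are volume fractions summing to one.
    """
    moduli = np.asarray(moduli, dtype=complex)
    fractions = np.asarray(fractions, dtype=float)
    voigt = np.sum(fractions * moduli)
    reuss = 1.0 / np.sum(fractions / moduli)
    return 0.5 * (voigt + reuss)

# Two patch types with different complex bulk moduli (GPa, illustrative)
m = voigt_reuss_hill([9.0 + 0.3j, 14.0 + 0.1j], [0.4, 0.6])
print(m, np.imag(m) / np.real(m))  # composite modulus and dissipation factor 1/Q
```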

