Noise suppression of time-migrated gathers using prestack structure-oriented filtering

2016
Vol 4 (2)
pp. SG19-SG29
Author(s):
Bo Zhang
Tengfei Lin
Shiguang Guo
Oswaldo E. Davogustto
Kurt J. Marfurt

Prestack seismic analysis provides information on rock properties, lithology, fluid content, and the orientation and intensity of anisotropy. However, such analysis demands high-quality seismic data. Unfortunately, noise is always present in seismic data, even after careful processing. Noise in prestack gathers not only contaminates the seismic image, lowering the quality of seismic interpretation, but may also bias prestack inversion for rock properties, such as acoustic- and shear-impedance estimation. Common postmigration data conditioning includes running-window median and Radon filters applied to flattened common-reflection-point gathers. We have combined filters across offset and azimuth with edge-preserving filters along structure to construct a true "5D" filter that preserves amplitude, thereby preconditioning the data for subsequent quantitative analysis. We have evaluated our workflow by applying it to a prestack seismic volume acquired over the Fort Worth Basin, Texas. The inverted results from the noise-suppressed prestack gathers are more laterally continuous and correlate better with well logs than those inverted from conventional time-migrated gathers.
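
As a simplified illustration of the offset-azimuth filtering step (a sketch under an assumed array layout, not the authors' structure-steered 5D implementation), the snippet below applies a running-window median filter across offset and azimuth of a flattened CRP gather; the array shape, window sizes, and function name are assumptions.

```python
# Sketch: running-window median filter across offset and azimuth of a flattened
# CRP gather stored as gather[time, offset, azimuth] (assumed layout).
import numpy as np
from scipy.ndimage import median_filter

def suppress_noise(gather, t_win=1, off_win=3, az_win=3):
    """Median-filter a flattened gather across offset and azimuth.
    t_win = 1 means no smoothing along time."""
    return median_filter(gather, size=(t_win, off_win, az_win), mode="nearest")

# Synthetic example: a flattened reflector plus random noise.
rng = np.random.default_rng(0)
gather = np.zeros((200, 24, 12))
gather[100, :, :] = 1.0
cleaned = suppress_noise(gather + 0.2 * rng.standard_normal(gather.shape))
```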

2021
Author(s):
Ramy Elasrag
Thuraya Al Ghafri
Faaeza Al Katheer
Yousuf Al-Aufi
Ivica Mihaljevic
...  

Abstract Acquiring surface seismic data can be challenging in areas of intense human activity due to the presence of infrastructure (roads, houses, rigs), which often leaves large gaps in the fold of coverage that can span several kilometers. Modern interpolation algorithms can fill gaps up to a certain extent, but the quality of the reconstructed seismic data diminishes as the acquisition gap increases. This is where vintage seismic acquisition can aid processing and imaging, especially if the previous acquisition did not face the same surface obstacles. In this paper, we present how a legacy seismic survey helped fill the data gaps of the new acquisition and produced an improved seismic image. The new acquisition survey is part of the Mega 3D onshore effort undertaken by ADNOC, characterized by dense shot and receiver spacing with a focus on full azimuth and broad bandwidth. Due to surface infrastructure, data could not be completely acquired, leaving a sizable gap in the target area. However, a legacy seismic acquisition undertaken in 2014 had access to these gap zones, as the infrastructure was not present at the time. The legacy seismic data had been processed and imaged previously; however, a simple post-imaging merge would not be adequate because the two datasets were processed using different workflows and imaged using different velocity models. To synchronize the two datasets, we processed them in parallel. Data matching and merging were done before regularization. The merged data were regularized to a radial geometry using 5D Matching Pursuit with Fourier Interpolation (MPFI). This provided 12 well-sampled azimuth sectors that went through surface-consistent processing, multiple attenuation, and residual noise attenuation. The near-surface model was built using data-driven image-based statics (DIBS), while reflection tomography was used to build the anisotropic velocity model. Imaging was done using prestack Kirchhoff depth migration. Processing the legacy survey from the beginning improved the signal-to-noise ratio, which helped ensure that data merging did not degrade the quality of the final image. Building a single near-surface model allowed both datasets to match well in the time domain. Bringing the datasets to the same level was an important precondition for matching and merging. Amplitude and phase analysis showed that the two surveys align well, with minimal differences. Only the portion of the legacy survey that covers the gap was used in the regularization, allowing MPFI to reconstruct the missing data. The regularized data went through surface-multiple attenuation and further noise attenuation as preconditioning for migration. The final image, created using both datasets, allowed the target to be imaged better.
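
The amplitude and phase matching between the legacy and new surveys can be illustrated with a minimal sketch: estimate a bulk time shift and a least-squares amplitude scalar from traces in the overlap zone. This is an assumed, simplified stand-in for the project's actual matching workflow; trace names and the sampling interval are hypothetical.

```python
# Sketch: match a legacy trace to a new-survey trace in the overlap zone by
# estimating a bulk time shift (cross-correlation) and a least-squares gain.
import numpy as np

def match_traces(legacy, new, dt):
    """Return (time_shift_in_seconds, amplitude_scalar) aligning legacy to new."""
    xcorr = np.correlate(new, legacy, mode="full")
    lag = int(np.argmax(xcorr)) - (len(legacy) - 1)   # samples to shift legacy
    shifted = np.roll(legacy, lag)
    gain = np.dot(shifted, new) / np.dot(shifted, shifted)
    return lag * dt, gain
```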


2017
Vol 5 (3)
pp. T279-T285
Author(s):
Parvaneh Karimi
Sergey Fomel
Rui Zhang

Integration of well-log data and seismic data to predict rock properties is an essential but challenging task in reservoir characterization. The standard methods commonly used to create subsurface models do not fully honor the importance of seismic reflectors and detailed structural information in guiding the spatial distribution of rock properties in the presence of complex structures, which can make these methods inaccurate. To overcome initial-model accuracy limitations in structurally complex regimes, we have developed a method that uses seismic image structures to constrain the interpolation of well properties between well locations. A geologically consistent framework provides a more robust initial model that, when inverted with seismic data, delivers a highly detailed yet accurate subsurface model. An application to field data from the North Sea demonstrates the effectiveness of our method and shows that incorporating the seismic structural framework when interpolating rock properties between wells increases the accuracy of the final inverted result compared with standard inversion workflows.
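
A minimal sketch of the underlying idea, assuming the seismic volume has already been flattened so that each horizontal slice follows a reflector: interpolate well properties slice by slice, so the interpolation is guided by structure rather than cutting across it. The inverse-distance scheme and all names here are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: structure-guided interpolation by working slice by slice in a
# flattened volume, using inverse-distance weighting of the well values.
import numpy as np

def interpolate_slice(well_xy, well_vals, nx, ny, power=2.0):
    """Interpolate well values onto one stratal (flattened) slice."""
    xg, yg = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    num = np.zeros((nx, ny))
    den = np.zeros((nx, ny))
    for (wx, wy), val in zip(well_xy, well_vals):
        w = (np.hypot(xg - wx, yg - wy) + 1e-6) ** -power
        num += w * val
        den += w
    return num / den

# Example: two wells with different impedance values on a 100 x 100 slice.
slice_model = interpolate_slice([(20, 30), (70, 80)], [5500.0, 6100.0], 100, 100)
```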


2021
pp. 1-59
Author(s):
Marwa Hussein
Robert R. Stewart
Deborah Sacrey
David H. Johnston
Jonny Wu

Time-lapse (4D) seismic analysis plays a vital role in reservoir management and reservoir simulation model updates. However, 4D seismic data are subject to interference and tuning effects. Being able to resolve and monitor thin reservoirs of different quality can aid in optimizing infill drilling or locating bypassed hydrocarbons. Using 4D seismic data from the Maui field in the offshore Taranaki Basin of New Zealand, we generate typical seismic attributes sensitive to reservoir thickness and rock properties. We find that spectral instantaneous attributes extracted from the time-lapse seismic data illuminate more detailed reservoir features than the same attributes computed on broadband seismic data. We develop an unsupervised machine learning workflow that combines eight spectral instantaneous seismic attributes into single classification volumes for the baseline and monitor surveys using self-organizing maps (SOM). Changes in the SOM natural clusters between the baseline and monitor surveys suggest production-related changes caused primarily by water replacing gas as the reservoir is swept under a strong water drive. The classification volumes also facilitate monitoring water-saturation changes within thin reservoirs (ranging from very good to poor quality) as well as illuminating thin baffles. Thus, these SOM classification volumes show internal reservoir heterogeneity that can be incorporated into reservoir simulation models. Using meaningful SOM clusters, geobodies are generated for the baseline and monitor SOM classifications. The recoverable gas reserves for those geobodies are then computed and compared with production data. The SOM classifications of the Maui 4D seismic data seem to be sensitive to water-saturation changes and subtle pressure depletion due to gas production under a strong water drive.
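
A minimal, generic SOM sketch (not the authors' software or parameter choices), assuming the eight spectral instantaneous attributes have been z-scored and flattened into a matrix X of shape (n_samples, 8); the classification step assigns every sample to its best-matching node, producing the cluster labels that form a classification volume.

```python
# Sketch: a generic self-organizing map (SOM) trained on attribute vectors X of
# shape (n_samples, 8); classify() returns one cluster label per sample.
import numpy as np

def train_som(X, grid=(8, 8), n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    weights = X[rng.integers(0, len(X), n_nodes)].astype(float)   # init from data
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(n_iter):
        lr = lr0 * (1.0 - t / n_iter)                  # decaying learning rate
        sigma = sigma0 * (1.0 - t / n_iter) + 1e-3     # decaying neighborhood
        x = X[rng.integers(0, len(X))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def classify(X, weights):
    d = ((X[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)                        # SOM node index per sample
```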


Geophysics
2017
Vol 82 (1)
pp. IM13-IM20
Author(s):
Xinming Wu
Guillaume Caumon

Well-seismic ties allow rock properties measured at well locations to be compared with seismic data and are therefore useful for seismic interpretation. Numerous methods have been proposed to compute well-seismic ties by correlating real seismograms with synthetic seismograms computed from velocity and density logs. However, most methods tie multiple wells to seismic data one by one and hence do not guarantee lateral consistency among the ties. We therefore propose a method to simultaneously tie multiple wells to seismic data. In this method, we first flatten the synthetic and corresponding real seismograms so that all seismic reflectors are horizontally aligned. By doing this, we turn the tying of multiple wells into a 1D correlation problem. We then compute only vertically variant but laterally constant shifts to correlate these horizontally aligned (flattened) synthetic and real seismograms. This two-step correlation method maintains lateral consistency among multiple well ties by computing a laterally and vertically optimized correlation of all synthetic and real seismograms. We applied our method to a real 3D seismic image with multiple wells and obtained laterally consistent well-seismic ties.
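
A minimal sketch of the two-step idea, under the assumption that the synthetic and real seismograms have already been flattened: estimate one laterally constant shift per time window by stacking correlations over all wells, so every well tie shares the same vertically variant shift function. The window length, maximum lag, and function names are assumptions, not the authors' implementation.

```python
# Sketch: one laterally constant shift per time window, estimated by stacking
# correlations over all wells on already-flattened synthetic/real seismograms.
import numpy as np

def windowed_shifts(synthetic, real, win=100, max_lag=20):
    """synthetic, real: arrays of shape (n_wells, n_samples). Returns one shift
    (in samples) per window, shared by all wells."""
    n_wells, n_samples = synthetic.shape
    lags = np.arange(-max_lag, max_lag + 1)
    shifts = []
    for start in range(0, n_samples - win + 1, win):
        score = np.zeros(len(lags))
        for k, lag in enumerate(lags):
            s = np.roll(synthetic[:, start:start + win], lag, axis=1)
            score[k] = np.sum(s * real[:, start:start + win])  # stacked correlation
        shifts.append(lags[np.argmax(score)])
    return np.array(shifts)
```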


2018
Vol 6 (2)
pp. T349-T365
Author(s):
Xuan Qi
Kurt Marfurt

One of the key tasks of a seismic interpreter is to map lateral changes in surfaces, including not only faults, folds, and flexures, but also incisements, diapirism, and dissolution features. Volumetrically, coherence provides rapid visualization of faults, and curvature provides rapid visualization of folds and flexures. Aberrancy measures the lateral change (or gradient) of curvature along a picked or inferred surface and complements both curvature and coherence. In normally faulted terrains, the aberrancy anomaly tracks the coherence anomaly and falls between the most-positive-curvature anomaly defining the footwall and the most-negative-curvature anomaly defining the hanging wall. Aberrancy can delineate faults whose throw falls below seismic resolution or is distributed across a suite of smaller conjugate faults that do not exhibit a coherence anomaly. Previously limited to horizon computations, aberrancy is extended here to uninterpreted seismic data volumes. We apply our volumetric aberrancy calculation to a data volume acquired over the Barnett Shale gas reservoir of the Fort Worth Basin, Texas. In this area, the Barnett Shale is bounded on the top by the Marble Falls Limestone and on the bottom by the Ellenburger Dolomite. Basement faulting controls karstification in the Ellenburger, resulting in the well-known "string of pearls" pattern seen on coherence images. Aberrancy delineates small karst features that are, in many places, too smoothly varying to be detected by coherence. Equally important, aberrancy provides the azimuthal orientation of the fault and flexure anomalies.
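
A minimal horizon-based sketch (the paper's contribution is the volumetric computation, which is not reproduced here): compute curvature of a gridded horizon z(x, y), then take aberrancy as the magnitude and azimuth of the lateral gradient of that curvature. Grid spacings and function names are assumptions.

```python
# Sketch: mean curvature of a gridded horizon z(x, y), then aberrancy as the
# magnitude and azimuth of the lateral gradient of that curvature.
import numpy as np

def mean_curvature(z, dx=1.0, dy=1.0):
    zx, zy = np.gradient(z, dx, dy)
    zxx = np.gradient(zx, dx, axis=0)
    zyy = np.gradient(zy, dy, axis=1)
    zxy = np.gradient(zx, dy, axis=1)
    num = (1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy
    return num / (2.0 * (1 + zx**2 + zy**2) ** 1.5)

def aberrancy(z, dx=1.0, dy=1.0):
    k = mean_curvature(z, dx, dy)
    kx, ky = np.gradient(k, dx, dy)
    magnitude = np.hypot(kx, ky)                 # lateral change of curvature
    azimuth = np.degrees(np.arctan2(ky, kx))     # orientation of that change
    return magnitude, azimuth
```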


2019
Vol 38 (4)
pp. 292-297
Author(s):
Harshvardhan
Madhubanti Bose
Soumya Chakraborty
Monosvita Chaliha
Sujoy Mukherjee
...  

The offshore Lakshmi Field, located in the southern part of the Cambay Basin in the CB/OS-2 block in western India, is a success story of transitioning from a plateauing gas field to an oil-producing asset. Hydrocarbons sourced from the Hazira Shale at Lakshmi Field are trapped in inversion-related four-way dip closures. The reservoirs are within the Miocene-age Tarkeshwar and overlying Babaguru formations; both have excellent reservoir quality, with porosity ranging from 25% to 30% and Darcy-scale permeability. During the initial exploration stage, the presence of oil was established in the deeper lower Tarkeshwar sands. However, the full oil potential of the field was not realized in the early stage of development due to the difficulty of characterizing laterally and vertically discontinuous thin sands. These sands are often not detected in conventional 3D seismic data due to severe attenuation-related masking by overlying thick gas sands. Given the uncertainty of the oil potential of the field, a revised look using advanced seismic technologies was undertaken to drive a new infill drilling campaign. Seismic reprocessing using Q-tomographic inversion was performed to address the gas-attenuation problem, resulting in an improved seismic image. The new seismic data enabled characterization of thin reservoir sands using analytical techniques such as spectral decomposition and amplitude-variation-with-offset (AVO) attribute analysis. An integrated reservoir-characterization approach using field analogs, interpretation within a sequence stratigraphic framework, concepts of seismic geomorphology, and quantitative seismic analysis helped identify optimal areas of the sand fairway for infill well placement. Several infill wells have since been drilled and encountered good pay zones, validating the integrated approach. This successful campaign doubled oil production at Lakshmi and extended the field life by increasing reserves.
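
As a hedged illustration of the AVO attribute analysis mentioned above (not the actual workflow used on the Lakshmi data), the sketch below fits the two-term Shuey approximation R(θ) ≈ A + B sin²θ to an angle gather's amplitudes at one time sample to extract the intercept A and gradient B; the example amplitudes and angles are invented.

```python
# Sketch: two-term AVO fit R(theta) ~ A + B*sin^2(theta) at one time sample.
import numpy as np

def avo_intercept_gradient(amplitudes, angles_deg):
    """Least-squares intercept A and gradient B from an angle gather sample."""
    sin2 = np.sin(np.radians(angles_deg)) ** 2
    design = np.column_stack([np.ones_like(sin2), sin2])
    (A, B), *_ = np.linalg.lstsq(design, amplitudes, rcond=None)
    return A, B

# Example: amplitudes picked at incidence angles of 5-35 degrees.
A, B = avo_intercept_gradient(np.array([0.10, 0.08, 0.05, 0.01]),
                              np.array([5.0, 15.0, 25.0, 35.0]))
```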


2021
pp. 1-52
Author(s):
Hongliu Zeng
Yawen He
Leo Zeng

We have developed a new machine-learning (ML) workflow that uses random forest (RF) regression to predict sedimentary-rock properties from stacked and migrated 3D seismic data. The training, validation, and testing are performed with 40 features extracted from a geologically realistic 46 × 66-trace model built in the Miocene Powderhorn Field in South Texas. We focus on the responses of the RF model to sedimentary facies and the strategies adopted to achieve better prediction under various data conditions. We apply explained variation (R2) and root-mean-square (rms) prediction errors to map the relationship between prediction quality and sedimentary facies. In the single-well model, the R2 and rms error maps closely resemble sand-percentage (lithofacies) maps, showing the facies control on the quality of the ML model. We observe that training with a small well data set (1–10 wells) leads to low and unstable test scores (R2 = 0.2–0.7). The R2 score increases and stabilizes with more (as many as 1000) training wells (R2 = 0.7–0.9), effectively correcting the facies bias. The stratigraphic and spatial features are useful and should be used. Weak to moderate random noise (−20 to −15 dB) slightly lowers the training score (R2 < 0.05) and should not be a major concern. Models supported by sparse well data can outperform linear regression and model-based inversion and can be useful if caution is exercised. In the best-case scenario (500 wells), the predicted model largely duplicates the true model with a significant improvement in accuracy (R2 = 0.85) and stability. Such results can be applied in most, if not all, exploration and production practices.
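
A minimal sketch of the RF regression setup using scikit-learn, with randomly generated stand-in features and a synthetic target in place of the study's 40 seismic-derived features and well properties; the metrics (R2 and rms error) mirror those reported in the paper, but the data, split, and hyperparameters are assumptions.

```python
# Sketch: random forest regression of a rock property from 40 attribute
# features, scored with R^2 and RMS error on held-out samples.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 40))                    # 40 stand-in features
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.standard_normal(2000)  # stand-in property

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
pred = rf.predict(X_test)
print("R2 :", r2_score(y_test, pred))
print("RMS:", np.sqrt(mean_squared_error(y_test, pred)))
```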


2019
Vol 13 (4)
pp. 334-347
Author(s):
Liyan Zhao
Huan Wang
Jing Wang

Background: Subspace learning-based dimensionality reduction algorithms are important and have been widely applied in data mining, pattern recognition, and computer vision. They reduce dimensionality successfully when data points are evenly distributed in the high-dimensional space. However, some of them distort the local geometric structure of the original dataset and produce a poor low-dimensional embedding when the data samples are unevenly distributed in the original space. Methods: In this paper, we propose a supervised dimension-reduction method based on local neighborhood optimization to handle unevenly distributed high-dimensional data. It extends the widely used Locally Linear Embedding (LLE) framework and is named LNOLLE. The method uses the class labels of the data to optimize the local neighborhoods, which improves inter-class separability in the low-dimensional space and avoids collapsing samples of different classes together when mapping unevenly distributed data. This effectively preserves the geometric and topological structure of the original data points. Results: We apply the proposed LNOLLE method to image classification and face recognition, where it achieves good classification results and higher face-recognition accuracy than existing manifold learning methods, including popular supervised algorithms. In addition, we use the reconstruction step of the method for noise suppression in seismic images. To the best of our knowledge, this is the first manifold learning approach applied to high-dimensional nonlinear seismic data for noise suppression. Conclusion: Experimental results on a forward model and real seismic data show that LNOLLE improves the signal-to-noise ratio of seismic images compared with the widely used singular value decomposition (SVD) filtering method.
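
For orientation, a minimal sketch of the standard (unsupervised) LLE baseline that LNOLLE extends, using scikit-learn on a stock dataset; the supervised neighborhood optimization itself is only indicated in a comment and is an assumption about the idea, not the authors' implementation.

```python
# Sketch: the unsupervised LLE baseline that LNOLLE builds on, via scikit-learn.
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding

X, labels = load_digits(return_X_y=True)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
embedding = lle.fit_transform(X)   # 2D embedding of the 64-D digit images

# Supervised flavor (concept only, an assumption about LNOLLE's idea): restrict
# each sample's neighborhood to points of the same class before computing the
# reconstruction weights, so different classes stay separated in the embedding.
```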


Geophysics
2006
Vol 71 (5)
pp. P41-P51
Author(s):
Saleh Al-Dossary
Kurt J. Marfurt

One of the most accepted geologic models relates reflector curvature to the presence of open and closed fractures. Such fractures, as well as other small discontinuities, are relatively small and fall below the resolution of conventional seismic imaging. Depending on the tectonic regime, structural geologists link open fractures either to Gaussian curvature or to curvature in the dip or strike directions. Reflector curvature is fractal in nature, with different tectonic and lithologic effects being illuminated at different spectral wavelengths (scales). Until now, such curvature estimates have been limited to the analysis of picked horizons. We have developed what we believe to be the first volumetric spectral estimates of reflector curvature. We find that the most-positive and most-negative curvatures are the most valuable in the conventional mapping of lineations, including faults, folds, and flexures. Curvature is mathematically independent of, and interpretively complementary to, the well-established coherence geometric attribute. We find the long-spectral-wavelength curvature estimates to be of particular value in extracting subtle, broad features in the seismic data, such as folds, flexures, collapse features, fault drag, and under- and overmigrated fault terminations. We illustrate the value of these spectral curvature estimates and compare them with other attributes through application to two land data sets: a salt dome from the onshore Louisiana Gulf Coast and a fractured/karsted data volume from the Fort Worth Basin of North Texas.
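
A minimal horizon-based sketch of the multiscale idea (the paper's volumetric spectral estimates are not reproduced here): compute most-positive and most-negative curvature of a gridded horizon after smoothing at a short and a long wavelength. The Gaussian smoothing, test grid, and function names are assumptions.

```python
# Sketch: most-positive/most-negative curvature of a gridded horizon after
# Gaussian smoothing at a short and a long wavelength (multiscale estimates).
import numpy as np
from scipy.ndimage import gaussian_filter

def principal_curvatures(z, dx=1.0):
    zx, zy = np.gradient(z, dx, dx)
    zxx = np.gradient(zx, dx, axis=0)
    zyy = np.gradient(zy, dx, axis=1)
    zxy = np.gradient(zx, dx, axis=1)
    mean_c = 0.5 * (zxx + zyy)                        # quadratic-surface approx.
    gauss_c = zxx * zyy - zxy**2
    root = np.sqrt(np.maximum(mean_c**2 - gauss_c, 0.0))
    return mean_c + root, mean_c - root               # k_most_pos, k_most_neg

horizon = np.random.default_rng(1).standard_normal((200, 200)).cumsum(0).cumsum(1)
k_pos_short, k_neg_short = principal_curvatures(gaussian_filter(horizon, sigma=2))
k_pos_long, k_neg_long = principal_curvatures(gaussian_filter(horizon, sigma=15))
```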

