Coherence attribute applications on seismic data in various guises — Part 2

2018 ◽  
Vol 6 (3) ◽  
pp. T531-T541 ◽  
Author(s):  
Satinder Chopra ◽  
Kurt J. Marfurt

We have previously discussed alternative means of modifying the frequency spectrum of the input seismic data to alter the resulting coherence image. The simplest method was to increase the high-frequency content by computing the first and second derivatives of the original seismic amplitudes. We also evaluated more sophisticated techniques, including the application of structure-oriented filtering to different spectral components before spectral balancing, thin-bed reflectivity inversion, bandwidth extension, and the amplitude volume technique. Here, we further examine the value of coherence computed from individual spectral voice components, as well as alternative means of combining three or more such coherence images to provide a single volume for interpretation.
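As a quick illustration of the simplest method above, a time derivative scales each frequency component by roughly 2πf, so differentiating a trace boosts its high frequencies relative to its lows. A minimal sketch on a synthetic two-tone trace (the sample interval and frequencies are illustrative, not values from the paper):

```python
import numpy as np

dt = 0.004                          # 4 ms sample interval (assumed)
t = np.arange(0, 1.0, dt)
# Synthetic trace: a strong 10 Hz tone plus a weaker 40 Hz tone
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

d1 = np.gradient(trace, dt)         # first derivative: gain grows with frequency

spec0 = np.abs(np.fft.rfft(trace))
spec1 = np.abs(np.fft.rfft(d1))
freqs = np.fft.rfftfreq(len(t), dt)
i10 = np.argmin(np.abs(freqs - 10))
i40 = np.argmin(np.abs(freqs - 40))

ratio_before = spec0[i40] / spec0[i10]
ratio_after = spec1[i40] / spec1[i10]
print(ratio_after > ratio_before)   # the 40 Hz tone is relatively enhanced
```

The second derivative repeats the effect, weighting each frequency by (2πf)² and whitening the spectrum further, at the cost of amplifying high-frequency noise.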

Geophysics ◽  
2017 ◽  
Vol 82 (5) ◽  
pp. P61-P73 ◽  
Author(s):  
Lasse Amundsen ◽  
Ørjan Pedersen ◽  
Are Osen ◽  
Johan O. A. Robertsson ◽  
Martin Landrø

The source depth influences the frequency band of seismic data. Because of the source ghost effect, it is advantageous to deploy sources deep to enhance the low-frequency content of seismic data. However, for a given source volume, the bubble period decreases with source depth, thereby degrading the low-frequency content; at the same time, deep sources reduce the seismic bandwidth. Deploying sources at shallower depths has the opposite effects: a shallow source provides improved high-frequency content at the cost of degraded low-frequency content due to the ghosting effect, whereas the bubble period increases as the source depth decreases, slightly improving the low-frequency content. One solution to the challenge of extending the bandwidth on both the low- and high-frequency sides is to deploy over/under sources, in which sources are towed at two depths. We have developed a mathematical ghost model for over/under point sources fired in sequential and simultaneous modes, and we have found an inverse model, which on common-receiver gathers can jointly perform designature and deghosting of the over/under source measurements. We relate the model for simultaneous-mode shooting to recent work on general multidepth-level array sources with previously known solutions. Two numerical examples related to over/under sequential shooting demonstrate the main principles and the viability of the method.
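The shallow-versus-deep trade-off caused by the ghost can be seen directly in the standard ghost response |1 + r·exp(−i4πfz/c)|. A minimal numerical sketch (free-surface reflectivity r = −1, and the two depths are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

c = 1500.0                           # m/s, water velocity
f = np.linspace(0.1, 100.0, 1000)    # frequency axis in Hz

def ghost_amplitude(f, z, r=-1.0):
    # |1 + r*exp(-i*4*pi*f*z/c)| = 2*|sin(2*pi*f*z/c)| for r = -1:
    # the amplitude filter imposed by the source ghost at depth z.
    return np.abs(1.0 + r * np.exp(-1j * 4.0 * np.pi * f * z / c))

shallow = ghost_amplitude(f, 5.0)    # 5 m source depth (assumed)
deep = ghost_amplitude(f, 15.0)      # 15 m source depth (assumed)

i5 = np.argmin(np.abs(f - 5.0))
print(deep[i5] > shallow[i5])        # the deep source boosts the low end
# But the first ghost notch sits at f = c/(2z): 50 Hz for the deep source
# versus 150 Hz for the shallow one, so the deep source limits bandwidth.
```

This reproduces the trade-off described above: deepening the source strengthens low frequencies while pulling the ghost notch down into the usable band.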


2020 ◽  
Vol 8 (2) ◽  
pp. T217-T229
Author(s):  
Yang Mu ◽  
John Castagna ◽  
Gabriel Gil

Sparse-layer reflectivity inversion decomposes a seismic trace into a limited number of simple layer responses and their corresponding reflection coefficients for top and base reflections. In contrast to sparse-spike inversion, the applied sparsity constraint is less biased against layer thickness and can thus better resolve thin subtuning layers. Application to a 3D seismic data set in Southern Alberta produces inverted impedances that have better temporal resolution and lateral stability and a less blocky appearance than sparse-spike inversion. Bandwidth extension harmonically extrapolated the frequency spectra of the inverted layers and nearly doubled the usable bandwidth. Although the prospective glauconitic sand tunes at approximately 37 m, bandwidth extension reduced the tuning thickness to 22 m. Bandwidth-extended data correlate better with synthetic traces than the original seismic data do and reveal features below the original tuning thickness. After bandwidth extension, the channel top and base are more evident on inline and crossline profiles. Lateral facies changes interpreted from the inverted acoustic impedance of the bandwidth-extended data are consistent with observations in wells.
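The quoted reduction in tuning thickness is consistent with the quarter-wavelength rule b ≈ v/(4f). A back-of-the-envelope check (the interval velocity and dominant frequencies below are assumed for illustration, not values from the study):

```python
def tuning_thickness(v, f):
    # Quarter-wavelength tuning thickness in meters: b = v / (4 f)
    return v / (4.0 * f)

v = 3400.0          # m/s, assumed interval velocity of the glauconitic interval
f_before = 23.0     # Hz, assumed dominant frequency before bandwidth extension

b_before = tuning_thickness(v, f_before)
# Raising the dominant frequency shrinks the tuning thickness proportionally;
# the frequency below is back-computed from the 37 m -> 22 m reduction.
f_after = f_before * (37.0 / 22.0)
b_after = tuning_thickness(v, f_after)
print(round(b_before), round(b_after))   # roughly 37 and 22 (meters)
```

The point of the sketch is only the inverse proportionality between dominant frequency and tuning thickness, which is why nearly doubling the bandwidth can move tuning from 37 m toward 22 m.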


Geophysics ◽  
2017 ◽  
Vol 82 (4) ◽  
pp. W1-W16 ◽  
Author(s):  
Chen Liang ◽  
John Castagna ◽  
Ricardo Zavala Torres

Various postprocessing methods can be applied to seismic data to extend the spectral bandwidth and potentially increase the seismic resolution. Frequency invention techniques, including phase acceleration and loop reconvolution, produce spectrally broadened seismic sections but arbitrarily create high frequencies without a physical basis. Tests extending the bandwidth of low-frequency synthetics with these methods indicate that the invented frequencies do not tie high-frequency synthetics generated from the same reflectivity series. Furthermore, synthetic wedge models indicate that the invented high-frequency seismic traces do not improve thin-layer resolution. Frequency invention outputs may serve as useful attributes, but they should not be used for quantitative work and do not improve actual resolution. On the other hand, under appropriate circumstances, layer frequency responses can be extrapolated to frequencies outside the band of the original data using spectral periodicities determined from within the original seismic bandwidth. This can be accomplished by harmonic extrapolation. For blocky earth structures, synthetic tests show that such spectral extrapolation can readily double the bandwidth, even in the presence of noise. Wedge models illustrate the resulting resolution improvement. Synthetic tests suggest that the more complicated the earth structure, the less valid the bandwidth extension that harmonic extrapolation can achieve.
Tests of the frequency invention methods and harmonic extrapolation on field seismic data demonstrate that (1) the frequency invention methods modify the original seismic band such that the original data cannot be recovered by simple band-pass filtering, whereas harmonic extrapolation can be filtered back to the original band with good fidelity and (2) harmonic extrapolation exhibits acceptable ties between real and synthetic seismic data outside the original seismic band, whereas frequency invention methods have unfavorable well ties in the cases studied.
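The physical basis of harmonic extrapolation can be seen in the spectrum of a single thin layer: top and base reflections of opposite polarity separated by time T have amplitude spectrum 2|sin(πfT)|, which is periodic in frequency, so the periodicity measured inside the band determines the spectrum outside it. A minimal synthetic sketch (the layer thickness and sampling are illustrative):

```python
import numpy as np

dt = 0.004                      # 4 ms sampling (assumed)
n = 256
r = np.zeros(n)
r[50], r[58] = 1.0, -1.0        # top/base reflections of a thin layer, T = 32 ms

spec = np.abs(np.fft.rfft(r))
freqs = np.fft.rfftfreq(n, dt)

# The amplitude spectrum is exactly 2*|sin(pi*f*T)|: periodic in f with
# period 1/T, so in-band samples predict the out-of-band values.
T = 8 * dt
model = 2.0 * np.abs(np.sin(np.pi * freqs * T))
print(np.allclose(spec, model))
```

For more complicated (less blocky) reflectivity, the spectrum is a superposition of many such periodic terms, which is why the abstract notes the extrapolation becomes less valid as structural complexity grows.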


Geophysics ◽  
2003 ◽  
Vol 68 (3) ◽  
pp. 1032-1042 ◽  
Author(s):  
Sergey Fomel

Stacking operators are widely used in seismic imaging and seismic data processing. Examples include Kirchhoff datuming, migration, offset continuation, dip moveout, and velocity transform. Two primary approaches exist for inverting such operators. The first approach is iterative least‐squares optimization, which involves the construction of the adjoint operator. The second approach is asymptotic inversion, where an approximate inverse operator is constructed in the high‐frequency asymptotics. Adjoint and asymptotic inverse operators share the same kinematic properties, but their amplitudes (weighting functions) are defined differently. This paper describes a theory for reconciling the two approaches. I introduce a pair of asymptotic pseudounitary operators, which possess both the property of being adjoint and the property of being asymptotically inverse. The weighting function of the asymptotic pseudounitary stacking operators is shown to be completely defined by the derivatives of the operator kinematics. I exemplify the general theory by considering several particular examples of stacking operators. Simple numerical experiments demonstrate a noticeable gain in efficiency when the asymptotic pseudounitary operators are applied for preconditioning iterative least‐squares optimization.
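A standard check when constructing the adjoint of a stacking operator is the dot-product test, ⟨Lm, d⟩ = ⟨m, Lᵀd⟩. A toy sketch with a random matrix standing in for the operator (purely illustrative, not an operator from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(30, 20))   # stand-in for a linear stacking operator
m = rng.normal(size=20)         # model-space vector
d = rng.normal(size=30)         # data-space vector

# Adjoint (dot-product) test: <L m, d> must equal <m, L^T d>.
lhs = np.dot(L @ m, d)
rhs = np.dot(m, L.T @ d)
print(np.isclose(lhs, rhs))
```

An operator passing this test is usable in iterative least-squares optimization; the asymptotic pseudounitary construction additionally chooses the weighting so the adjoint approximates the inverse, which is what makes it an effective preconditioner.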


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. P1-P8 ◽  
Author(s):  
Saleh Al-Dossary ◽  
Kurt J. Marfurt

Recently developed seismic attributes such as volumetric curvature and amplitude gradients enhance our ability to detect lineaments. However, because these attributes are based on derivatives of either dip and azimuth or the seismic data themselves, they can also enhance high-frequency noise. Recently published structure-oriented filtering algorithms show that noise in seismic data can be removed along reflectors while preserving major structural and stratigraphic discontinuities. In one implementation, the smoothing process tries to select the most homogeneous window from a suite of candidate windows containing the analysis point. A second implementation damps the smoothing operation if a discontinuity is detected. Unfortunately, neither of these algorithms preserves thin or small lineaments that are only one voxel in width. To overcome this defect, we evaluate a suite of nonlinear feature-preserving filters developed in the image-processing and synthetic aperture radar (SAR) world and apply them to both synthetic and real 3D dip-and-azimuth volumes of fractured geology from the Fort Worth Basin, USA. We find that the multistage, median-based, modified trimmed-mean algorithm preserves narrow geologically significant features of interest, while suppressing random noise and acquisition footprint.
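The core idea of a modified trimmed-mean filter can be shown in 1D: average only the samples that lie within a tolerance of the window median, so isolated noise spikes are rejected while sharp features survive. This toy version is a sketch of the principle, not the paper's multistage 3D implementation:

```python
import numpy as np

def modified_trimmed_mean(x, half=2, tol=0.5):
    # For each sample, take a window, find its median, and average only
    # the samples within `tol` of that median (outliers are trimmed).
    y = np.empty(len(x))
    n = len(x)
    for i in range(n):
        w = x[max(0, i - half): min(n, i + half + 1)]
        m = np.median(w)
        y[i] = w[np.abs(w - m) <= tol].mean()
    return y

# A step edge with one noise spike: the spike is removed, the edge is kept.
x = np.array([0, 0, 0, 5.0, 0, 0, 1, 1, 1, 1], dtype=float)
y = modified_trimmed_mean(x)
print(y[3], y[8])   # spike suppressed to 0.0; plateau value 1.0 preserved
```

A plain mean would smear the spike into its neighbors and blur the step; trimming around the median is what lets single-voxel lineaments survive the smoothing.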


2017 ◽  
Vol 919 (1) ◽  
pp. 7-12
Author(s):  
N.A Sorokin

A method for determining geopotential parameters from gradiometry data is considered. The second derivative of the gravitational potential with respect to the rectangular coordinates x, y, z is used as the measured variable in the correction equation. Cunningham polynomials were used to obtain the calculated value of the measured quantity required to form the free term of the correction equation. We give algorithms for computing the second derivatives of the Cunningham polynomials with respect to the rectangular coordinates x, y, z, which allow the second derivatives of the geopotential to be calculated at those coordinates. We then transform the derivatives from the Cartesian coordinate system into the coordinate system of the gradiometer, which allows the free term of the correction equation to be calculated. Afterwards, the coefficients of the correction equation are calculated by differentiating the formula for the second derivative of the gravitational potential with respect to the rectangular coordinates x, y, z. The result is a coefficient matrix of the correction equations and a vector of free terms for each component of the geopotential tensor. Because the number of condition equations greatly exceeds the number of unknown parameters, we form a system of normal equations, from whose solution we determine the required corrections to the harmonic coefficients.
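The second derivatives of the potential that serve as the measured quantities can be sanity-checked against a point-mass field, for which they are analytic. A small numerical sketch (the evaluation point and step size are illustrative; the method above uses Cunningham polynomial recursions, not finite differences):

```python
import numpy as np

GM = 3.986004418e14                  # m^3/s^2, Earth's gravitational parameter

def V(p):
    # Point-mass potential V = GM / r
    return GM / np.linalg.norm(p)

def Vxx_analytic(p):
    # Second derivative d^2V/dx^2 = GM * (3x^2 - r^2) / r^5 for a point mass
    x = p[0]
    r = np.linalg.norm(p)
    return GM * (3.0 * x**2 - r**2) / r**5

def Vxx_numeric(p, h=100.0):
    # Central second difference in x as an independent check
    e = np.array([h, 0.0, 0.0])
    return (V(p + e) - 2.0 * V(p) + V(p - e)) / h**2

p = np.array([7.0e6, 1.0e6, 2.0e6])  # illustrative satellite-altitude point (m)
rel_err = abs(Vxx_analytic(p) - Vxx_numeric(p)) / abs(Vxx_analytic(p))
print(rel_err < 1e-4)
```

The same cross-check applies component by component to the full tensor of second derivatives before it is rotated into the gradiometer frame.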


Filomat ◽  
2017 ◽  
Vol 31 (4) ◽  
pp. 1009-1016 ◽  
Author(s):  
Ahmet Akdemir ◽  
Özdemir Emin ◽  
Ardıç Avcı ◽  
Abdullatif Yalçın

In this paper, we first prove an integral identity from which one can derive several new equalities for special selections of n. Second, based on this identity, we establish more general integral inequalities for functions whose second derivatives in absolute value are GA-convex.


1985 ◽  
Vol 50 (4) ◽  
pp. 791-798 ◽  
Author(s):  
Vilém Kodýtek

The McMillan-Mayer (MM) free energy per unit volume of solution, AMM, is employed as a generating function of the MM system of thermodynamic quantities for solutions in the state of osmotic equilibrium with pure solvent. This system can be defined by replacing the quantities G, T, P, and m in the definition of the Lewis-Randall (LR) system with AMM, T, P0, and c (P0 being the pure-solvent pressure). In this way, the LR-to-MM conversion relations for the first derivatives of the free energy are obtained in a simple form. New relations are derived for its second derivatives.


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
D Garcia Iglesias ◽  
J.M Rubin Lopez ◽  
D Perez Diez ◽  
C Moris De La Tassa ◽  
F.J De Cos Juez ◽  
...  

Abstract Introduction The signal-averaged ECG (SAECG) is a classical method for Sudden Cardiac Death (SCD) risk assessment, by means of late potentials (LP) in the filtered QRS (fQRS) [1]. However, it is highly dependent on noise and requires long recordings, which makes it tedious to use. The wavelet continuous transform (WCT), meanwhile, is easier to use and lets us measure the high-frequency content (HFC) of the QRS and QT intervals, which also correlates with the risk of SCD [2,3]. Whether the HFC of the QRS and QT measured with the WCT is a possible surrogate for LP has never been demonstrated. Objective To determine whether there is any relationship between the HFC measured with the WCT and the LP analyzed with the SAECG. Methods Data from 50 consecutive healthy individuals were analyzed. The standard ECG was digitally recorded for 3 consecutive minutes. Eight consecutive QT complexes were used for the WCT analysis, and all available QRS complexes were used for the SAECG analysis. The time-frequency data of each QT complex were collected using the WCT as previously described [3], and the total, QRS, and QT power were obtained for each patient. For the SAECG, bipolar X, Y, and Z leads were used with a bidirectional filter at 40 to 250 Hz [1]. LP were defined as less than 0.05 z in the terminal part of the filtered QRS, and the duration (SAECG LP duration) and root mean square (SAECG LP content) of these LP were calculated. Pearson's test was used to correlate the power content from the WCT analysis with the LP in the SAECG. Results There is a strong correlation between total power and SAECG LP content (r=0.621, p<0.001). Both ST power (r=0.567, p<0.001) and QRS power (r=0.404, p=0.004) are related to SAECG LP content. No correlation was found between the power content (total, QRS, or ST power) and SAECG LP duration, nor between SAECG LP content and duration. Conclusions Total, QRS, and ST power measured with the WCT are good surrogates of SAECG LP content. 
No correlation was found between the WCT analysis and SAECG LP duration, nor between SAECG LP content and duration. This can be of high interest, since the WCT is an easier technique, not needing long recordings and being less affected by noise. Funding Acknowledgement Type of funding source: None
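The correlation step can be sketched as follows: Pearson's r between two series, with the t-statistic conventionally used for its two-sided significance test. The data below are simulated stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50                                    # sample size matching the cohort
power = rng.normal(size=n)                # stand-in for WCT total power
lp = 0.6 * power + rng.normal(scale=0.8, size=n)  # stand-in for SAECG LP content

# Pearson correlation coefficient
r = np.corrcoef(power, lp)[0, 1]

# Significance via t = r*sqrt(n-2)/sqrt(1-r^2), referred to a t
# distribution with n-2 = 48 degrees of freedom.
t = r * np.sqrt(n - 2) / np.sqrt(1.0 - r**2)
print(np.isfinite(t) and -1.0 <= r <= 1.0)
```

With n = 50, an observed r of about 0.62 (as reported for total power) gives a large t-statistic and hence p < 0.001, consistent with the abstract's figures.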


1990 ◽  
Vol 112 (1) ◽  
pp. 83-87 ◽  
Author(s):  
R. H. Fries ◽  
B. M. Coffey

Solution of rail vehicle dynamics models by means of numerical simulation has become more prevalent and more sophisticated in recent years. At the same time, analysts and designers are increasingly interested in the response of vehicles to random rail irregularities. The work described in this paper provides a convenient method to generate random vertical and crosslevel irregularities when their time histories are required as inputs to a numerical simulation. The solution begins with mathematical models of vertical and crosslevel power spectral densities (PSDs) representing PSDs of track classes 4, 5, and 6. The method implements state-space models of shape filters whose frequency response magnitude squared matches the desired PSDs. The shape filters give time histories possessing the proper spectral content when driven by white noise inputs. The state equations are solved directly under the assumption that the white noise inputs are constant between time steps. Thus, the state transition matrix and the forcing matrix are obtained in closed form. Some simulations require not only vertical and crosslevel alignments, but also the first and occasionally the second derivatives of these signals. To accommodate these requirements, the first and second derivatives of the signals are also generated. The responses of the random vertical and crosslevel generators depend upon vehicle speed, sample interval, and track class. They possess the desired PSDs over wide ranges of speed and sample interval. The paper includes a comparison between synthetic and measured spectral characteristics of class 4 track. The agreement is very good.
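The closed-form discretization described above can be sketched for a first-order (scalar) shape filter x' = −a x + w: with the white-noise input held constant between steps, the state transition and forcing terms are exact exponentials. The corner frequency, step size, and seed below are illustrative, not the paper's class-4/5/6 track models:

```python
import numpy as np

a = 2.0 * np.pi * 1.0        # rad/s corner frequency (assumed 1 Hz)
dt = 0.01                    # s, sample interval (assumed)
n = 100_000

phi = np.exp(-a * dt)        # closed-form state transition "matrix" (scalar)
gamma = (1.0 - phi) / a      # forcing term for input constant over each step

rng = np.random.default_rng(7)
w = rng.normal(size=n)       # white-noise drive
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = phi * x[k] + gamma * w[k]

# The output PSD rolls off as ~1/(a^2 + (2*pi*f)^2): band-averaged power
# well below the corner should dominate power well above it.
power = np.abs(np.fft.rfft(x))**2
freqs = np.fft.rfftfreq(n, dt)
low = power[(freqs > 0.05) & (freqs < 0.2)].mean()
high = power[(freqs > 5.0) & (freqs < 20.0)].mean()
print(low > 10.0 * high)
```

In the vector case the same construction uses the matrix exponential for the state transition matrix, and differentiating the state equations yields the first and second derivatives of the generated irregularities that some simulations require.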

