Computation of dips and azimuths with weighted structural tensor approach

Geophysics, 2006, Vol. 71 (5), pp. V119-V121
Author(s): Yi Luo, Yuchun Eugene Wang, Nasher M. AlBinHassan, Mohammed N. Alfaraj

The structural tensor method can be used to compute the dips and azimuths (i.e., the orientation) embedded in seismic data. However, it may produce erratic and uninterpretable orientations when the data are noisy. To overcome this difficulty, we incorporate a data-adaptive weighting function to reformulate the gradient structural tensor. In our experiments, the squared instantaneous power is adopted as the weight factor, which simplifies the computation when the instantaneous phase is used as input. Real-data examples illustrate that such a weighting function produces more interpretable and spatially consistent orientations than conventional approaches.
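The reformulation is straightforward to sketch in 2D: weight each outer product of image gradients by the squared instantaneous power before local smoothing, then take the orientation of the dominant eigenvector. Below is a minimal NumPy/SciPy sketch under that reading of the abstract; the function name and the Gaussian smoothing choice are ours, not the authors'.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import gaussian_filter

def weighted_dip(section, sigma=4.0):
    """Local dip (radians) of a 2D section indexed [time, trace]."""
    gt, gx = np.gradient(section)                  # time- and trace-direction gradients
    power = np.abs(hilbert(section, axis=0)) ** 2  # instantaneous power (envelope squared)
    w = power ** 2                                 # squared instantaneous power as weight
    # weighted, locally smoothed structure-tensor components
    txx = gaussian_filter(w * gx * gx, sigma)
    ttt = gaussian_filter(w * gt * gt, sigma)
    txt = gaussian_filter(w * gx * gt, sigma)
    # orientation of the dominant eigenvector of [[txx, txt], [txt, ttt]]
    return 0.5 * np.arctan2(2.0 * txt, txx - ttt)
```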

2020, Vol. 222 (3), pp. 1805-1823
Author(s): Yangkang Chen, Shaohuan Zu, Yufeng Wang, Xiaohong Chen

In seismic data processing, the median filter is usually applied along the structural direction of the data to attenuate erratic or spike-like noise. The performance of a structure-oriented median filter depends strongly on the accuracy of the local slope estimated from the noisy data. When the local slope contains significant error, which is usually the case for noisy data, a structure-oriented median filter still causes severe damage to useful energy. We propose a structure-oriented median filter that can effectively attenuate spike-like noise even when the local slope is not accurately estimated, which we call the structure-oriented space-varying median filter. It adaptively squeezes and stretches the window length of the median filter when applied in the locally flattened dimension of the input seismic data, in order to deal with the dipping events caused by inaccurate slope estimation. We detail the key differences among different types of median filters and demonstrate the principle of the structure-oriented space-varying median filter. We apply the method to remove the spike-like blending noise arising from simultaneous-source acquisition. Synthetic and real data examples show that the structure-oriented space-varying median filter significantly improves the signal-preserving performance for curving events in the seismic data. It can also be easily embedded into an iterative deblending procedure based on the shaping-regularization framework, where it helps obtain much improved deblending performance.
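The core idea, a median window whose length varies sample by sample along the locally flattened dimension, can be sketched in a few lines. The following is a hedged illustration, not the authors' implementation: `half_lengths` stands for whatever adaptive rule squeezes the window where the residual dip is large and stretches it where events are flat.

```python
import numpy as np

def space_varying_median(row, half_lengths):
    """Median filter whose half-window half_lengths[i] varies per sample."""
    out = np.empty(len(row), dtype=float)
    n = len(row)
    for i, h in enumerate(half_lengths):
        lo, hi = max(0, i - int(h)), min(n, i + int(h) + 1)
        out[i] = np.median(row[lo:hi])   # median over the local, adaptive window
    return out

# e.g., shrink the window where some measure of slope error is large
# (dip_error, hmin, hmax are hypothetical names for such a rule):
# half_lengths = np.clip((hmax * (1 - dip_error)).astype(int), hmin, hmax)
```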


Geophysics, 1987, Vol. 52 (12), pp. 1631-1638
Author(s): Rakesh Mithal, Emilio E. Vera

The plane-wave decomposition and slant stacking of point-source seismic data are not identical processes; they are, however, related. We have found that the algorithm for slant stacking can be used for plane-wave decomposition if we apply a weighting function (depending on frequency and offset, and including a π/4 phase shift) before slant stacking, and a p-dependent correction after the slant stacking. This procedure requires only a small extra effort to incorporate the geometrical spreading and phase shift not accounted for by the slant stacking. In this process we use the asymptotic approximation of the zeroth-order Bessel function, which reduces the number of computations significantly but is valid only for ωpx greater than about 2 or 3. Using this approximation, we have been able to obtain the correct plane-wave decomposition of expanding spread profile data for ray parameters as low as 0.03 s/km; for smaller p, the exact Bessel function should be used. We have performed model studies to compare plane-wave decomposition and slant stacking. Using a possible velocity model for the North Atlantic Transect (NAT) expanding spread profile (ESP 5), we computed synthetic seismograms at a 50 m spacing using the reflectivity method, and then computed the plane-wave decomposition and slant stacks of these seismograms. On comparing these with the exact τ-p seismograms for this model, we found that the waveforms, the frequency content, and the amplitudes were exactly reproduced in the plane-wave decomposition, but were significantly different in the slant stacks. We also computed the plane-wave decomposition and slant stacks of real data (NAT ESP 5). The results in this case show that the amplitudes of deep crustal arrivals are higher in the plane-wave decomposition than in the slant stacks, and therefore these arrivals can be identified much better in the plane-wave decomposition.
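A rough sketch of the procedure, with all constant factors (e.g., the 2/π of the asymptotic Bessel expansion) omitted and the asymptotic regime ωpx ≫ 1 assumed throughout: weight each trace by √(ω|x|) with a π/4 phase shift in the frequency domain, slant stack, then apply a p-dependent scale. Function names and the interpolation choice are ours.

```python
import numpy as np

def pwd_via_slant_stack(d, t, x, p_axis):
    """Approximate plane-wave decomposition of a point-source gather d(t, x)
    by weighted slant stacking (asymptotic-Bessel regime; constants omitted)."""
    nt = len(t)
    dt = t[1] - t[0]
    omega = 2 * np.pi * np.fft.rfftfreq(nt, dt)          # angular frequencies
    D = np.fft.rfft(d, axis=0)                           # (nfreq, nx)
    # pre-stack weight: sqrt(omega * |x|) with a pi/4 phase shift
    W = np.sqrt(np.outer(omega, np.abs(x))) * np.exp(1j * np.pi / 4)
    dw = np.fft.irfft(D * W, n=nt, axis=0)
    out = np.zeros((nt, len(p_axis)))
    for j, p in enumerate(p_axis):
        for k, xk in enumerate(x):
            # slant stack: pick the sample at t = tau + p * x on each trace
            out[:, j] += np.interp(t, t - p * xk, dw[:, k], left=0.0, right=0.0)
        out[:, j] /= np.sqrt(max(p, 1e-6))               # p-dependent correction
    return out
```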


Mathematics, 2021, Vol. 9 (9), pp. 936
Author(s): Jianli Shao, Xin Liu, Wenqing He

Imbalanced data arise in many classification problems, and classifying them poses a significant challenge in machine learning. Among the many available classifiers, the support vector machine (SVM) and its variants are popular thanks to their flexibility and interpretability. However, the performance of SVMs degrades when the data are imbalanced, a structure typical of multi-category classification problems. In this paper, we employ a data-adaptive SVM with scaled kernel functions to classify instances from a multi-class population. We propose a multi-class data-dependent kernel function for the SVM that accounts for class imbalance and the spatial association among instances, so that classification accuracy is enhanced. Simulation studies demonstrate the strong performance of the proposed method, and a real multi-class prostate cancer image dataset is employed as an illustration. Not only does the proposed method outperform the competitor methods in terms of commonly used accuracy measures such as the F-score and G-means, but it also detects more than 60% of the instances from the rare class in the real data, whereas the competitors detect fewer than 20% of the rare-class instances. The proposed method will benefit other scientific research fields, such as multiple region boundary detection.
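One common way to build such a data-dependent kernel is conformal scaling, K̃(x, x′) = D(x) K(x, x′) D(x′), with D enlarged near regions that need finer resolution (e.g., around rare-class support vectors). The sketch below illustrates that generic construction with sklearn's precomputed-kernel interface; it is not the authors' exact multi-class kernel, and `centers`, `tau`, and `gamma` are illustrative placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def conformal_scaled_kernel(X1, X2, centers, gamma=1.0, tau=1.0):
    """Data-dependent kernel K~(x, x') = D(x) * K(x, x') * D(x')."""
    def D(X):
        # magnification factor: larger near the chosen centers
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-tau * d2).sum(axis=1)
    # base RBF kernel
    K = np.exp(-gamma * ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    return D(X1)[:, None] * K * D(X2)[None, :]

# usage with the precomputed-kernel interface:
# clf = SVC(kernel="precomputed").fit(conformal_scaled_kernel(Xtr, Xtr, C), ytr)
# pred = clf.predict(conformal_scaled_kernel(Xte, Xtr, C))
```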


Geophysics, 2006, Vol. 71 (5), pp. U67-U76
Author(s): Robert J. Ferguson

The possibility of improving the regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with fewer operator artifacts than a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates the application to real data. The data are highly irregularly sampled along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
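In matrix form, the weighted, damped least-squares estimate is m = (Gᴴ W G + εI)⁻¹ Gᴴ W d, and the cost saving comes from keeping only a few diagonals of the Hessian Gᴴ W G. A dense toy sketch of both steps (the paper's extrapolation operator G is, of course, never formed densely in practice):

```python
import numpy as np

def damped_lsq(G, d, w, eps=1e-2, ndiag=None):
    """m = (G^H W G + eps*I)^(-1) G^H W d, with W = diag(w).
    If ndiag is set, keep only 2*ndiag + 1 diagonals of the Hessian."""
    H = G.conj().T @ (w[:, None] * G)          # Hessian G^H W G
    if ndiag is not None:
        i, j = np.indices(H.shape)
        H = np.where(np.abs(i - j) <= ndiag, H, 0.0)   # banded approximation
    rhs = G.conj().T @ (w * d)                 # G^H W d
    return np.linalg.solve(H + eps * np.eye(H.shape[0]), rhs)
```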


Author(s): Stefano Merler, Bruno Caprile, Cesare Furlanello

In this paper, we propose a regularization technique for AdaBoost. The method implements a bias-variance control strategy to avoid overfitting in classification tasks on noisy data. It is based on a notion of easy and hard training patterns that emerges from an analysis of the dynamical evolution of the AdaBoost weights. The procedure consists of sorting the training data points by a hardness measure and progressively eliminating the hardest, stopping at an automatically selected threshold. The effectiveness of the method is tested and discussed on synthetic as well as real data.
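A hedged sketch of the idea using scikit-learn: here hardness is proxied by how often a point is misclassified across boosting rounds (the paper instead derives it from the dynamics of the AdaBoost weights), and a fixed fraction is dropped rather than the automatically selected threshold.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def prune_hard_examples(X, y, n_rounds=50, drop_frac=0.1):
    """Rank training points by a hardness proxy and drop the hardest."""
    ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=n_rounds).fit(X, y)
    errs = np.zeros(len(y))
    for pred in ada.staged_predict(X):         # predictions after each round
        errs += (pred != y)                    # accumulate misclassifications
    keep = np.argsort(errs)[: int(len(y) * (1 - drop_frac))]  # easiest points
    return X[keep], y[keep]                    # retrain AdaBoost on this subset
```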


Geophysics, 2020, Vol. 85 (2), pp. V223-V232
Author(s): Zhicheng Geng, Xinming Wu, Sergey Fomel, Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. In its definition, the design of prediction operators tailored to seismic images and data is an important issue. We have developed a new formulation of the seislet transform based on the relative-time (RT) attribute, which uses the RT volume to construct multiscale prediction operators. With the new operators, the seislet transform is accelerated because distant traces can be predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces the computational cost and achieves an excellent sparse representation of the test data sets.
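Why distant traces can be predicted directly: the RT volume assigns every sample a relative geologic time, so a sample on one trace is predicted from the sample with the same RT value on any other trace, however far away, without chaining slope-based predictions trace by trace. A minimal 2D sketch, assuming RT increases monotonically with time on every trace; names are ours:

```python
import numpy as np

def predict_trace(data, rt, src, dst):
    """Predict trace `dst` from trace `src` of data[time, trace] by mapping
    samples along constant relative-time (RT) contours."""
    t = np.arange(data.shape[0], dtype=float)
    # time on the source trace having the same RT value as each (t, dst) sample
    t_src = np.interp(rt[:, dst], rt[:, src], t)
    # interpolate the source-trace amplitude at those times
    return np.interp(t_src, t, data[:, src])
```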


Geophysics, 2019, Vol. 84 (5), pp. C229-C237
Author(s): Shibo Xu, Alexey Stovas

Moveout approximations are commonly used in seismic data processing for velocity analysis, modeling, and time migration. The effect of anisotropy is very pronounced for converted waves when estimating physical and processing parameters from real data. To approximate the traveltime in an elastic orthorhombic (ORT) medium, we define an explicit rational-form approximation for the traveltime of the converted PS1-, PS2-, and S1S2-waves. To obtain expressions for the coefficients, a Taylor-series approximation is applied to the corresponding vertical slowness of the three pure-wave modes. By using the effective model parameters for the P-, S1-, and S2-waves, the coefficients in the converted-wave traveltime approximation can be represented by the anisotropy parameters defined in the elastic ORT model. The accuracy of the converted-wave traveltime for three ORT models is illustrated in numerical examples. The results show that, for the converted PS1- and PS2-waves, our rational-form approximation is very accurate regardless of the tested ORT model. For the converted S1S2-wave, owing to the existence of cusps, triplications, and shear singularities, the error is relatively large compared with that of the PS-waves.
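The abstract does not reproduce the coefficients, but the generic shape of a rational-form moveout approximation is easy to illustrate. In the sketch below, `A` and `B` are placeholders; in the paper they are expressed through the ORT anisotropy parameters via the Taylor expansion of the vertical slowness of each pure-wave mode.

```python
import numpy as np

def rational_moveout(x, t0, v, A, B):
    """Generic rational-form moveout:
    t^2(x) = t0^2 + x^2 / v^2 + A * x^4 / (1 + B * x^2).
    t0: zero-offset traveltime, v: moveout velocity, A/B: placeholder
    coefficients standing in for the paper's ORT-derived expressions."""
    return np.sqrt(t0**2 + x**2 / v**2 + A * x**4 / (1.0 + B * x**2))
```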


Energies, 2020, Vol. 13 (12), pp. 3074
Author(s): Shulin Pan, Ke Yan, Haiqiang Lan, José Badal, Ziyu Qin

Conventional sparse-spike deconvolution algorithms based on the iterative shrinkage-thresholding algorithm (ISTA) are widely used. Algorithms of this type depend on accurate seismic wavelets; when this requirement is not met, the processing is no longer optimal. Using a recurrent neural network (RNN) as the deep-learning method and applying backpropagation to ISTA, we have developed an RNN-like ISTA as an alternative sparse-spike deconvolution algorithm, which we test on both synthetic and real seismic data. The algorithm first builds a training dataset from existing well-log and seismic data and then extracts wavelets from those seismic data for further processing. Based on the extracted wavelets, the new method uses ISTA to calculate the reflection coefficients. Next, inspired by the backpropagation-through-time (BPTT) algorithm, backward error correction is performed on the wavelets using the errors between the calculated reflection coefficients and those corresponding to the training dataset. Finally, after backward correction over multiple iterations, a set of acceptable seismic wavelets is obtained, which is then used to deduce the sequence of reflection coefficients of the real data. The new algorithm improves the accuracy of the deconvolution results by reducing the effect of the wrong seismic wavelets given by conventional ISTA. In this study, we describe the mechanism and derivation of the proposed algorithm and verify its effectiveness through experiments on theoretical and real data.
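For reference, the forward ISTA pass being unrolled is the standard iteration for min_r ½‖Wr − d‖² + λ‖r‖₁, where W is convolution with the wavelet. A plain NumPy sketch of that pass only (the paper's contribution, the BPTT-style correction of the wavelet, would wrap this loop in an autograd framework):

```python
import numpy as np

def ista_deconv(trace, wavelet, lam=0.1, n_iter=200):
    """Forward ISTA pass for sparse-spike deconvolution."""
    n = len(trace)
    # convolution matrix: column k holds the wavelet centered at sample k
    W = np.array([np.convolve(np.eye(n)[k], wavelet, mode="same")
                  for k in range(n)]).T
    L = np.linalg.norm(W, 2) ** 2              # Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        g = r - (W.T @ (W @ r - trace)) / L    # gradient step on the data misfit
        r = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return r
```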


Geophysics, 2008, Vol. 73 (1), pp. S17-S26
Author(s): Daniel A. Rosales, Sergey Fomel, Biondo L. Biondi, Paul C. Sava

Wavefield-extrapolation methods can produce angle-domain common-image gathers (ADCIGs). To obtain ADCIGs for converted-wave seismic data, information about the image dip and the P-to-S velocity ratio must be included in the computation of angle gathers. These ADCIGs are a function of the half-aperture angle, i.e., the average of the incidence angle and the reflection angle. We have developed a method that exploits the robustness of computing 2D isotropic single-mode ADCIGs and incorporates both the converted-wave velocity ratio (γ = vP/vS) and the local image-dip field. It also maps the final converted-wave ADCIGs into two ADCIGs, one a function of the P-incidence angle and the other a function of the S-reflection angle. Results with both synthetic and real data demonstrate the practical application of converted-wave ADCIGs. The proposed approach is valid in any situation as long as the migration algorithm is based on wavefield downward continuation and the final prestack image is a function of the horizontal subsurface offset.
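The single-mode building block the method relies on can be sketched with the slant relation tan θ = −k_h/k_z applied to an offset-domain common-image gather I(z, h); the converted-wave flow additionally folds in γ and the local dip before this mapping. A crude nearest-neighbor version, for illustration only; names are ours:

```python
import numpy as np

def adcig_single_mode(odcig, dz, dh, angles_deg):
    """Angle gather from an offset-domain CIG I(z, h) via tan(theta) = -kh/kz."""
    nz, nh = odcig.shape
    kz = 2 * np.pi * np.fft.fftfreq(nz, dz)
    kh = 2 * np.pi * np.fft.fftfreq(nh, dh)
    F = np.fft.fft2(odcig)
    out = np.zeros((nz, len(angles_deg)))
    for j, a in enumerate(np.deg2rad(angles_deg)):
        kh_line = -kz * np.tan(a)              # line kh = -kz * tan(theta)
        # nearest kh bin for every kz (crude radial slice of the spectrum)
        idx = np.abs(kh[None, :] - kh_line[:, None]).argmin(axis=1)
        out[:, j] = np.fft.ifft(F[np.arange(nz), idx]).real
    return out
```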


Geophysics, 1996, Vol. 61 (6), pp. 1846-1858
Author(s): Claudio Bagaini, Umberto Spagnolini

Continuation to zero offset [better known as dip moveout (DMO)] is a standard tool for seismic data processing. In this paper, the concept of DMO is extended by introducing a set of operators: the continuation operators. These operators, which are implemented in integral form with a defined amplitude distribution, perform the mapping between common-shot or common-offset gathers for a given velocity model. Applying the shot-continuation operator to dip-independent velocity analysis allows a direct implementation in the acquisition domain by comparing real data with data continued in the shot domain. Shot and offset continuation allow the restoration of missing shots or missing offsets by using a velocity model provided by common-shot velocity analysis or another dip-independent velocity-analysis method.
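The velocity-analysis use is easy to sketch: continue a recorded shot gather toward a neighboring shot position for a range of trial velocities and keep the velocity that best matches the actually recorded neighbor. In the sketch below, `operator(gather, v)` is a stand-in for the integral continuation operator, which the sketch does not implement:

```python
import numpy as np

def shot_continuation_scan(shot_a, shot_b, operator, velocities):
    """Score each trial velocity by how well the continued gather matches
    the recorded neighboring shot; the best velocity maximizes correlation."""
    scores = []
    for v in velocities:
        cont = operator(shot_a, v)             # continue shot_a toward shot_b
        num = np.sum(cont * shot_b)
        den = np.sqrt(np.sum(cont ** 2) * np.sum(shot_b ** 2)) + 1e-12
        scores.append(num / den)               # normalized cross-correlation
    scores = np.asarray(scores)
    return velocities[int(scores.argmax())], scores
```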

