Improved eigenvalue-based coherence algorithm with dip scanning

Geophysics ◽  
2017 ◽  
Vol 82 (2) ◽  
pp. V95-V103 ◽  
Author(s):  
Binpeng Yan ◽  
Sanyi Yuan ◽  
Shangxu Wang ◽  
Yonglin OuYang ◽  
Tieyi Wang ◽  
...  

Detection and identification of subsurface anomalous structures are key objectives in seismic exploration. The coherence technique has been used successfully to identify geologic abnormalities and discontinuities, such as faults and unconformities. Starting from the classic third-generation eigenvalue-based coherence (C3) algorithm, we make several improvements and develop a new method of constructing the covariance matrix from the original and Hilbert-transformed seismic traces. The new covariance matrix concentrates the effective signal energy on the largest eigenvalue by decreasing all of the other eigenvalues. Compared with conventional coherence algorithms, ours has higher resolution and better noise immunity. We then combine the new eigenvalue-based algorithm with time-lag dip scanning to mitigate the dip effect and highlight discontinuities. Application to 2D synthetic data demonstrates that our coherence algorithm alleviates the low-valued artifacts caused by linear and curved dipping strata and clearly reveals the discontinuities. The coherence results on 3D field data likewise suppress noise, eliminate the influence of steeply dipping strata, and highlight small hidden faults. With its higher resolution and robustness to random noise, our strategy successfully maps the distribution of discontinuities.
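The construction described here, an analytic-trace covariance matrix whose largest eigenvalue captures the coherent signal energy, can be sketched compactly. The following is a minimal illustration, not the authors' exact implementation; the window length and trace geometry are arbitrary choices:

```python
# Sketch of an eigenstructure (C3-style) coherence attribute built from
# analytic traces: the covariance matrix is formed from the original
# traces and their Hilbert transforms, and coherence is the fraction of
# energy carried by the largest eigenvalue.
import numpy as np
from scipy.signal import hilbert

def eigen_coherence(traces, win=11):
    """traces: 2D array (n_samples, n_traces); win: vertical window length."""
    n_samples, n_traces = traces.shape
    analytic = hilbert(traces, axis=0)
    # Stack real and imaginary parts so the covariance matrix sees both
    # the original and the Hilbert-transformed data.
    data = np.concatenate([analytic.real, analytic.imag], axis=1)
    half = win // 2
    coh = np.ones(n_samples)
    for t in range(half, n_samples - half):
        d = data[t - half:t + half + 1, :]     # (win, 2 * n_traces)
        cov = d.T @ d                          # covariance matrix
        eigvals = np.linalg.eigvalsh(cov)      # ascending eigenvalues
        total = eigvals.sum()
        if total > 0:
            coh[t] = eigvals[-1] / total       # largest eigenvalue / sum
    return coh
```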

2020 ◽  
Vol 50 (2) ◽  
pp. 161-199
Author(s):  
Mohamed GOBASHY ◽  
Maha ABDELAZEEM ◽  
Mohamed ABDRABOU

The difficulty of unravelling tectonic structures can, in some cases, prevent an understanding of ore-body geometry, leading to mistakes in mineral exploration, mine planning, evaluation of ore deposits, and even mineral exploitation. For that reason, many geophysical techniques have been introduced to reveal the type, dimensions, and geometry of these structures, among them electric, self-potential, electromagnetic, magnetic, and gravity methods. A global meta-heuristic based on the Whale Optimization Algorithm (WOA) is used to estimate model parameters from magnetic anomalies due to a thin dike, a dipping dike, and a vertical fault-like/shear-zone geologic structure, all of which are commonly associated with mineralization. The algorithm was first applied to noise-free synthetic data and to data contaminated with three different levels of random noise, simulating natural and artificial anomaly disturbances. The good results obtained by inverting these synthetic examples demonstrate the validity and applicability of our algorithm. The method is then applied to real case studies of ore mineralization under different geologic conditions, with data taken from Canada, the United States, Sweden, Peru, India, and Australia. The results correlate well with previous interpretations of these field examples.
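As a rough illustration of how such a WOA inversion can be organized, the sketch below fits a simplified thin-sheet magnetic response. The forward model and its parameters (K, z, x0, theta) are generic stand-ins for illustration, not necessarily the paper's parameterization:

```python
# Minimal sketch of the Whale Optimization Algorithm (WOA) applied to
# fitting a magnetic anomaly.  The thin-dike forward model is a
# simplified illustrative form.
import numpy as np

def dike_anomaly(x, K, z, x0, theta):
    # Simplified 2D thin-sheet response (illustrative only).
    return K * (z * np.cos(theta) + (x - x0) * np.sin(theta)) / ((x - x0) ** 2 + z ** 2)

def woa_invert(x, observed, bounds, n_whales=30, n_iter=200, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                      # bounds: [(low, high), ...]
    pos = rng.uniform(lo, hi, size=(n_whales, len(lo)))

    def misfit(p):
        return np.mean((dike_anomaly(x, *p) - observed) ** 2)

    best = min(pos, key=misfit).copy()
    for it in range(n_iter):
        a = 2.0 * (1 - it / n_iter)                  # decreases linearly 2 -> 0
        for i in range(n_whales):
            r = rng.random(len(lo))
            A, C = 2 * a * r - a, 2 * rng.random(len(lo))
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):            # encircle the best whale
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                                # explore toward a random whale
                    rand = pos[rng.integers(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                                    # bubble-net spiral update
                l = rng.uniform(-1, 1)
                pos[i] = np.abs(best - pos[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lo, hi)
        best = min([best, *pos], key=misfit).copy()
    return best
```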


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. F41-F48 ◽  
Author(s):  
Yang Liu ◽  
Bingxiu Li

In seismic exploration, there are many sources of random noise, for example, scattering from a complex surface. Prediction filters (PFs) have been widely used for random noise attenuation, but they typically assume that the seismic signal is stationary, whereas seismic signals are fundamentally nonstationary. Stationary PFs fail in the presence of nonstationary events, even if the data are cut into overlapping windows ("patching"). We have developed an adaptive PF method based on streaming and orthogonalization for random noise attenuation in the t-x domain. Instead of using patching or regularization, the streaming orthogonal PF (SOPF) takes full advantage of the streaming method, which generates the signal estimate as each new noisy data value arrives. Streaming signal-and-noise orthogonalization further improves the signal-recovery ability of the SOPF, and the streaming characteristic makes the proposed method faster than iterative approaches. We tested the feasibility of the proposed method in attenuating random noise on two synthetic data sets, comparing it with f-x deconvolution and f-x regularized nonstationary autoregression. Field-data examples confirm that the t-x SOPF has a reasonable denoising ability in practice.
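The streaming idea, updating the filter as each sample arrives instead of solving a windowed inverse problem, can be illustrated with a normalized sample-by-sample recursion. This sketch uses an NLMS-style update as a stand-in; the authors' SOPF additionally applies streaming signal-and-noise orthogonalization:

```python
# Minimal sketch of a streaming prediction filter for denoising a single
# trace: coefficients are updated sample-by-sample as new data arrive,
# with no patching or iterative inversion.
import numpy as np

def streaming_pf(d, order=5, eps=1.0, mu=0.5):
    """d: noisy 1D signal; returns the streaming signal estimate."""
    n = len(d)
    a = np.zeros(order)                    # prediction-filter coefficients
    out = np.copy(d)
    for t in range(order, n):
        past = d[t - order:t][::-1]        # most recent samples first
        pred = a @ past                    # streaming prediction
        r = d[t] - pred                    # residual = estimated noise
        a += mu * r * past / (eps + past @ past)   # normalized update
        out[t] = pred
    return out
```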


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Lei Hao ◽  
Shuai Cao ◽  
Pengfei Zhou ◽  
Lei Chen ◽  
Yi Zhang ◽  
...  

To address the key problem that a large amount of noise in seismic data can easily induce false anomalies and interpretation errors in seismic exploration, the time-frequency spectrum subtraction (TF-SS) method is adopted to reduce random noise in seismic data. On this basis, the dominant-frequency information of the seismic data is calculated and used to optimize the filtering coefficients. To account for the different effective-signal durations of seismic data and voice data, the time-frequency spectrum selection method and the filtering coefficients are modified accordingly. In addition, simulation tests conducted at different signal-to-noise ratios demonstrate the effectiveness of TF-SS in removing random noise.
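A minimal sketch of spectrum subtraction in the time-frequency domain follows. The noise spectrum is assumed to be estimable from a signal-free segment, and `alpha`/`beta` stand in for the filtering coefficients the paper tunes using the dominant frequency:

```python
# Sketch of time-frequency spectrum subtraction (TF-SS): subtract a
# scaled noise magnitude spectrum from the STFT, floor the result, and
# resynthesize with the original phase.
import numpy as np
from scipy.signal import stft, istft

def tf_spectrum_subtraction(trace, fs, noise_seg, alpha=2.0, beta=0.01, nperseg=128):
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise_seg, fs=fs, nperseg=nperseg)
    noise_mag = np.mean(np.abs(N), axis=1, keepdims=True)  # average noise spectrum
    mag = np.abs(Z) - alpha * noise_mag                    # subtract scaled noise
    mag = np.maximum(mag, beta * np.abs(Z))                # spectral floor
    Z_clean = mag * np.exp(1j * np.angle(Z))               # keep original phase
    _, x = istft(Z_clean, fs=fs, nperseg=nperseg)
    return x[:len(trace)]
```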


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. V351-V368 ◽  
Author(s):  
Xiaojing Wang ◽  
Bihan Wen ◽  
Jianwei Ma

Weak-signal preservation is critical in seismic data denoising, especially in deep seismic exploration. Weak signals are hard to separate from random noise because they are less compressible or sparsifiable, although they are usually important for seismic data analysis. Conventional sparse-coding models exploit local sparsity by learning a union of bases, but they do not take into account any prior information about the internal correlation of patches. Motivated by the observation that data patches within a group are expected to share the same sparsity pattern in the transform domain, the so-called group sparsity, we have developed a novel transform learning with group sparsity (TLGS) method that jointly exploits local sparsity and internal patch self-similarity. Furthermore, for weak-signal preservation, we extended the TLGS method and developed transform learning with external reference. External clean or denoised patches are used as anchored references and grouped together with similar corrupted patches; each group is jointly modeled under a sparse transform that is adaptively learned, by learning a subset of the transform for each group. Our method achieves better denoising performance than existing denoising methods, in terms of signal-to-noise ratio and visual preservation of weak signals. Comparisons of experimental results on one synthetic data set and three field data sets against the f-x deconvolution method and the data-driven tight frame method are also provided.
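The group-sparsity constraint, one shared support for all patches in a group, is the heart of TLGS and can be sketched as follows, with a fixed DCT standing in for the adaptively learned transform and patch matching reduced to its simplest form:

```python
# Sketch of the group-sparsity idea: similar patches are taken to a
# transform domain and thresholded with a *shared* support, so every
# patch in the group keeps the same sparsity pattern.
import numpy as np
from scipy.fft import dct, idct

def denoise_group(patches, keep=8):
    """patches: (n_patches, patch_len) group of similar patches."""
    coefs = dct(patches, axis=1, norm='ortho')
    energy = np.sum(coefs ** 2, axis=0)       # group energy per coefficient
    support = np.argsort(energy)[-keep:]      # shared sparsity pattern
    sparse = np.zeros_like(coefs)
    sparse[:, support] = coefs[:, support]    # same support for all patches
    return idct(sparse, axis=1, norm='ortho')
```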


2021 ◽  
Vol 13 (15) ◽  
pp. 2967
Author(s):  
Nicola Acito ◽  
Marco Diani ◽  
Gregorio Procissi ◽  
Giovanni Corsini

Atmospheric compensation (AC) allows the retrieval of the reflectance from the measured at-sensor radiance and is a fundamental and critical task for the quantitative exploitation of hyperspectral data. Recently, a learning-based (LB) approach, named LBAC, was proposed for the AC of airborne hyperspectral data in the visible and near-infrared (VNIR) spectral range. LBAC makes use of a parametric regression function whose parameters are learned by a strategy based on synthetic data that accounts for (1) a physics-based model for the radiative transfer, (2) the variability of the surface reflectance spectra, and (3) the effects of random noise and spectral miscalibration errors. In this work we extend LBAC with respect to two different aspects: (1) the platform for data acquisition and (2) the spectral range covered by the sensor. Particularly, we propose the extension of LBAC to spaceborne hyperspectral sensors operating in the VNIR and short-wave infrared (SWIR) portions of the electromagnetic spectrum. We specifically refer to the sensor of the PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission, the recent Earth Observation mission of the Italian Space Agency, which offers a great opportunity to improve knowledge of the scientific and commercial applications of spaceborne hyperspectral data. In addition, we introduce a curve-fitting procedure for estimating the columnar water vapor content of the atmosphere that directly exploits the reflectance data provided by LBAC. Results obtained on four different PRISMA hyperspectral images are presented and discussed.
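The core of the learning-based scheme is a regression from at-sensor radiance to reflectance fitted on synthetic pairs. A minimal per-band polynomial sketch is given below; the polynomial form and the synthetic training set are assumptions for illustration, not the paper's actual regression function:

```python
# Minimal sketch of learning-based atmospheric compensation: fit a
# per-band regression on synthetic (radiance, reflectance) pairs from a
# radiative-transfer model, then apply it to measured radiance.
import numpy as np

def fit_ac_regression(radiance_train, reflectance_train, degree=2):
    """Both inputs: (n_samples, n_bands).  Returns per-band coefficients."""
    n_bands = radiance_train.shape[1]
    return [np.polyfit(radiance_train[:, b], reflectance_train[:, b], degree)
            for b in range(n_bands)]

def apply_ac_regression(coeffs, radiance):
    """radiance: (n_pixels, n_bands) -> estimated reflectance."""
    return np.stack([np.polyval(c, radiance[:, b])
                     for b, c in enumerate(coeffs)], axis=1)
```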


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces by their envelopes and instantaneous phases, obtained via the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope while preserving the phase information. Several tests are performed to investigate the behavior of the method for resolution improvement and noise suppression. Applications to both 1D and 2D synthetic data show that the method reduces the amplitudes and temporal widths of the side lobes of the input wavelets; hence, the spectral bandwidth of the input seismic data is enhanced, improving the signal-to-noise ratio. Bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance, allowing easier data interpretation. We recommend applying this simple signal-processing step for signal enhancement prior to interpretation, especially for single-channel and low-fold seismic data.
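A compact sketch of this envelope/phase processing follows; the low-pass cutoff and attenuation factor are illustrative choices, not values from the paper:

```python
# Sketch of complex-trace sharpening: split the trace into envelope and
# instantaneous phase via the Hilbert transform, attenuate the
# low-frequency content of the envelope, and rebuild with the phase
# untouched.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def sharpen_trace(trace, fs, cutoff=10.0, atten=0.7):
    analytic = hilbert(trace)
    env = np.abs(analytic)                 # envelope
    phase = np.angle(analytic)             # instantaneous phase
    b, a = butter(4, cutoff / (fs / 2), btype='low')
    env_lf = filtfilt(b, a, env)           # low-frequency part of the envelope
    env_mod = env - atten * env_lf         # attenuate (not remove) low frequencies
    return env_mod * np.cos(phase)         # rebuild with phase preserved
```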


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station, which provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by velocity-model errors and additive random noise containing a significant number of outliers. The waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
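The subtraction-and-spread step lends itself to a direct array sketch; the traveltime grids are assumed to have been precomputed from the per-station simulations:

```python
# Sketch of the location step: subtract each station's stored traveltime
# field from its arrival time to get shifted, time-reversed fields; the
# grid point where those fields agree best (minimum spread) is the
# source, and their median there estimates the origin time.
import numpy as np

def locate_event(arrivals, tt_fields):
    """arrivals: (n_sta,); tt_fields: (n_sta, nx, ny, nz) traveltime grids."""
    origin_fields = arrivals[:, None, None, None] - tt_fields  # time-reversed fields
    spread = np.std(origin_fields, axis=0)         # dispersion across stations
    idx = np.unravel_index(np.argmin(spread), spread.shape)
    t0 = np.median(origin_fields[(slice(None),) + idx])        # origin-time estimate
    return idx, t0
```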


2019 ◽  
Vol 10 (1) ◽  
pp. 73 ◽  
Author(s):  
Einar Agletdinov ◽  
Dmitry Merson ◽  
Alexei Vinogradov

A novel methodology is proposed to enhance the reliability of detecting low-amplitude transients in a noisy time series. Such time series often arise in practical situations where sensors are used for condition monitoring of mechanical systems, integrity assessment of industrial facilities, and/or microseismicity studies. In all of these cases, early and reliable detection of possible damage is of paramount importance and is limited in practice by the detectability of transient signals against a background of random noise. The proposed triggering algorithm is based on a logarithmic derivative of the power spectral density function. It was tested on synthetic data mimicking an ultrasonic acoustic emission signal recorded continuously at different signal-to-noise ratios (SNRs). Comparative tests demonstrate considerable advantages of the proposed method over the established fixed-amplitude-threshold and STA/LTA (Short Time Average / Long Time Average) techniques.
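One plausible reading of the detector, a threshold on the time derivative of the log PSD in a sliding window, is sketched below; the window length, hop, and threshold are illustrative, and the paper's exact formulation may differ:

```python
# Sketch of a log-derivative PSD trigger: track the summed power
# spectral density in a sliding window and fire when its logarithmic
# derivative jumps above a threshold.
import numpy as np
from scipy.signal import welch

def log_derivative_trigger(x, fs, win=1024, hop=256, thresh=2.0):
    powers = []
    for start in range(0, len(x) - win, hop):
        f, pxx = welch(x[start:start + win], fs=fs, nperseg=win // 4)
        powers.append(np.sum(pxx))
    logp = np.log(np.asarray(powers))
    dlogp = np.diff(logp)                      # logarithmic derivative over time
    return np.where(dlogp > thresh)[0] * hop   # sample indices of triggers
```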


Geophysics ◽  
1988 ◽  
Vol 53 (5) ◽  
pp. 707-720 ◽  
Author(s):  
Dave Deming ◽  
David S. Chapman

The present-day temperature field in a sedimentary basin is a constraint on the maturation of hydrocarbons; this temperature field may be estimated by inverting corrected bottom-hole temperature (BHT) data. Thirty-two BHTs from the Pineview oil field are corrected for drilling disturbances by a Horner plot and inverted for the geothermal gradient in nine formations. Both least-squares [Formula: see text] norm and uniform [Formula: see text] norm inversions are used; the [Formula: see text] norm is found to be more robust for the Pineview data. The inversion removes random error from the corrected BHT data by partitioning scatter between noise associated with the BHT measurement and correction processes and local variations in the geothermal gradient. Three hundred thermal-conductivity and density measurements on drill cuttings are used, together with formation density logs, to estimate the in situ thermal conductivity of six of the nine formations. The thermal-conductivity estimates are used in a finite-element model to evaluate 2D conductive heat refraction and, through a series of inversions of synthetic data, to assess the influence of systematic and random noise on the inversion results. A temperature-anomaly map illustrates that a temperature field calculated by a forward application of the inversion results has less error than any single corrected BHT. Mean background heat flow at Pineview is found to be [Formula: see text] (±13 percent) but is locally higher [Formula: see text] due to heat refraction. The BHT inversion (1) is limited by systematic noise or model error, (2) achieves excellent resolution of the temperature field even though resolution of individual formation gradients may be poor, and (3) generally cannot detect lateral variations in heat flow unless the thermal-conductivity structure is constrained.
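The Horner correction itself is a one-line regression: repeated BHT measurements at a given depth are extrapolated to infinite shut-in time. A minimal sketch, assuming the measurements and the mud-circulation time are known:

```python
# Sketch of a Horner-plot BHT correction: regress temperature against
# ln((tc + dt) / dt), where tc is mud-circulation time and dt is
# shut-in time.  As dt -> infinity the regressor -> 0, so the intercept
# is the undisturbed formation temperature.
import numpy as np

def horner_correct(bht, dt_shutin, tc):
    """bht: measured temperatures; dt_shutin: shut-in times; tc: circulation time."""
    x = np.log((tc + dt_shutin) / dt_shutin)
    slope, intercept = np.polyfit(x, bht, 1)
    return intercept      # equilibrium temperature at infinite shut-in time
```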


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. KS127-KS138 ◽  
Author(s):  
Yujin Liu ◽  
Yue Ma ◽  
Yi Luo

Locating microseismic sources using the seismic energy emitted during hydraulic fracturing is essential for choosing optimal fracking parameters and maximizing the fracturing effects in hydrocarbon exploitation. Interferometric crosscorrelation migration (ICCM) and zero-lag autocorrelation of time-reversal imaging (ATRI) are two important passive seismic source-locating approaches that were proposed independently and appear to be substantially different. We prove that the two methods are theoretically identical and produce very similar images. Moreover, we develop a cross-coherence variant that normalizes by the spectral amplitude of each trace, rather than using crosscorrelation or deconvolution, to improve the ICCM and ATRI methods. This normalization enhances the spatial resolution of the source images and is particularly effective in the presence of highly variable and strong additive random noise. Synthetic and field-data tests verify the equivalence of the conventional ICCM and ATRI, as well as the equivalence of their improved versions. Compared with crosscorrelation- and deconvolution-based source-locating methods, our approach shows high resolution and antinoise capability in numerical tests on synthetic data with single and multiple sources, as well as on field data.
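The spectral normalization that distinguishes cross-coherence from plain crosscorrelation fits in one function; the stabilization constant is an illustrative detail:

```python
# Sketch of cross-coherence between two traces: the cross-spectrum is
# normalized by the amplitude spectrum of each trace, which whitens
# strong, variable noise relative to plain crosscorrelation.
import numpy as np

def cross_coherence(x1, x2, eps=1e-8):
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    num = X1 * np.conj(X2)                          # cross-spectrum
    den = (np.abs(X1) + eps) * (np.abs(X2) + eps)   # per-trace normalization
    return np.fft.irfft(num / den, n=len(x1))
```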

