Application of wavelet threshold denoising method in gravity data processing

Author(s):  
Ping Yan ◽  
Yangang Wu


2013 ◽  
Vol 341-342 ◽  
pp. 999-1004
Author(s):  
Wei Zhou ◽  
Ti Jing Cai

For low-pass filtering in airborne gravity data processing, elliptic low-pass digital filters were designed, and the influence on filtering of the elliptic filter order, upper passband cutoff frequency, maximum passband attenuation and minimum stopband attenuation was studied. The results show that the upper passband cutoff frequency has the greatest effect on filtering among the four parameters; the filter order and the maximum passband attenuation have some influence, but instability increases with larger orders; and the effect of the minimum stopband attenuation becomes negligible once it reaches a certain value, so the accuracy of the evaluation indicators must also be considered when determining the optimal values. The standard deviations of the discrepancies between the gravity anomaly filtered with the optimal elliptic parameters and the commercial software result are within 1 mGal, and the internal accordance accuracy along four survey lines after level adjustment is about 0.620 mGal.
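
A minimal sketch of the kind of elliptic low-pass filter discussed above, built with SciPy; the order, cutoff and attenuation values below are illustrative assumptions, not the study's optimal parameters.

import numpy as np
from scipy.signal import ellip, filtfilt

fs = 2.0      # sample rate of the along-track gravity series, Hz (assumed)
order = 6     # elliptic filter order
rp = 0.1      # maximum passband attenuation (ripple), dB
rs = 60.0     # minimum stopband attenuation, dB
fc = 0.01     # upper passband (cutoff) frequency, Hz (assumed)

# design the elliptic low-pass filter
b, a = ellip(order, rp, rs, fc, btype="low", fs=fs)

# zero-phase filtering avoids phase distortion of the gravity anomaly profile
raw_gravity = np.random.randn(5000)   # placeholder for one survey line
filtered = filtfilt(b, a, raw_gravity)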


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess-mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix that arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we develop an algorithm that greatly reduces the computational complexity and RAM needed to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is fully defined by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix. From the first column of the BCCB matrix, its eigenvalues can be calculated using the 2D fast Fourier transform (2D FFT), which can then be used to readily compute the matrix-vector product of the forward modeling in the fast equivalent-layer technique. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and noise amplification compared with those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique used [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method to process a large data set composed of [Formula: see text] observations.
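
A minimal sketch of the BTTB/BCCB idea described above, assuming regular observation and source grids of the same shape: the first column of the symmetric BTTB sensitivity matrix, reshaped to the grid, is embedded into a BCCB matrix whose eigenvalues come from a 2D FFT, so the forward matrix-vector product costs O(N log N) instead of O(N^2). The grid handling below is illustrative, not the paper's exact algorithm.

import numpy as np

def bttb_matvec(first_col_2d, p_2d):
    """Multiply the symmetric BTTB matrix defined by its first column
    (reshaped to the nx-by-ny grid) with the source distribution p on the
    same grid, via embedding into a BCCB matrix and 2D FFTs."""
    nx, ny = first_col_2d.shape
    # build the generating array of the (2*nx)-by-(2*ny) BCCB embedding
    c = np.zeros((2 * nx, 2 * ny))
    c[:nx, :ny] = first_col_2d
    c[nx + 1:, :ny] = first_col_2d[:0:-1, :]          # mirror the block index
    c[:nx, ny + 1:] = first_col_2d[:, :0:-1]          # mirror within blocks
    c[nx + 1:, ny + 1:] = first_col_2d[:0:-1, :0:-1]
    eigvals = np.fft.fft2(c)                          # eigenvalues of the BCCB matrix
    p_pad = np.zeros((2 * nx, 2 * ny))
    p_pad[:nx, :ny] = p_2d                            # zero-pad the source grid
    full = np.real(np.fft.ifft2(eigvals * np.fft.fft2(p_pad)))
    return full[:nx, :ny]                             # keep only the BTTB part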


Geophysics ◽  
1981 ◽  
Vol 46 (8) ◽  
pp. 1088-1099 ◽  
Author(s):  
Robert B. Rice ◽  
Samuel J. Allen ◽  
O. James Gant ◽  
Robert N. Hodgson ◽  
Don E. Larson ◽  
...  

Advances in exploration geophysics have continued apace during the last six years. We have entered a new era of exploration maturity which will be characterized by the extension of our technologies to their ultimate limits of precision. In gravity and magnetics, new inertial navigation systems permit the very rapid helicopter‐supported land acquisition of precise surface gravity data which is cost‐effective in regions of severe topography. Considerable effort is being expended to obtain airborne gravity data via helicopter which is of exploration quality. Significant progress has also been made in processing and interpreting potential field data. The goal of deriving the maximum amount of accurate subsurface information from seismic data has led to much more densely sampled and precise 2- and 3-D land data acquisition techniques. Land surveying accuracy has been greatly improved. The number of individually recorded detector channels has been increased dramatically (up to 1024) in order to approximate much more accurately a point‐source, point‐detector system. Much more powerful compressional‐wave vibrators can now maintain full force while sweeping up or down from 5 Hz to over 200 Hz. In marine surveying, new streamer cables and shipboard instrumentation permit the recording and limited processing of 96 to 480 channels. Improvements have also been made in marine sources and arrays. The most important developments in seismic data processing—wave‐equation based imaging and inversion methods—may be the forerunners of a totally new processing methodology. Wave‐equation methods have been formulated for migration before and after stack, multiples suppression, datum and replacement statics, velocity estimation, and seismic inversion. Inversion techniques which provide detailed acoustic‐impedance or velocity estimates have found widespread commercial application. Wavelet processing has greatly expanded our stratigraphic analysis capabilities. Much more sophisticated 1-, 2-, and 3-D modeling techniques are being used effectively to guide data acquisition and processing, as direct interpretation aids, and to teach basic interpretation concepts. Some systems can now handle vertical and lateral velocity changes, inelastic attenuation, curved reflection horizons, transitional boundaries, time‐variant waveforms, ghosting, multiples, and array‐response effects. Improved seismic display formats and the extensive use of color have been valuable in data processing, modeling, and interpretation. Stratigraphic interpretation has evolved into three major categories: (1) macrostratigraphy, where regional and basinal depositional patterns are analyzed to describe the broad geologic depositional environment; (2) qualitative stratigraphy, where specific rock units and their properties are analyzed qualitatively to delineate lithology, porosity, structural setting, and areal extent and shape; and (3) quantitative stratigraphy, where anomalies are mapped at a specific facies level to define net porosity‐feet distribution, gas‐fluid contacts, and probable pore fill. In essence, what began as direct hydrocarbon‐indicator technology applicable primarily to Upper Tertiary clastics has now matured to utility in virtually every geologic province. Considerable effort has been expended on the direct generation and recording of shear waves in an attempt to obtain more information about stratigraphy, porosity, and oil and gas saturation. 
Seismic service companies now offer shear‐wave prospecting using vibrator, horizontal‐impact, or explosive sources. Well logging has seen the acceleration of computerization. Wellsite tape recorders and minicomputers with relatively simple interpretation algorithms are routinely available. More sophisticated computerized interpretation methods are offered as a service at data processing centers.


2020 ◽  
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira ◽  
Valéria C. F. Barbosa

2013 ◽  
pp. 122-174 ◽  
Author(s):  
William J. Hinze ◽  
Ralph R. B. von Frese ◽  
Afif H. Saad

2019 ◽  
Vol 11 (15) ◽  
pp. 1758 ◽  
Author(s):  
Qianxin Wang ◽  
Chao Hu ◽  
Kefei Zhang

The accuracy of ultra-rapid orbits is a key parameter for the performance of GNSS (Global Navigation Satellite System) real-time or near-real-time precise positioning applications. The quality of the current BeiDou Navigation Satellite System (BDS) ultra-rapid orbits is lower than that of GPS, especially for the new-generation BDS-3 satellites, because the number of available ground tracking stations is limited, the geographic distribution of these stations is poor, and the data processing strategies adopted are not optimal. In this study, improved data processing strategies for the generation of ultra-rapid orbits of BDS-2/BDS-3 satellites are investigated, covering both the observed and predicted parts of the orbit. First, the predicted clock offsets are taken as constraints in the estimation process to reduce the number of unknown parameters and improve the accuracy of the orbit parameter estimates. To obtain more accurate predicted clock offsets for the BDS orbit determination, a denoising method (the Tikhonov regularization algorithm), inter-satellite correlation, and the partial least squares method are all incorporated into the clock offset prediction model. Then, the Akaike information criterion (AIC) is used to determine the arc length in the estimation models, taking into account the optimal arc length for estimating the initial orbit states. Finally, a number of experiments were conducted to evaluate the performance of the ultra-rapid orbits resulting from the proposed methods. The results showed that: (1) compared with traditional models, the accuracy improvements of the predicted clock offsets from the proposed methods were 40.5% and 26.1% for BDS-2 and BDS-3, respectively; (2) the observed part of the orbits can be improved by 9.2% and 5.0% for BDS-2 and BDS-3, respectively, by using the predicted clock offsets as constraints; and (3) the accuracy of the predicted part of the orbits showed a high correlation with the AIC value, and the accuracy of the predicted orbits could be improved by up to 82.2%. These results suggest that the approaches proposed in this study can significantly enhance the accuracy of the ultra-rapid orbits of BDS-2/BDS-3 satellites.
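
A minimal sketch of the Tikhonov-regularized fit underlying the clock-offset prediction step, using assumed data and a simple quadratic clock model; it is not the paper's full prediction model, which also incorporates inter-satellite correlation and partial least squares.

import numpy as np

def tikhonov_fit(A, y, lam):
    """Solve min ||A x - y||^2 + lam ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# illustrative epochs (hours) and noisy clock offsets (seconds)
t = np.arange(0.0, 24.0, 0.5)
offsets = 1e-4 + 2e-9 * t + 1e-11 * np.random.randn(t.size)

# quadratic polynomial, a common satellite clock model
A = np.vstack([np.ones_like(t), t, t**2]).T
coeffs = tikhonov_fit(A, offsets, lam=1e-3)

# predict the next 6 hours, to be used as constraints in the orbit estimation
t_pred = np.arange(24.0, 30.0, 0.5)
predicted = np.vstack([np.ones_like(t_pred), t_pred, t_pred**2]).T @ coeffs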


Author(s):  
Yanxue Wang ◽  
Jiawei Xiang ◽  
Jiang Zhansi ◽  
Yang Lianfa ◽  
Zhengjia He

Vibration signals are usually affected by noise, which is in turn related to the measurement and data processing procedures. This paper presents a new subband adaptive denoising method for detecting impulsive signatures, based on the minimum description length principle with an improved normalized maximum likelihood density model. The threshold of the proposed denoising method is determined automatically, without the need to estimate the noise variance. The effectiveness of the proposed method over the VisuShrink, BayesShrink and minimum description length denoising methods is demonstrated through simulations and practical applications.
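
A minimal sketch of MDL-style subband thresholding in the spirit of the method above (not the authors' exact normalized-maximum-likelihood formulation): in each wavelet subband the number of retained coefficients is chosen by minimizing a two-part description-length cost, so no explicit noise-variance estimate is needed. The wavelet, decomposition level and cost constants are assumptions.

import numpy as np
import pywt

def mdl_threshold_subband(c):
    """Keep the k largest-magnitude coefficients of one subband, where k
    minimizes a two-part MDL code length; zero the rest."""
    n = c.size
    order = np.argsort(np.abs(c))[::-1]       # indices by decreasing magnitude
    energy = np.cumsum(c[order] ** 2)         # retained energy for k = 1..n
    total = energy[-1]
    best_k, best_cost = 0, 0.5 * n * np.log(total / n + 1e-30)
    for k in range(1, n):
        resid = total - energy[k - 1]         # residual energy if k coefficients are kept
        cost = 1.5 * k * np.log(n) + 0.5 * n * np.log(resid / n + 1e-30)
        if cost < best_cost:
            best_k, best_cost = k, cost
    kept = np.zeros_like(c)
    kept[order[:best_k]] = c[order[:best_k]]
    return kept

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]] + [mdl_threshold_subband(d) for d in coeffs[1:]]
    return pywt.waverec(out, wavelet)

# illustrative use on a synthetic impulsive signature buried in noise
t = np.linspace(0.0, 1.0, 2048)
noisy = np.sin(2 * np.pi * 5 * t) + 3.0 * (np.abs(t - 0.5) < 0.002) + 0.3 * np.random.randn(t.size)
clean = denoise(noisy)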


2021 ◽  
Vol 16 (2) ◽  
pp. 303-311
Author(s):  
Cheng Le

Computer technology and sensor technology can be combined to monitor the concentration of heavy metals in soil, which helps to detect heavy metal pollution in time. First, nanotechnology, electrode polarization and the advantages of gold-nanoparticle-modified electrodes are studied, and the design method of the nano-electrode array is further analyzed. The internal parameters of the three-electrode equivalent circuit are also studied, and a model of the circuit is derived. On this basis, a heavy metal monitoring circuit based on the nano-electrode array sensor is designed. For the data processing of the signals monitored with this circuit, wavelet-domain denoising technology is studied. In view of the shortcomings of the general hard threshold in practical applications, the threshold function is improved to better control the degree of denoising. In the experiment, a gold-nanoparticle-modified mercury electrode is used as the working electrode. Following the principle that the deposition time is inversely proportional to the detection current, 0.01 mol/L HCl is selected as the solution environment, with pH = 4 and a deposition time of 4 min. The results show that, for the same kind of ions, as the concentration of the ions to be measured increases, the scanning potential range remains unchanged while the peak current increases significantly, so metal ions can be effectively identified from the potential corresponding to the peak value. In the data processing of the detection circuit, the improved signal denoising method is compared with the default-threshold wavelet-domain denoising technique. The results show that the improved wavelet-domain denoising method yields a smaller signal error, and its denoising effect on heavy metal detection is evident.
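
A minimal sketch of an improved threshold function of the kind alluded to above, i.e. a compromise between hard and soft thresholding; the functional form and the parameter alpha are assumptions, not the paper's published formula.

import numpy as np

def improved_threshold(w, thr, alpha=0.5):
    """Shrink wavelet coefficients w: alpha = 0 reproduces hard thresholding,
    alpha = 1 reproduces soft thresholding, and intermediate values trade off
    bias against residual noise."""
    out = np.zeros_like(w)
    keep = np.abs(w) > thr
    out[keep] = np.sign(w[keep]) * (np.abs(w[keep]) - alpha * thr)
    return out

# illustrative use on one detail subband of a stripping-voltammetry signal
coeffs = np.array([0.02, -0.5, 1.3, 0.04, -0.9, 0.01])
print(improved_threshold(coeffs, thr=0.1))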


2020 ◽  
Vol 222 (2) ◽  
pp. 1224-1235
Author(s):  
Yang Yang ◽  
Chunyu Liu ◽  
Charles A Langston

SUMMARY Obtaining reliable empirical Green's functions (EGFs) from ambient noise by seismic interferometry requires homogeneously distributed noise sources. However, it is difficult to attain this condition since ambient noise data usually contain highly correlated signals from earthquakes or other transient sources related to human activities. Removing these transient signals is one of the most essential steps in the whole data processing flow to obtain EGFs. We propose to use a denoising method based on the continuous wavelet transform to achieve this goal. The noise level is estimated in the wavelet domain for each scale by determining the 99 per cent confidence level of the empirical probability density function of the noise wavelet coefficients. The correlated signals are then removed by an efficient soft thresholding method. The same denoising algorithm is also applied to remove the noise in the final stacked cross-correlogram. A complete data processing workflow is provided, with the overall procedure divided into four stages: (1) single-station data preparation, (2) removal of earthquakes and other transient signals in the seismic record, (3) spectrum whitening, cross-correlation and temporal stacking and (4) removal of the noise in the stacked cross-correlogram to deliver the final EGF. The whole process is automated to make it accessible for large data sets. Synthetic data constructed from a recorded earthquake and recorded ambient noise are used to test the denoising method. We then apply the new processing workflow to data recorded by the USArray Transportable Array stations near the New Madrid Seismic Zone, where many seismic events and transient signals are observed. We compare the EGFs calculated from our workflow with those from the commonly used time-domain normalization method, and our results show improved signal-to-noise ratios. The new workflow can deliver reliable EGFs for further studies.
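
A minimal sketch of the per-scale CWT thresholding step described above, under assumed choices of wavelet, scales and noise window: the noise level at each scale is taken as the 99th percentile of the absolute wavelet coefficients of a noise-only window, and soft thresholding is applied to the record's coefficients. The inverse CWT needed to return to the time domain, and the rest of the four-stage workflow, are omitted.

import numpy as np
import pywt

def soft(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def threshold_cwt(record, noise_window, scales=np.arange(1, 64), wavelet="morl"):
    """Return the soft-thresholded CWT coefficients of `record`, with a
    per-scale threshold estimated from a noise-only window."""
    coefs, _ = pywt.cwt(record, scales, wavelet)           # shape (n_scales, n_samples)
    noise_coefs, _ = pywt.cwt(noise_window, scales, wavelet)
    out = np.empty_like(coefs)
    for i in range(coefs.shape[0]):
        thr = np.percentile(np.abs(noise_coefs[i]), 99.0)  # per-scale noise level
        out[i] = soft(coefs[i], thr)
    return out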

