Data Mining and Knowledge Discovery Based on Denoising Algorithms in Geology Exploration

2013 ◽  
Vol 310 ◽  
pp. 640-643
Author(s):  
Xue Hao ◽  
Lin Ren ◽  
Na Li ◽  
Zhi Cheng Huang

Geological exploration produces massive volumes of data, and it is vital to extract useful information or knowledge from them. This paper analyzes seismic data with the multi-channel Wiener filter algorithm and a wavelet denoising method based on neighboring coefficients. Given the velocity of a reflection event, the multi-channel Wiener filter exploits the resemblance of the reflection signal across seismic traces, effectively enhancing reflection events and suppressing random noise. The wavelet denoising method, by contrast, requires no prior assumptions. Numerical simulations of both algorithms are provided to demonstrate their effectiveness.
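The neighboring-coefficient shrinkage idea can be sketched as follows (a numpy-only illustration using a one-level Haar transform and a known noise level; the function names and threshold choice are our assumptions, not the authors' implementation):

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar transform: approximation and detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    # Inverse of the one-level Haar transform above.
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def neigh_coeff_denoise(x, sigma):
    # Neighboring-coefficient shrinkage: each detail coefficient is
    # shrunk according to the energy of itself and its two neighbors,
    # so isolated noise spikes are suppressed while clustered signal
    # coefficients survive. sigma is assumed known here.
    a, d = haar_dwt(x)
    lam2 = 2.0 * sigma**2 * np.log(d.size)   # universal-style threshold
    pad = np.pad(d, 1)                       # zero-pad the ends
    s2 = pad[:-2]**2 + d**2 + pad[2:]**2     # neighborhood energy
    shrink = np.maximum(0.0, 1.0 - lam2 / np.maximum(s2, 1e-12))
    return haar_idwt(a, d * shrink)

# Demo: noisy sine, compare RMS error before/after denoising.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
sigma = 0.3
noisy = clean + sigma * rng.normal(size=t.size)
denoised = neigh_coeff_denoise(noisy, sigma)
err_before = np.sqrt(np.mean((noisy - clean) ** 2))
err_after = np.sqrt(np.mean((denoised - clean) ** 2))
```

A full implementation would apply the same shrinkage at several decomposition levels; a single Haar level is used here only to keep the sketch short.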

2014 ◽  
Vol 889-890 ◽  
pp. 766-769
Author(s):  
Bo Huang ◽  
Peng Jiao Sun ◽  
Xiao Man Wang

This paper chiefly investigates the application of wavelets to data detection in dynamic testing: the non-stationary random noise produced during data detection is eliminated by wavelet denoising, and the optimal wavelet and optimal decomposition scale are determined by analyzing the dynamic testing data. Based on the practical requirements of the dynamic testing system, the data are reconstructed accurately using an FIR filter with a biorthogonal wavelet, and the method shows a favorable effect.


2012 ◽  
Vol 198-199 ◽  
pp. 1501-1505
Author(s):  
Xue Hao ◽  
Na Li ◽  
Lin Ren

Noise reduction or cancellation is important for obtaining clear and useful signals. This paper deals with the implementation of the multi-channel Wiener filter algorithm for noise suppression in seismic data. Given the velocity of a reflection event, the algorithm exploits the resemblance of the reflection signal across seismic traces, effectively enhancing reflection events and suppressing random noise. The algorithm is applied to CDP gathers, and simulations show that the method is effective.
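The core idea — exploiting the resemblance of the reflection signal across traces once its moveout is known — can be illustrated with a simple align-and-stack sketch (numpy-only; a plain average rather than a true multi-channel Wiener filter, and the signal, shifts, and noise level are our illustrative choices):

```python
import numpy as np

def align_and_stack(traces, shifts):
    # Shift each trace so the reflection event lines up, then average:
    # the coherent signal adds in phase while zero-mean random noise
    # is reduced by roughly 1/sqrt(n_traces).
    aligned = np.stack([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
    return aligned.mean(axis=0)

# Demo: the same wavelet in every trace, shifted by a known "moveout"
# and buried in random noise.
rng = np.random.default_rng(1)
n_traces, n_samples = 24, 256
t = np.arange(n_samples)
wavelet = np.exp(-0.5 * ((t - 60) / 4.0) ** 2)   # event at sample 60
shifts = np.arange(n_traces) * 2                  # linear moveout (known)
traces = [np.roll(wavelet, s) + 0.5 * rng.normal(size=n_samples)
          for s in shifts]
stacked = align_and_stack(traces, shifts)

noise_single = np.std(traces[0][150:])   # quiet zone of one trace
noise_stack = np.std(stacked[150:])      # same zone after stacking
```

A Wiener filter would additionally weight each trace by its estimated signal and noise spectra rather than averaging uniformly; the sketch only shows why inter-trace resemblance helps.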


2014 ◽  
Vol 530-531 ◽  
pp. 540-543 ◽  
Author(s):  
Qing Yi Liu

Random noise is a broadband noise present in seismic data detected by optical acceleration sensors; it corrupts the useful signal in the seismic information. Many denoising methods exist; one standard approach is linear Fourier smoothing based on the fast Fourier transform (FFT). This paper introduces a denoising method based on wavelet analysis. Seismic data with noise were denoised with the FFT method and with the wavelet analysis method, respectively. The SNRs of the noisy signal, the FFT-denoised signal, and the wavelet-denoised signal are -8.69, -1.13, and 8.27, respectively. The results show that the wavelet analysis method outperforms the traditional denoising method and improves the resolution of the seismic data.
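A minimal sketch of the FFT side of the comparison — linear Fourier smoothing by zeroing high-frequency bins, with SNR measured in dB (numpy-only; the signal, cutoff bin, and noise level are our illustrative choices, not the paper's data):

```python
import numpy as np

def snr_db(clean, estimate):
    # Signal-to-noise ratio in decibels, relative to the known clean signal.
    return 10 * np.log10(np.sum(clean**2) / np.sum((estimate - clean)**2))

def fft_lowpass(x, keep):
    # "Linear Fourier smoothing": zero every FFT bin at or above `keep`.
    X = np.fft.rfft(x)
    X[keep:] = 0
    return np.fft.irfft(X, n=x.size)

# Demo: two-tone signal in heavy noise, denoised by the FFT method.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2048, endpoint=False)
clean = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
noisy = clean + 0.8 * rng.normal(size=t.size)
filtered = fft_lowpass(noisy, keep=40)

snr_noisy = snr_db(clean, noisy)
snr_filtered = snr_db(clean, filtered)
```

The limitation the paper points to is visible in the sketch's design: a fixed cutoff removes out-of-band noise only, whereas wavelet shrinkage adapts to the local signal content.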


Geophysics ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. V99-V118
Author(s):  
Yi Lin ◽  
Jinhai Zhang

Random noise attenuation plays an important role in seismic data processing. Most traditional methods suppress random noise either in the time-space domain or in a transformed domain, and may have difficulty retaining detailed structures. We have introduced a progressive denoising method to suppress random noise in seismic data. This method estimates the random noise at each sample independently by imposing proper constraints on locally windowed data, first in the time-space domain and then in a transformed domain; the denoised result for the whole data set is gradually improved over many iterations. First, we apply an unnormalized bilateral kernel in the time-space domain to reject large-amplitude signals; then, we apply a range kernel in the frequency-wavenumber domain to reject medium-amplitude signals; finally, we obtain a total estimate of the random noise by repeating these steps approximately 30 times. Numerical examples indicate that the progressive denoising method achieves better results than two typical single-domain methods: the [Formula: see text]-[Formula: see text] deconvolution method and the curvelet-domain thresholding method. As an edge-preserving method, the progressive denoising method can greatly reduce random noise without harming useful signals, especially the high-frequency components that are crucial for high-resolution imaging and interpretation in later stages.
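The role of the range kernel can be illustrated with a 1-D bilateral-filter sketch (numpy-only; this is an ordinary normalized bilateral smoother on a toy step signal, not the authors' unnormalized kernel or their iteration scheme, and all parameters are ours):

```python
import numpy as np

def bilateral_1d(x, half, sigma_s, sigma_r):
    # Bilateral smoothing of a 1-D trace: the spatial kernel weights
    # nearby samples, while the range kernel down-weights samples whose
    # amplitude differs strongly from the center sample, so
    # large-amplitude events are largely rejected from the local
    # average (edge preservation).
    offsets = np.arange(-half, half + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
    out = np.empty_like(x)
    for i in range(x.size):
        idx = np.clip(i + offsets, 0, x.size - 1)
        window = x[idx]
        range_w = np.exp(-(window - x[i])**2 / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Demo: noisy step — noise is smoothed but the edge survives.
rng = np.random.default_rng(5)
n = 1024
clean = np.where(np.arange(n) < n // 2, 0.0, 2.0)
noisy = clean + 0.2 * rng.normal(size=n)
smoothed = bilateral_1d(noisy, half=10, sigma_s=4.0, sigma_r=0.5)

resid_before = np.std(noisy[:400] - clean[:400])
resid_after = np.std(smoothed[:400] - clean[:400])
edge_jump = smoothed[n // 2] - smoothed[n // 2 - 1]
```

The step plays the role of a strong reflection event: the range kernel keeps the two sides of the edge from being averaged together, which is exactly the edge-preserving behavior the abstract highlights.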


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Lei Hao ◽  
Shuai Cao ◽  
Pengfei Zhou ◽  
Lei Chen ◽  
Yi Zhang ◽  
...  

To address the key problem that heavy noise in seismic data can easily induce false anomalies and interpretation errors in seismic exploration, the time-frequency spectrum subtraction (TF-SS) method is adopted in data processing to reduce random noise in seismic data. On this basis, the dominant-frequency information of the seismic data is calculated and used to optimize the filtering coefficients. Given the differing effective-signal durations of seismic data and voice data, the time-frequency spectrum selection method and the filtering coefficients are modified accordingly. In addition, simulation tests conducted at different signal-to-noise ratios demonstrate the effectiveness of TF-SS in removing random noise.
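A minimal sketch of magnitude spectral subtraction, the idea underlying TF-SS (numpy-only; the noise spectrum is estimated from a noise-only lead-in segment, and the frame size, window, and test signal are our illustrative choices, not the paper's modified scheme):

```python
import numpy as np

def spectral_subtract(x, noise_mag, frame=256):
    # Magnitude spectral subtraction with 50%-overlap Hann frames:
    # subtract an average noise magnitude spectrum from each frame's
    # magnitude, floor at zero, and resynthesize with the noisy phase.
    hop = frame // 2
    win = np.hanning(frame)
    out = np.zeros(x.size)
    norm = np.zeros(x.size)
    for start in range(0, x.size - frame + 1, hop):
        seg = x[start:start + frame] * win
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        seg_clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
        out[start:start + frame] += seg_clean * win   # overlap-add
        norm[start:start + frame] += win**2
    return out / np.maximum(norm, 1e-12)

# Demo: a sine that starts after a noise-only lead-in segment.
rng = np.random.default_rng(3)
t = np.arange(8192) / 1000.0
clean = np.sin(2 * np.pi * 30 * t)
clean[:2048] = 0.0                       # noise-only segment at the start
noisy = clean + 0.4 * rng.normal(size=t.size)

# Average noise magnitude spectrum over the noise-only frames.
win = np.hanning(256)
frames = [noisy[s:s + 256] * win for s in range(0, 2048 - 256, 128)]
noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

denoised = spectral_subtract(noisy, noise_mag)
err_before = np.mean((noisy[2048:-256] - clean[2048:-256]) ** 2)
err_after = np.mean((denoised[2048:-256] - clean[2048:-256]) ** 2)
```

The paper's contribution is in how the noise spectrum and filtering coefficients are chosen for seismic (rather than voice) signal durations; the sketch shows only the baseline subtraction step.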


Geophysics ◽  
2014 ◽  
Vol 79 (3) ◽  
pp. V81-V91 ◽  
Author(s):  
Yangkang Chen ◽  
Jitao Ma

Random noise attenuation has always played an important role in seismic data processing. One of the most widely used methods for suppressing random noise is [Formula: see text] predictive filtering. When the subsurface structure becomes complex, this method suffers from higher prediction errors owing to the large number of different dip components that need to be predicted. We develop a novel denoising method termed [Formula: see text] empirical-mode decomposition (EMD) predictive filtering. This new scheme solves the problem that makes [Formula: see text] EMD ineffective with complex seismic data and, by making the prediction more precise, removes the limitation of conventional [Formula: see text] predictive filtering when dealing with multidip seismic profiles. In this method, we first apply EMD to each frequency slice in the [Formula: see text] domain and obtain several intrinsic mode functions (IMFs). An autoregressive model is then applied to the sum of the first few IMFs, which contain the high-dip-angle components, to predict the useful steeper events. Finally, the predicted events are added to the sum of the remaining IMFs. This process improves the prediction precision by using an EMD-based dip filter to reduce the dip components before [Formula: see text] predictive filtering. Synthetic and real data sets demonstrate that the proposed method preserves more useful energy.
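The prediction step can be illustrated for a single frequency slice (a numpy-only toy: the EMD stage is omitted and only the autoregressive prediction across traces is shown; the event geometry, noise level, and AR order are our choices):

```python
import numpy as np

def ar_predict(slice_fx, order):
    # Least-squares autoregressive prediction across traces for one
    # frequency slice: an event with constant moveout makes the slice a
    # complex harmonic, s[k] ~ sum_j a[j] * s[k-1-j], which the AR model
    # predicts well, while random noise is unpredictable and attenuated.
    n = slice_fx.size
    A = np.column_stack([slice_fx[order - 1 - j:n - 1 - j]
                         for j in range(order)])
    coeff, *_ = np.linalg.lstsq(A, slice_fx[order:], rcond=None)
    pred = slice_fx.copy()          # first `order` samples kept as-is
    pred[order:] = A @ coeff
    return pred

# Demo: a single linear event (one sample of moveout per trace) in noise.
rng = np.random.default_rng(4)
n_traces, n_t = 32, 128
data = np.zeros((n_traces, n_t))
for k in range(n_traces):
    data[k, 20 + k] = 5.0           # linear moveout
noisy = data + 0.05 * rng.normal(size=data.shape)

fx = np.fft.rfft(noisy, axis=1)     # to the f-x domain
clean_fx = np.fft.rfft(data, axis=1)
f_bin = 10                          # one frequency slice
pred = ar_predict(fx[:, f_bin], order=3)

err_pred = np.mean(np.abs(pred[3:] - clean_fx[3:, f_bin]) ** 2)
err_raw = np.mean(np.abs(fx[3:, f_bin] - clean_fx[3:, f_bin]) ** 2)
```

The paper's point is that when many dips are present a single AR model must predict all of them at once; the EMD dip filter reduces how many dip components each AR fit has to handle.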


2013 ◽  
Vol 4 (1) ◽  
pp. 18-27
Author(s):  
Ira Melissa ◽  
Raymond S. Oetama

Data mining is the analysis or observation of large data sets with the aim of finding unexpected relationships and of summarizing the data in ways that are easier to understand and more useful to the data owner. Data mining is the core process of Knowledge Discovery in Databases (KDD). Here, data mining methods are used to analyze borrowers' credit payment data. Based on the resulting borrower payment patterns, the credit parameters that are interrelated and most influential on installment payments can be identified. Keywords: data mining, outlier, multicollinearity, ANOVA


Author(s):  
Gary Smith

We live in an incredible period in history. The Computer Revolution may be even more life-changing than the Industrial Revolution. We can do things with computers that could never be done before, and computers can do things for us that could never be done before. But our love of computers should not cloud our thinking about their limitations. We are told that computers are smarter than humans and that data mining can identify previously unknown truths, or make discoveries that will revolutionize our lives. Our lives may well be changed, but not necessarily for the better. Computers are very good at discovering patterns, but are useless in judging whether the unearthed patterns are sensible because computers do not think the way humans think. We fear that super-intelligent machines will decide to protect themselves by enslaving or eliminating humans. But the real danger is not that computers are smarter than us, but that we think computers are smarter than us and, so, trust computers to make important decisions for us. The AI Delusion explains why we should not be intimidated into thinking that computers are infallible, that data-mining is knowledge discovery, and that black boxes should be trusted.
