Edge-preserving polynomial fitting method to suppress random seismic noise

Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. V69-V73 ◽  
Author(s):  
Yan-hong Lu ◽  
Wen-kai Lu

This paper focuses on suppressing random seismic noise while preserving signals and edges. We propose an edge-preserving polynomial fitting (EPPF) method leading to good signal and edge preservation. The EPPF method assumes that a 1D signal can be modeled by a polynomial. A series of shifted windows are used to estimate any sample in a 1D signal. After that, the window with the minimum fitting error is selected and its output is assigned as the final estimate for this sample. For a point in 2D seismic data, several 1D signals are extracted along different directions first and then are processed by the EPPF method. After that, we select the direction with a minimum fitting error and assign its output as the final estimate for this point. Applications with synthetic and real data sets show that the EPPF method suppresses the random seismic noise effectively while preserving the signals and edges. Comparisons of results obtained by the EPPF method, the edge-preserving smoothing (EPS) method, and the polynomial fitting (PF) method show that the EPPF method outperforms EPS and PF methods in these tests.
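The shifted-window selection at the heart of the EPPF method can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name `eppf_1d`, the window length, and the polynomial order are assumptions chosen for clarity.

```python
import numpy as np

def eppf_1d(x, win=11, order=2):
    """Edge-preserving polynomial fitting sketch: for each sample, try every
    shifted window containing it, fit a low-order polynomial, and keep the
    estimate from the window with the minimum fitting error."""
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        best_err = np.inf
        # shifted windows: each places sample i at a different position
        for s in range(win):
            lo, hi = i - s, i - s + win
            if lo < 0 or hi > n:
                continue
            t = np.arange(lo, hi)
            coeff = np.polyfit(t, x[lo:hi], order)
            err = np.sum((np.polyval(coeff, t) - x[lo:hi]) ** 2)
            if err < best_err:
                best_err = err
                out[i] = np.polyval(coeff, i)  # estimate from the best window
    return out
```

The 2D extension described in the abstract would apply this routine to 1D traces extracted along several directions and again keep the direction with minimum fitting error.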

2021 ◽  
Vol 105 ◽  
pp. 90-98
Author(s):  
Xiao Yu Jiang ◽  
Qing Ya Wang ◽  
Mu Qiang Xu ◽  
Jun Hao

An iterative polynomial fitting method is proposed for estimating the baseline of the X-ray fluorescence spectrum signal. The new method generates thresholds automatically by comparing the X-ray fluorescence spectrum signal with the signal calculated from polynomial fitting in the iterative process. Signal peaks are cut out consecutively during the iterations, so that the polynomial fit finally gives a good estimate of the baseline. Simulated data and real data from a soil analysis spectrum are used to demonstrate the feasibility of the proposed method.
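The clip-and-refit idea can be sketched as follows; this is a generic illustration of iterative polynomial baseline estimation, with the function name, polynomial order, and stopping rule chosen as assumptions rather than taken from the paper.

```python
import numpy as np

def iterative_baseline(y, order=4, n_iter=50):
    """Iterative polynomial baseline sketch: fit a polynomial, clip the
    signal to the fit wherever it exceeds it (cutting peaks), and refit
    until the clipped signal stops changing."""
    x = np.arange(len(y), dtype=float)
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeff = np.polyfit(x, work, order)
        base = np.polyval(coeff, x)
        clipped = np.minimum(work, base)  # cut peaks above the current fit
        if np.allclose(clipped, work):    # converged: no peaks left to cut
            break
        work = clipped
    return base
```

Because peaks lie above the baseline, each clipping pass pulls the fit further toward the peak-free background.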


2018 ◽  
Vol 10 (10) ◽  
pp. 1559
Author(s):  
Xin Tian ◽  
Mi Jiang ◽  
Ruya Xiao ◽  
Rakesh Malhotra

The adaptive Goldstein filter driven by InSAR coherence is one of the most famous frequency domain-based filters and has been widely used to improve the quality of InSAR measurement with different noise features. However, the filtering power is biased to varying degrees due to the biased coherence estimator and empirical modelling of the filtering power under a given coherence level. This leads to under- or over-estimation of phase noise over the entire dataset. Here, the authors present a method to correct filtering power on the basis of the second kind statistical coherence estimator. In contrast with regular statistics, the new estimator has smaller bias and variance values, and therefore provides more accurate coherence observations. In addition, a piece-wise function model determined from the Monte Carlo simulation is used to compensate for the nonlinear relationship between the filtering parameter and coherence. This method was tested on both synthetic and real data sets and the results were compared against those derived from other state-of-the-art filters. The better performance of the new filter for edge preservation and residue reduction demonstrates the value of this method.
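For context, the underlying patch operation that the filtering parameter controls is the classic Goldstein-Werner spectral filter; the paper's contribution (the second-kind coherence estimator and the Monte Carlo-derived piecewise model for the parameter) is not reproduced here. The sketch below only shows the standard patch filter, with `alpha` standing in for the coherence-driven filtering power.

```python
import numpy as np

def goldstein_patch(phase_patch, alpha):
    """Classic Goldstein-Werner patch filter sketch: weight the patch
    spectrum by its smoothed magnitude raised to the power alpha.
    alpha = 0 leaves the phase unchanged; larger alpha filters harder."""
    z = np.exp(1j * phase_patch)          # wrap phase onto the unit circle
    Z = np.fft.fft2(z)
    S = np.abs(Z)
    # light 3x3 box smoothing of the magnitude spectrum (wrap-around at edges)
    Ssm = sum(np.roll(np.roll(S, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    Zf = Z * (Ssm / Ssm.max()) ** alpha   # filtering power applied here
    return np.angle(np.fft.ifft2(Zf))
```

In the adaptive variant discussed in the abstract, `alpha` would be set per patch from the bias-corrected coherence estimate.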


Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. V293-V303 ◽  
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

We have developed new algorithms for denoising 2D or 3D poststack seismic-amplitude data that use simple edge-preserving smoothing operators in the frequency-offset domain. The algorithms are aimed to attenuate random and coherent noise, to enhance the signal energy and lateral continuity, and to preserve structural discontinuities such as faults. The methods consist of fitting the frequency slices of the data in the spatial dimension by means of low-order polynomials. We use an overlapping window operator to select the fitting parameters for each point of the slice from the neighborhood with minimum fitting error to provide edge preservation. Various synthetic examples and a field data set are used to demonstrate the strengths and limitations of the algorithms. The denoised outputs indicate enhanced edge preservation of seismic features, which reflects clearer details of semblance attributes.
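The core frequency-offset fitting step can be sketched in NumPy. This illustration fits each frequency slice globally along the spatial axis; the published method additionally uses overlapping windows with minimum-error selection for edge preservation, which is omitted here, and the function name and polynomial order are assumptions.

```python
import numpy as np

def fx_polyfit_denoise(data, order=3):
    """f-x polynomial-fitting sketch: transform each trace to the frequency
    domain, fit every frequency slice along the spatial axis with a
    low-order polynomial (real and imaginary parts separately), and
    transform back to the time domain."""
    nt, nx = data.shape             # time samples x traces
    X = np.fft.rfft(data, axis=0)   # frequency slices run along axis 1
    xs = np.arange(nx)
    for k in range(X.shape[0]):
        cr = np.polyfit(xs, X[k].real, order)
        ci = np.polyfit(xs, X[k].imag, order)
        X[k] = np.polyval(cr, xs) + 1j * np.polyval(ci, xs)
    return np.fft.irfft(X, n=nt, axis=0)
```

Laterally smooth events are polynomial-like in each frequency slice and survive the fit, while spatially incoherent noise is rejected.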


2020 ◽  
Vol 224 (3) ◽  
pp. 1705-1723
Author(s):  
A Lois ◽  
F Kopsaftopoulos ◽  
D Giannopoulos ◽  
K Polychronopoulou ◽  
N Martakis

SUMMARY In this paper, we propose a two-step procedure for the automated detection of micro-earthquakes, using single-station, three-component passive seismic data. The first step consists of the computation of an appropriate characteristic function, along with an energy-based thresholding scheme, in order to attain an initial discrimination of the seismic noise from the 'useful' information. The three-component data matrix is factorized via the singular value decomposition by means of a properly selected moving window, and for each step of the windowing procedure a diagonal matrix containing the estimated singular values is formed. The ${L_2}$-norm of the singular values resulting from this windowing process defines the time series that serves as a characteristic function. The extraction of the seismic signals from the initial record is achieved by following a histogram-based thresholding scheme: the histogram of the characteristic function, which constitutes its empirical probability density function, is estimated, and the optimum threshold is chosen as the one corresponding to the bin that separates the histogram into two areas delineating the background noise and the outliers. Since detection algorithms often suffer from false alarms, which increase in extremely noisy environments, as a second stage we propose a new 'decision-making' scheme to be applied to the extracted intervals, for the purpose of decreasing the probability of false alarms. In this context, we propose a methodology based on comparing autoregressive models estimated on isolated seismic noise with those estimated on the detections resulting from the first stage. The performance and efficiency of the proposed technique are supported by its application to a series of experiments based on both synthetic and real data sets.
In particular, we investigate the effectiveness of the characteristic function and the thresholding scheme by subjecting them to noise-robustness tests using synthetic seismic noise with different statistical characteristics and at noise levels varying from 5 down to −5 dB. Results are compared with those obtained by applying a three-component version of the well-known STA/LTA algorithm to the same data set. Moreover, the potential of the proposed technique to distinguish seismic noise from useful information through the proposed decision-making scheme is evaluated by its application to real data sets acquired by three-component short-period recorders installed to monitor microseismic activity in areas characterized by different noise attributes.
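The characteristic function described above can be sketched directly: since the L2-norm of a matrix's singular values equals its Frobenius norm, the detection series reduces to a windowed energy measure computed via the SVD. The function name and window length below are illustrative assumptions.

```python
import numpy as np

def svd_characteristic_function(data3c, win=50):
    """Characteristic-function sketch: slide a window over the
    three-component record (n x 3), take the SVD of each (win x 3)
    segment, and use the L2-norm of the singular values as the
    detection time series."""
    n = data3c.shape[0]
    cf = np.zeros(n - win + 1)
    for i in range(len(cf)):
        s = np.linalg.svd(data3c[i:i + win], compute_uv=False)
        cf[i] = np.linalg.norm(s)  # equals the Frobenius norm of the window
    return cf
```

Thresholding this series (e.g. from its histogram, as in the paper) would then yield the candidate event intervals passed to the second, autoregressive decision-making stage.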


Author(s):  
S. Vitale ◽  
G. Ferraioli ◽  
V. Pascazio

Abstract. SAR despeckling is a key tool for Earth observation. Interpretation of SAR images is impaired by speckle, a multiplicative noise related to the interference of backscattering from the illuminated scene towards the sensor. Reducing this noise is a crucial task for understanding the scene. Building on the results of our previous solution, KL-DNN, in this work we define a new cost function for training a convolutional neural network for despeckling. The aim is to control edge preservation and to better filter man-made structures and urban areas, which are very challenging for KL-DNN. The results show a clear improvement in non-homogeneous areas while keeping the good results in homogeneous ones. Results on both simulated and real data are shown in the paper.


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
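For reference, the standard Yeo–Johnson transform that the paper builds on is shown below; the paper's contribution is a robust estimator of the parameter λ (and a modified transform), which this sketch does not implement.

```python
import numpy as np

def yeo_johnson(x, lam):
    """Standard Yeo-Johnson transform: a power transform defined for all
    real x, reducing to log1p(x) at lam = 0 (x >= 0) and -log1p(-x) at
    lam = 2 (x < 0)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    if abs(lam) > 1e-12:
        out[pos] = ((x[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(x[pos])
    if abs(lam - 2.0) > 1e-12:
        out[~pos] = -(((-x[~pos] + 1.0) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
    else:
        out[~pos] = -np.log1p(-x[~pos])
    return out
```

At λ = 1 the transform is the identity, which makes the sensitivity of a maximum likelihood estimate of λ to outliers easy to probe numerically.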


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial is introduced, which generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic property of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
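The classical binomial thinning operator that the extended operator generalizes can be sketched as follows; the two-parameter extended-binomial operator itself is not reproduced here, and the function names and starting rule are illustrative assumptions.

```python
import numpy as np

def binomial_thinning(x, alpha, rng):
    """Binomial thinning alpha∘x: each of the x counts survives
    independently with probability alpha."""
    return rng.binomial(x, alpha)

def simulate_inar1(n, alpha, lam, rng):
    """INAR(1) sketch with binomial thinning and Poisson(lam) innovations:
    X_t = alpha ∘ X_{t-1} + e_t, with stationary mean lam / (1 - alpha)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))  # start near the stationary mean
    for t in range(1, n):
        x[t] = binomial_thinning(x[t - 1], alpha, rng) + rng.poisson(lam)
    return x
```

Replacing `binomial_thinning` with a two-parameter operator is what, per the abstract, lets the model match over- or underdispersed counts that binomial thinning cannot.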


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 187
Author(s):  
Marcelo A. Soto ◽  
Alin Jderu ◽  
Dorel Dorobantu ◽  
Marius Enachescu ◽  
Dominik Ziegler

A high-order polynomial fitting method is proposed to accelerate the computation of double-Gaussian fitting in the retrieval of the Brillouin frequency shifts (BFS) in optical fibers showing two local Brillouin peaks. The method is experimentally validated in a distributed Brillouin sensor under different signal-to-noise ratios and realistic spectral scenarios. Results verify that a sixth-order polynomial fitting can provide a reliable initial estimation of the dual local BFS values, which can subsequently be used as initial parameters of a nonlinear double-Gaussian fitting. The method demonstrates a 4.9-fold reduction in the number of iterations required by double-Gaussian fitting and a 3.4-fold improvement in processing time.
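The initialization step can be sketched in NumPy: fit a sixth-order polynomial to the measured spectrum and take its two strongest local maxima as starting frequencies for the nonlinear double-Gaussian fit. The function name and grid density are assumptions, not the sensor's actual processing chain.

```python
import numpy as np

def initial_bfs_estimates(freq, gain, order=6):
    """Sketch: fit a sixth-order polynomial to the gain spectrum and
    return the frequencies of its two largest local maxima, sorted."""
    c = np.polyfit(freq, gain, order)
    f = np.linspace(freq.min(), freq.max(), 2000)  # dense evaluation grid
    p = np.polyval(c, f)
    # indices of local maxima of the fitted polynomial
    idx = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
    idx = idx[np.argsort(p[idx])][::-1][:2]        # two strongest peaks
    return np.sort(f[idx])
```

Handing these two frequencies to a nonlinear least-squares double-Gaussian fit as starting values is what shrinks the iteration count in the paper's experiments.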


Econometrics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Šárka Hudecová ◽  
Marie Hušková ◽  
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one being entirely nonparametric and the second one being semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics, both under the null hypotheses and under alternatives, is derived and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications on real data sets and discussion.
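The building blocks of such a statistic can be sketched in a simplified univariate form: a nonparametric probability generating function (PGF) estimate and an L2-type distance to a model PGF on a grid over [0, 1]. The paper's statistics are bivariate and use the semiparametric null estimate; the names and quadrature below are illustrative assumptions.

```python
import numpy as np

def empirical_pgf(x, u):
    """Nonparametric PGF estimate g_hat(u) = (1/n) * sum_i u ** X_i,
    evaluated at each point of the grid u."""
    return np.mean(np.power.outer(np.asarray(u, float), x), axis=1)

def l2_pgf_distance(x, pgf_model, grid=None):
    """L2-type distance between the empirical PGF and a model PGF,
    approximated by a Riemann sum over a grid on [0, 1]."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    diff = empirical_pgf(x, grid) - pgf_model(grid)
    return np.sum(diff ** 2) * (grid[1] - grid[0])
```

A parametric bootstrap, as in the article, would recompute this distance on samples drawn under the fitted null model to calibrate the test.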

