A fractal‐based algorithm for detecting first arrivals on seismic traces

Geophysics ◽  
1996 ◽  
Vol 61 (4) ◽  
pp. 1095-1102 ◽  
Author(s):  
Fabio Boschetti ◽  
Mike D. Dentith ◽  
Ron D. List

A new algorithm is proposed for the automatic picking of seismic first arrivals that detects the presence of a signal by analyzing the variation in fractal dimension along the trace. The “divider method” is found to be the most suitable method for calculating the fractal dimension. A change in dimension is found to occur close to the transition from noise to signal plus noise, that is, at the first arrival. The nature of this change varies from trace to trace, but a detectable change always occurs. The algorithm has been tested on real data sets with varying signal-to-noise (S/N) ratios, and the results compared to those obtained using previously published algorithms. With appropriate tuning of its parameters, the fractal-based algorithm proved more accurate than all the other algorithms, especially in the presence of significant noise, tolerating noise up to 80% of the average signal amplitude. However, the fractal-based algorithm is considerably slower than the other methods and hence is intended for use only on data sets with low S/N ratios.
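
As a rough illustration of the approach, the sketch below (not the authors' implementation; the window and ruler sizes are illustrative, and unit sample spacing is assumed) estimates the divider-method dimension in a moving window and picks the sample where it changes most abruptly:

```python
import numpy as np

def divider_dimension(segment, rulers=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a 1-D curve with the divider method.

    The curve length L(r) measured with divider size r scales as
    L(r) ~ r**(1 - D), so D = 1 - slope of log L(r) versus log r.
    """
    lengths = []
    for r in rulers:
        pts = segment[::r]
        # Chord lengths between successive divider points (unit sample spacing).
        lengths.append(np.sqrt(r**2 + np.diff(pts)**2).sum())
    slope, _ = np.polyfit(np.log(rulers), np.log(lengths), 1)
    return 1.0 - slope

def pick_first_arrival(trace, window=64):
    """Return the sample index where the windowed fractal dimension changes
    most abruptly -- a proxy for the noise-to-signal transition."""
    dims = np.array([divider_dimension(trace[i:i + window])
                     for i in range(len(trace) - window)])
    # The largest jump between consecutive windows marks the candidate onset.
    return int(np.argmax(np.abs(np.diff(dims)))) + window // 2
```

The windowed loop also shows why the method is slow relative to simpler attributes: a dimension estimate (several ruler passes plus a log-log fit) is recomputed at every sample.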

Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1850
Author(s):  
Rashad A. R. Bantan ◽  
Farrukh Jamal ◽  
Christophe Chesneau ◽  
Mohammed Elgarhy

Unit distributions are commonly used in probability and statistics to describe quantities with values between 0 and 1, such as proportions, probabilities, and percentages. Some unit distributions are defined in a natural analytical manner, while others are derived by transforming an existing distribution defined on a larger domain. In this article, we introduce the unit gamma/Gompertz distribution, founded on the inverse-exponential scheme and the gamma/Gompertz distribution. The gamma/Gompertz distribution is known to be a very flexible three-parameter lifetime distribution, and we aim to transpose this flexibility to the unit interval. First, we verify this with the analytical behavior of the primary functions. It is shown that the probability density function can be increasing, decreasing, “increasing-decreasing” or “decreasing-increasing”, with flexible asymmetric properties. On the other hand, the hazard rate function has monotonically increasing, decreasing, or constant shapes. We complete the theoretical part with some propositions on stochastic ordering, moments, quantiles, and the reliability coefficient. Practically, the maximum likelihood method is used to estimate the model parameters from unit data, and we present simulation results to evaluate this method. Two applications using real data sets, one on trade shares and the other on flood levels, demonstrate the importance of the new model when compared to other unit models.
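
A minimal sketch of the construction, assuming the common gamma/Gompertz parameterization F(x) = 1 - β^s / (β - 1 + e^{bx})^s: draws are obtained by inverting this CDF, and the inverse-exponential scheme Y = exp(-X) then maps them onto (0, 1):

```python
import numpy as np

def rvs_gamma_gompertz(b, s, beta, size, rng=None):
    """Draw from the gamma/Gompertz distribution by inverting its CDF
    F(x) = 1 - beta**s / (beta - 1 + exp(b*x))**s (assumed parameterization)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    # Solving F(x) = u for x gives the inverse CDF below.
    return np.log(beta * (1.0 - u) ** (-1.0 / s) - beta + 1.0) / b

def rvs_unit_gamma_gompertz(b, s, beta, size, rng=None):
    """Inverse-exponential scheme: Y = exp(-X) maps X > 0 onto (0, 1)."""
    return np.exp(-rvs_gamma_gompertz(b, s, beta, size, rng))
```

Because Y = exp(-X) is monotone, quantiles of the unit distribution follow directly from those of the parent, which is what carries the parent's flexibility onto the unit interval.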


2014 ◽  
Vol 39 (2) ◽  
pp. 107-127 ◽  
Author(s):  
Artur Matyja ◽  
Krzysztof Siminski

Missing values are not uncommon in real data sets. The algorithms and methods used for analyzing complete data sets cannot always be applied to data with missing values. To use the existing methods for complete data, data sets with missing values must first be preprocessed. The alternative is to create new algorithms dedicated to data sets with missing values. The objective of our research is to compare preprocessing techniques with such specialised algorithms and to find the most advantageous usage of each.
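
The two routes can be illustrated with standard techniques: mean imputation as a preprocessing step versus a partial-distance computation that works directly on incomplete vectors. These are generic examples, not necessarily the specific methods evaluated in the paper:

```python
import numpy as np

def mean_impute(X):
    """Preprocessing route: replace NaNs with per-feature means so that
    any complete-data algorithm can be applied afterwards."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def partial_distance(a, b):
    """Specialised route: Euclidean distance over the features observed in
    both vectors, rescaled to the full dimensionality."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if not mask.any():
        return np.inf  # no overlapping observed features
    d2 = np.sum((a[mask] - b[mask]) ** 2)
    return np.sqrt(d2 * a.size / mask.sum())
```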


Author(s):  
Vo Thi Ngoc Chau ◽  
Nguyen Hua Phung

Educational data clustering on student data collected within a study program can find groups of students who share similar characteristics in their behavior and study performance. For some programs, it is not trivial to prepare enough data for the clustering task. Data shortage may then reduce the effectiveness of the clustering process, so that the true clusters cannot be discovered appropriately. On the other hand, other programs have been well examined, with much larger data sets available for the task. It is therefore natural to ask whether we can exploit the larger data sets from other source programs to enhance educational data clustering on the smaller data sets of the target program. Using transfer learning techniques, we define a transfer-learning-based clustering method built on the kernel k-means and spectral feature alignment algorithms as a solution to the educational data clustering task in this context. Moreover, our method is optimized within a weighted feature space so that the contribution of the larger source data sets to the clustering process can be determined automatically. This ability is the novelty of our proposed transfer-learning-based clustering solution compared to those in existing works. Experimental results on several real data sets show that our method consistently outperforms methods based on various other approaches, under both external and internal validation.
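
A toy sketch of the clustering core, assuming an RBF kernel and user-supplied feature weights; the paper's spectral feature alignment step, which constructs the weighted space across source and target data, is omitted:

```python
import numpy as np

def kernel_kmeans(X, k, w=None, gamma=1.0, n_iter=50, rng=None):
    """Toy kernel k-means on a feature-weighted RBF kernel.

    w weights each feature; in the paper's spirit, such weights could
    up- or down-weight features aligned across source and target data.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.ones(d) if w is None else np.asarray(w)
    Xw = X * np.sqrt(w)                        # feature weighting
    sq = ((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                    # RBF kernel matrix
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            m = labels == c
            if not m.any():                    # re-seed an empty cluster
                m[rng.integers(n)] = True
            # Squared kernel distance to the (implicit) cluster centroid.
            dist[:, c] = (np.diag(K) - 2 * K[:, m].mean(1)
                          + K[np.ix_(m, m)].mean())
        new = dist.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return labels
```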


Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. V79-V89 ◽  
Author(s):  
Wail A. Mousa ◽  
Abdullatif A. Al-Shuhail ◽  
Ayman Al-Lehyani

We introduce a new method for first-arrival picking based on digital color-image segmentation of energy ratios of refracted seismic data. The method uses a new color-image segmentation scheme based on projection onto convex sets (POCS). The POCS scheme requires a reference color for the first break and a single iteration to segment the first-break amplitudes from other arrivals. We tested the segmentation method on synthetic seismic data sets with various amounts of additive Gaussian noise. The proposed method gives performance similar to a modified version of Coppens’ method for traces with high signal-to-noise ratios and medium-to-large offsets. Finally, we applied our method, alongside the modified first-arrival picking method based on Coppens’ method, to four real data sets, comparing both against first breaks that were picked manually and then interpolated. Using an assessment error window of 20 ms with respect to the interpolated manual picks, we find that our method performs comparably to Coppens’ method, depending on how difficult the first arrivals are to pick. Therefore, we believe that our proposed method is a good new addition to the existing methods of first-arrival picking.
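
POCS itself is a generic scheme: starting from any point, alternately projecting onto each convex set converges to a point in their intersection. A minimal sketch with two simple sets, a ball and a halfspace (the paper's sets instead encode reference-color constraints on the energy-ratio image):

```python
import numpy as np

def project_ball(x, center, radius):
    """Projection onto the closed ball ||x - center|| <= radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    """Projection onto the halfspace a.x <= b."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def pocs(x0, projections, n_iter=100, tol=1e-8):
    """Alternate projections; converges to a point in the intersection of
    the convex sets (when the intersection is non-empty)."""
    x = x0.astype(float)
    for _ in range(n_iter):
        x_prev = x
        for P in projections:
            x = P(x)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Example: intersect a radius-2 ball at the origin with the halfspace x1 <= 1.
x = pocs(np.array([5.0, 5.0]),
         [lambda v: project_ball(v, np.zeros(2), 2.0),
          lambda v: project_halfspace(v, np.array([1.0, 0.0]), 1.0)])
```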


2015 ◽  
Vol 24 (03) ◽  
pp. 1550003 ◽  
Author(s):  
Armin Daneshpazhouh ◽  
Ashkan Sami

The task of semi-supervised outlier detection is to find the instances that deviate from the rest of the data, using some labeled examples. This issue is especially important in applications such as fraud detection and intrusion detection. Most existing techniques are unsupervised. On the other hand, semi-supervised approaches use both negative and positive instances to detect outliers. However, in many real-world applications, very few positive labeled examples are available. This paper proposes an approach to address this problem. The proposed method works as follows. First, some reliable negative instances are extracted by a kNN-based algorithm. Afterwards, fuzzy clustering using both negative and positive examples is applied to detect outliers. Experimental results on real data sets demonstrate that the proposed approach outperforms previous state-of-the-art unsupervised methods in detecting outliers.
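
A simplified sketch of the two stages, with hypothetical parameter choices (the reliable-negative fraction and the fuzzifier m). The membership formula is the two-cluster special case of fuzzy c-means rather than the paper's full clustering step:

```python
import numpy as np

def reliable_negatives(X, pos_idx, frac=0.2):
    """Stage 1 (kNN-style): take the points farthest from the few labeled
    outliers as reliable inliers (negatives)."""
    D = np.linalg.norm(X[:, None, :] - X[pos_idx][None, :, :], axis=-1)
    d_min = D.min(axis=1)            # distance to the nearest labeled outlier
    d_min[pos_idx] = -np.inf         # never select the positives themselves
    n_neg = int(frac * len(X))
    return np.argsort(d_min)[-n_neg:]

def outlier_membership(X, pos_idx, neg_idx, m=2.0):
    """Stage 2 (fuzzy-clustering flavor): soft membership of each point in
    the 'outlier' cluster via the two-cluster fuzzy c-means formula."""
    c_pos, c_neg = X[pos_idx].mean(0), X[neg_idx].mean(0)
    d_pos = np.linalg.norm(X - c_pos, axis=1) + 1e-12
    d_neg = np.linalg.norm(X - c_neg, axis=1) + 1e-12
    return 1.0 / (1.0 + (d_pos / d_neg) ** (2 / (m - 1)))
```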


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V67-V76 ◽  
Author(s):  
Juan I. Sabbione ◽  
Danilo Velis

We have developed three methods for the automatic picking of first breaks that can be used for marine, dynamite, or vibroseis shot records: a modified Coppens’s method, an entropy-based method, and a variogram fractal-dimension method. The techniques are based on the fact that the transition between noise and noise plus signal can be automatically identified by detecting rapid changes in a certain attribute (energy ratio, entropy, or fractal dimension), which we calculate within moving windows along the seismic trace. The application of appropriate edge-preserving smoothing operators to enhance these transitions allowed us to develop an automated strategy that can be used to easily signal the precise location of the first-arrival onset. Furthermore, we propose a mispick-correcting technique to exploit the benefits of the data present in the entire shot record, which allows us to adjust the trace-by-trace picks and to discard picks associated with bad or dead traces. As a result, the consistency of the first-break picks is significantly improved. The methods are robust under noisy conditions, computationally efficient, and easy to apply. Results using dynamite and vibroseis field data show that accurate and consistent picks can be obtained in an automated manner even under the presence of correlated noise, bad traces, pulse changes, and indistinct first breaks.
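
As an illustration of the windowed-attribute idea, the sketch below computes a Coppens-style energy ratio and picks the steepest rise; the edge-preserving smoothing and cross-trace mispick correction described above are omitted:

```python
import numpy as np

def energy_ratio(trace, window=50, eps=1e-10):
    """Coppens-style attribute: energy in a trailing window divided by the
    cumulative energy from the start of the trace (stabilized by eps)."""
    e = trace.astype(float) ** 2
    cum = np.cumsum(e)
    win = cum - np.concatenate([np.zeros(window), cum[:-window]])
    return win / (cum + eps)

def pick_by_attribute(trace, window=50):
    """Pick the first break at the steepest rise of the attribute, i.e.,
    the rapid change marking the noise to noise-plus-signal transition."""
    attr = energy_ratio(trace, window)
    return int(np.argmax(np.diff(attr)))
```

The entropy and variogram fractal-dimension attributes would slot into the same picking loop; only the moving-window statistic changes.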


Author(s):  
Mohamed Ibrahim Mohamed

In this work, we introduce a new extension of the Fréchet distribution. A comprehensive set of its mathematical and statistical properties is derived. The parameters are estimated by several different estimation methods, whose performance is studied through Monte Carlo simulations. The potential of the proposed model is analyzed using two data sets. The weighted least squares method is best for modelling the breaking stress data and the least squares method is best for modelling the strengths data, although all other methods also performed well for both data sets. Moreover, the new model gives the best fits among all other fitted extensions of the Fréchet model to these data, so it can be chosen as the best model for modelling the breaking stress and strengths data.
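
Since the abstract does not specify the new extension, the sketch below illustrates the (weighted) least squares route on the baseline two-parameter Fréchet CDF; the extension's CDF would simply replace frechet_cdf:

```python
import numpy as np
from scipy.optimize import minimize

def frechet_cdf(x, alpha, sigma):
    """Baseline two-parameter Frechet CDF: F(x) = exp(-(x/sigma)**(-alpha))."""
    return np.exp(-(x / sigma) ** (-alpha))

def ls_estimate(data, weighted=False):
    """(Weighted) least squares: fit the CDF to the plotting positions
    i/(n+1) at the order statistics."""
    x = np.sort(np.asarray(data, float))
    n = len(x)
    i = np.arange(1, n + 1)
    p = i / (n + 1.0)
    # Standard WLS weights: inverse variance of the i-th order statistic.
    w = ((n + 1) ** 2 * (n + 2) / (i * (n - i + 1))) if weighted else 1.0
    def loss(theta):
        alpha, sigma = theta
        if alpha <= 0 or sigma <= 0:
            return np.inf
        return np.sum(w * (frechet_cdf(x, alpha, sigma) - p) ** 2)
    return minimize(loss, x0=[1.0, np.median(x)], method="Nelder-Mead").x
```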


2018 ◽  
Vol 16 (9) ◽  
pp. 655-668
Author(s):  
Sirinapa ARYUYUEN ◽  
Winai BODHISUWAN

A new truncated distribution, called the truncated power Lomax (TPL) distribution, is proposed. It is a truncated version of the power Lomax distribution. The TPL distribution has increasing and decreasing hazard function shapes. Some statistical properties, such as the moments and the survival, hazard, and quantile functions, are discussed. Maximum likelihood estimation (MLE) is used to estimate the unknown parameters of the TPL distribution. Moreover, the distribution is fitted to real data sets to illustrate its usefulness. In these example applications, the TPL distribution provides a consistently better fit than the other distributions considered, namely the power Lomax and Lomax distributions.
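
A minimal sketch of the MLE, assuming one common power Lomax parameterization, F(x) = 1 - λ^α (λ + x^β)^{-α}, and an upper truncation bound tau (the paper's truncation range may differ):

```python
import numpy as np
from scipy.optimize import minimize

def plomax_pdf(x, a, b, lam):
    """Power Lomax density under the assumed parameterization."""
    return a * b * lam**a * x**(b - 1) * (lam + x**b) ** (-(a + 1))

def plomax_cdf(x, a, b, lam):
    return 1.0 - lam**a * (lam + x**b) ** (-a)

def tpl_negloglik(theta, x, tau):
    """Negative log-likelihood of the power Lomax truncated to (0, tau):
    f_T(x) = f(x) / F(tau)."""
    a, b, lam = theta
    if min(a, b, lam) <= 0:
        return np.inf
    return -(np.sum(np.log(plomax_pdf(x, a, b, lam)))
             - len(x) * np.log(plomax_cdf(tau, a, b, lam)))

def fit_tpl(x, tau=1.0):
    res = minimize(tpl_negloglik, x0=[1.0, 1.0, 1.0],
                   args=(np.asarray(x, float), tau), method="Nelder-Mead")
    return res.x
```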


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
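
A sketch of the robustness idea (not the authors' estimator): implement the Yeo–Johnson transformation and choose its parameter λ by how normal the central, trimmed part of the transformed data looks, so that outliers cannot dominate the fit:

```python
import numpy as np
from scipy import stats

def yeo_johnson(x, lam):
    """Yeo-Johnson transformation (piecewise in the sign of x)."""
    x = np.asarray(x, float)
    y = np.empty_like(x)
    pos, neg = x >= 0, x < 0
    if abs(lam) > 1e-12:
        y[pos] = ((x[pos] + 1) ** lam - 1) / lam
    else:
        y[pos] = np.log1p(x[pos])
    if abs(lam - 2) > 1e-12:
        y[neg] = -(((1 - x[neg]) ** (2 - lam) - 1) / (2 - lam))
    else:
        y[neg] = -np.log1p(-x[neg])
    return y

def robust_lambda(x, grid=np.linspace(-2, 4, 121), trim=0.1):
    """Pick lambda by the normality of the central part only: correlation of
    trimmed order statistics with normal quantiles (a simple robust proxy)."""
    x = np.sort(np.asarray(x, float))
    lo, hi = int(trim * len(x)), int((1 - trim) * len(x))
    q = stats.norm.ppf((np.arange(lo, hi) + 0.5) / len(x))
    best, best_r = grid[0], -np.inf
    for lam in grid:
        y = yeo_johnson(x, lam)[lo:hi]   # transform is monotone, order kept
        r = np.corrcoef(y, q)[0, 1]
        if r > best_r:
            best, best_r = lam, r
    return best
```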


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator, named extended binomial thinning, is introduced; it generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic properties of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
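
For orientation, a sketch of the classical special case: an INAR(1) model with standard binomial thinning and its first-step CLS estimates. The extended binomial operator adds a second parameter, whose second-step estimation is omitted here:

```python
import numpy as np

def thin(alpha, x, rng):
    """Binomial thinning: alpha o x = sum of x Bernoulli(alpha) variables."""
    return rng.binomial(x, alpha)

def simulate_inar1(alpha, lam, n, rng=None):
    """INAR(1) with Poisson(lam) innovations: X_t = alpha o X_{t-1} + eps_t."""
    rng = np.random.default_rng(rng)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))   # start near the stationary mean
    for t in range(1, n):
        x[t] = thin(alpha, x[t - 1], rng) + rng.poisson(lam)
    return x

def cls_estimate(x):
    """First CLS step: since E[X_t | X_{t-1}] = alpha*X_{t-1} + lam,
    regress X_t on X_{t-1} by ordinary least squares."""
    xt, xp = x[1:].astype(float), x[:-1].astype(float)
    xp_c = xp - xp.mean()
    alpha_hat = (xp_c * (xt - xt.mean())).sum() / (xp_c ** 2).sum()
    lam_hat = xt.mean() - alpha_hat * xp.mean()
    return alpha_hat, lam_hat
```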

