Monofractal or multifractal: a case study of spatial distribution of mining-induced seismic activity

1994 ◽  
Vol 1 (2/3) ◽  
pp. 182-190 ◽  
Author(s):  
M. Eneva

Abstract. The use of finite data sets and study volumes of limited size may result in significant spurious effects when estimating the scaling properties of various physical processes. These effects are examined with an example featuring the spatial distribution of induced seismic activity in Creighton Mine (northern Ontario, Canada). The events studied in the present work occurred during a three-month period, March-May 1992, within a volume of approximate size 400 × 400 × 180 m³. Two sets of microearthquake locations are studied: Data Set 1 (14,338 events) and Data Set 2 (1654 events). Data Set 1 includes the more accurately located events and amounts to about 30 per cent of all recorded data. Data Set 2 represents the portion of the first data set formed by the most accurately located and strongest microearthquakes. The spatial distribution of events in the two data sets is examined for scaling behaviour using the method of generalized correlation integrals featuring various moments q. From these, generalized correlation dimensions are estimated using the slope method. Similar estimates are made for randomly generated point sets using the same numbers of events and the same study volumes as for the real data. Uniform and monofractal random distributions are used for these simulations. In addition, samples from the real data are randomly extracted and their dimension spectra are examined as well. The spectra for the uniform and monofractal random generations show spurious multifractality due only to the use of finite numbers of data points and the limited size of the study volume. Comparing these with the spectra of dimensions for Data Set 1 and Data Set 2 allows us to estimate the bias likely to be present in the estimates for the real data. The strong multifractality suggested by the spectrum for Data Set 2 appears to be largely spurious; the spatial distribution, while different from uniform, could originate from a monofractal process. The spatial distribution of microearthquakes in Data Set 1 is either monofractal as well, or only weakly multifractal. In all similar studies, comparisons of results from real data and simulated point sets may help distinguish between genuine and artificial multifractality, without necessarily resorting to large numbers of data points.
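For readers less familiar with the slope method, the sketch below shows how generalized correlation integrals C_q(r) can be computed for a 3-D point set and the dimensions D_q estimated from the slope of log C_q(r) versus log r. It is a minimal illustration, not the authors' code: the radii, moments q, fitting range and box size are illustrative choices.

```python
# Minimal sketch of generalized correlation dimensions D_q via the slope method.
# Radii, moments and point counts are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def generalized_correlation_dimensions(points, radii, qs):
    """Estimate D_q by fitting log C_q(r) versus log r over the given radii."""
    n = len(points)
    dist = squareform(pdist(points))          # pairwise distances
    np.fill_diagonal(dist, np.inf)            # exclude self-pairs
    dims = {}
    for q in qs:
        log_cq = []
        for r in radii:
            p_i = (dist < r).sum(axis=1) / (n - 1)   # fraction of neighbours within r
            nz = p_i[p_i > 0]
            if q == 1:                               # information dimension (limit q -> 1)
                cq = np.exp(np.mean(np.log(nz)))
            elif q > 1:
                cq = np.mean(p_i ** (q - 1)) ** (1.0 / (q - 1))
            else:                                    # q < 1: drop empty neighbourhoods
                cq = np.mean(nz ** (q - 1)) ** (1.0 / (q - 1))
            log_cq.append(np.log(cq))
        slope, _ = np.polyfit(np.log(radii), log_cq, 1)   # slope method
        dims[q] = slope
    return dims

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Uniform random points in a 400 x 400 x 180 m box, mimicking the study volume.
    pts = rng.uniform([0, 0, 0], [400, 400, 180], size=(1654, 3))
    radii = np.logspace(np.log10(10), np.log10(100), 12)
    print(generalized_correlation_dimensions(pts, radii, qs=[2, 3, 5]))
```

Running this on uniform random points of the same size as the real catalogue is exactly the kind of control the abstract describes: any spread of the estimated D_q across q is then a measure of the spurious multifractality introduced by finite sample size and volume.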

2020 ◽  
Vol 501 (1) ◽  
pp. 994-1001
Author(s):  
Suman Sarkar ◽  
Biswajit Pandey ◽  
Snehasish Bhattacharjee

ABSTRACT We use an information-theoretic framework to analyse data from the Galaxy Zoo 2 project and study whether there are any statistically significant correlations between the presence of bars in spiral galaxies and their environment. We measure the mutual information between the barredness of galaxies and their environments in a volume limited sample (Mr ≤ −21) and compare it with the same quantity in data sets where (i) the bar/unbar classifications are randomized and (ii) the spatial distribution of galaxies is shuffled on different length scales. We assess the statistical significance of the differences in the mutual information using a t-test and find that neither randomization of the morphological classifications nor shuffling of the spatial distribution alters the mutual information in a statistically significant way. The non-zero mutual information between barredness and environment arises from the finite and discrete nature of the data set and can be entirely explained by mock Poisson distributions. We also separately compare the cumulative distribution functions of the barred and unbarred galaxies as a function of their local density. Using a Kolmogorov–Smirnov test, we find that the null hypothesis cannot be rejected even at the 75 per cent confidence level. Our analysis indicates that environments do not play a significant role in the formation of a bar, which is largely determined by the internal processes of the host galaxy.
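The randomization test at the heart of this analysis can be sketched as follows. This is a minimal illustration with mock labels and densities, a simple quartile binning and a one-sample t-test; it is not the authors' pipeline, and the binning and test details are assumptions.

```python
# Sketch: mutual information between a binary bar/unbar label and binned local density,
# compared against values obtained when the labels are randomly shuffled (mock data).
import numpy as np
from scipy import stats

def mutual_information(x_bins, y_bins):
    """Mutual information (in nats) of two discrete label arrays."""
    joint = np.histogram2d(x_bins, y_bins,
                           bins=[np.unique(x_bins).size, np.unique(y_bins).size])[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(42)
n = 5000
barred = rng.integers(0, 2, n)                      # 0 = unbarred, 1 = barred (mock labels)
log_density = rng.normal(size=n)                    # mock local-density estimates
env_bins = np.digitize(log_density, np.quantile(log_density, [0.25, 0.5, 0.75]))

mi_obs = mutual_information(barred, env_bins)
mi_shuffled = [mutual_information(rng.permutation(barred), env_bins) for _ in range(100)]

# One-sample t-test: is the observed MI significantly different from the shuffled ones?
t_stat, p_val = stats.ttest_1samp(mi_shuffled, mi_obs)
print(f"MI = {mi_obs:.4e}, shuffled mean = {np.mean(mi_shuffled):.4e}, p = {p_val:.3f}")
```

With independent mock labels, the observed mutual information is non-zero only because of the finite, discrete sample, and the shuffled values are statistically indistinguishable from it, which is the behaviour the abstract reports for the real data.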


2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but the values of categorical data are unordered, so these methods are not applicable to categorical data sets. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support and then integrates these weights along the rows to obtain the support of every row. A data object having the largest support is chosen as the initial center, and further centers are found among the objects at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method, and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
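A minimal sketch of the support-based selection described above is given below. It assumes a simple Hamming distance between categorical rows and breaks ties by order of appearance; the paper's exact distance and tie-breaking rules may differ.

```python
# Sketch of support-based initial center selection for categorical data
# (assumed Hamming distance; not the paper's exact implementation).
import numpy as np

def support_based_centers(data, k):
    """data: 2-D array of categorical values (object dtype); returns k row indices."""
    n, m = data.shape
    # Support of each unique value, per attribute (column-wise frequency).
    support = np.zeros((n, m))
    for j in range(m):
        values, counts = np.unique(data[:, j], return_counts=True)
        freq = dict(zip(values, counts))
        support[:, j] = [freq[v] for v in data[:, j]]
    row_support = support.sum(axis=1)          # integrate weights along each row
    centers = [int(np.argmax(row_support))]    # first center: row with the largest support
    first = data[centers[0]]
    # Remaining centers: rows farthest (Hamming distance) from the first center.
    hamming = (data != first).sum(axis=1).astype(float)
    hamming[centers[0]] = -1                   # never re-pick the first center
    for idx in np.argsort(-hamming)[: k - 1]:
        centers.append(int(idx))
    return centers

toy = np.array([["a", "x"], ["a", "y"], ["b", "y"], ["c", "z"]], dtype=object)
print(support_based_centers(toy, k=2))         # e.g. [1, 3] for this toy table
```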


Geophysics ◽  
2013 ◽  
Vol 78 (2) ◽  
pp. G15-G24 ◽  
Author(s):  
Pejman Shamsipour ◽  
Denis Marcotte ◽  
Michel Chouteau ◽  
Martine Rivest ◽  
Abderrezak Bouchedda

The flexibility of geostatistical inversions in geophysics is limited by the use of stationary covariances, which implicitly, and mostly for mathematical convenience, assume statistical homogeneity of the studied field. For fields showing sharp contrasts due, for example, to faults or folds, an approach based on the use of nonstationary covariances for cokriging inversion was developed. The approach was tested on two synthetic cases and one real data set. Inversion results based on the nonstationary covariance were compared with the results from the stationary covariance for two synthetic models. The nonstationary covariance better recovered the known synthetic models. With the real data set, the nonstationary assumption resulted in a better match with the known surface geology.
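To make the nonstationary idea concrete, the sketch below builds a covariance matrix of the Paciorek-Schervish type, in which the correlation length l(x) varies over the field. This is one common construction and is not necessarily the formulation used in the paper; the "fault" at x = 0.5 and the length scales are illustrative assumptions.

```python
# Sketch of a nonstationary Gaussian covariance with spatially varying correlation length;
# it reduces to a stationary Gaussian covariance when l(x) is constant.
import numpy as np

def nonstationary_gaussian_cov(x, lengths, sigma2=1.0):
    """x: (n, d) coordinates; lengths: (n,) local correlation lengths."""
    l2i = lengths[:, None] ** 2                 # l_i^2 as a column
    l2j = lengths[None, :] ** 2                 # l_j^2 as a row
    avg = 0.5 * (l2i + l2j)                     # averaged squared length scale
    d = x.shape[1]
    prefac = (lengths[:, None] * lengths[None, :]) ** (d / 2) / avg ** (d / 2)
    sqdist = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return sigma2 * prefac * np.exp(-sqdist / avg)

# Example: shorter correlation lengths on one side of a hypothetical "fault" at x = 0.5.
pts = np.random.default_rng(1).uniform(size=(200, 2))
local_l = np.where(pts[:, 0] < 0.5, 0.05, 0.2)
C = nonstationary_gaussian_cov(pts, local_l)
print(C.shape, np.allclose(C, C.T))             # symmetric (n, n) covariance matrix
```

A matrix of this kind can replace the stationary covariance in a cokriging system, letting the inversion honour sharp spatial changes in correlation structure.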


2012 ◽  
Vol 82 (9) ◽  
pp. 1615-1629 ◽  
Author(s):  
Bhupendra Singh ◽  
Puneet Kumar Gupta

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Suleman Nasiru

The need to develop generalizations of existing statistical distributions to make them more flexible in modeling real data sets is vital in parametric statistical modeling and inference. Thus, this study develops a new class of distributions, called the extended odd Fréchet family of distributions, for modifying existing standard distributions. Two special models, named the extended odd Fréchet Nadarajah-Haghighi and extended odd Fréchet Weibull distributions, are proposed using the developed family. The densities and hazard rate functions of the two special distributions exhibit different kinds of monotonic and nonmonotonic shapes. The maximum likelihood method is used to develop estimators for the parameters of the new class of distributions. The application of the special distributions is illustrated by means of a real data set. The results reveal that the special distributions developed from the new family can provide a reasonable parametric fit to the given data set compared with other existing distributions.
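As a concrete illustration of the maximum likelihood step, parameters of a parametric family can be estimated by numerically minimizing the negative log-likelihood. The sketch below uses a plain two-parameter Weibull as a stand-in density; the paper's extended odd Fréchet densities are not reproduced here, and the data are synthetic.

```python
# Generic sketch of maximum likelihood fitting by numerical optimisation
# (stand-in Weibull density; not the paper's extended odd Frechet models).
import numpy as np
from scipy.optimize import minimize

def weibull_neg_loglik(params, x):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    z = x / scale
    return -np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

rng = np.random.default_rng(7)
data = rng.weibull(1.5, size=500) * 2.0          # synthetic "real data set"
fit = minimize(weibull_neg_loglik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print("MLE (shape, scale):", fit.x)
```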


2019 ◽  
Vol 11 (16) ◽  
pp. 1886 ◽  
Author(s):  
Xinghui Zhao ◽  
Na Chen ◽  
Weifu Li ◽  
Shen ◽  
Peng

Used as input to Numerical Weather Prediction (NWP) models, Microwave Radiation Imager (MWRI) data have been widely distributed to the user community. With the development of remote sensing technology, improving the geolocation accuracy of MWRI data is required, and the first step is to estimate the geolocation error accurately. However, traditional methods such as the coastline inflection method (CIM) usually have the disadvantages of low accuracy and poor anti-noise ability. To overcome these limitations, this paper proposes a novel ℓp iterative closest point coastline inflection method (ℓp-ICP CIM). It assumes that the fields of view (FOVs) across the coastline can degenerate into a step function and employs an ℓp (0 ≤ p < 1) sparse regularization optimization model to solve for the coastline point. After estimating the coastline points, the ICP algorithm is employed to estimate the correspondence between the estimated coastline points and the real coastline. Finally, the geolocation error can be defined as the distance between the estimated coastline point and the corresponding point on the true coastline. Experimental results on simulated and real data sets show the effectiveness of our method over CIM. The accuracy of the geolocation error estimated by ℓp-ICP CIM is up to 0.1 pixel in more than 90% of cases. We also show that the distribution of brightness temperature near the coastline is more consistent with the real coastline and that the average geolocation error is reduced by 63% after geolocation error correction.
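The step-function idea behind the coastline point estimation can be sketched as follows. This is a plain least-squares step fit on a synthetic scan line, not the paper's ℓp (0 ≤ p < 1) sparse-regularization model or its ICP matching stage; the temperatures and noise level are assumptions.

```python
# Sketch: brightness temperatures of FOVs crossing a coastline approximate a step
# function, and the fitted step location gives the coastline point.
import numpy as np

def fit_step(tb):
    """Return the index where a step function best fits the 1-D profile tb."""
    n = len(tb)
    best_idx, best_cost = None, np.inf
    for k in range(1, n):                       # candidate step positions
        left, right = tb[:k], tb[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = k, cost
    return best_idx

# Synthetic scan line: ocean (cold) to land (warm) with noise; true coastline at FOV 30.
rng = np.random.default_rng(3)
profile = np.r_[np.full(30, 160.0), np.full(20, 270.0)] + rng.normal(0, 3, 50)
print("estimated coastline FOV index:", fit_step(profile))
```

Comparing the estimated coastline points with the true coastline, as the full method does via ICP, then yields the geolocation error.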


2005 ◽  
Vol 30 (4) ◽  
pp. 369-396 ◽  
Author(s):  
Eisuke Segawa

Multi-indicator growth models were formulated as special three-level hierarchical generalized linear models to analyze the growth of a trait latent variable measured by ordinal items. Items are nested within time points, and time points are nested within subjects. These models are special because they include a factor-analytic structure. The model can analyze not only data with item- and time-level missing observations, but also data with time points freely specified over subjects. Furthermore, features useful for longitudinal analyses were included: an "autoregressive error degree one" (AR(1)) structure for the trait residuals and estimated time scores. The approach is Bayesian, using Markov chain Monte Carlo, and the model is implemented in WinBUGS. The models are illustrated with two simulated data sets and one real data set with planned missing items within a scale.
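To make the model structure concrete, one plausible specification is written out below. It is a sketch assuming a cumulative-logit link, item loadings, item thresholds and normal random growth coefficients; the paper's exact parameterization may differ.

```latex
% Sketch of a multi-indicator growth model for ordinal items
% (assumed cumulative-logit form; not necessarily the paper's exact specification).
\begin{align}
  \Pr(Y_{itj} \le c) &= \operatorname{logit}^{-1}\!\bigl(\tau_{jc} - \lambda_j \theta_{it}\bigr)
      && \text{item } j \text{ at time } t \text{ for subject } i,\\
  \theta_{it} &= \alpha_i + \beta_i \, T_t + \varepsilon_{it}
      && \text{latent trait growth with estimated time scores } T_t,\\
  \varepsilon_{it} &= \phi\, \varepsilon_{i,t-1} + \nu_{it}
      && \text{AR(1) structure for the trait residuals},\\
  (\alpha_i, \beta_i) &\sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma}), \qquad
  \nu_{it} \sim \mathcal{N}(0, \sigma^2_\nu).
\end{align}
```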


Geophysics ◽  
2015 ◽  
Vol 80 (2) ◽  
pp. H13-H22 ◽  
Author(s):  
Saulo S. Martins ◽  
Jandyr M. Travassos

Most of the data acquisition in ground-penetrating radar is done along fixed-offset profiles, in which velocity is known only at isolated points in the survey area, at the locations of variable-offset gathers such as a common midpoint. We constructed sparse, heavily aliased, variable-offset gathers from several fixed-offset, collinear profiles. We interpolated those gathers to produce properly sampled counterparts, thus pushing the data beyond aliasing. The interpolation methodology estimated nonstationary, adaptive filter coefficients at all trace locations, including the positions corresponding to the missing traces, which were filled with zeroed traces. This is followed by an inversion problem that uses the previously estimated filter coefficients to insert the new, interpolated traces between the original ones. We extended this two-step strategy to data interpolation by employing a device in which filter coefficients from a denser variable-offset gather are used to interpolate the missing traces on a few independently constructed gathers. We applied the methodology to synthetic and real data sets, the latter acquired in the interior of the Antarctic continent. The variable-offset interpolated data opened the door to prestack processing, making feasible the production of a prestack time-migrated section and a 2D velocity model for the entire profile. Although we used a data set obtained in Antarctica, there is no reason the same methodology could not be used elsewhere.
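The two-step strategy (estimate filter coefficients, then invert for the missing traces) can be illustrated in one dimension. The sketch below is an assumption-laden simplification, not the authors' nonstationary adaptive-filter implementation: it estimates a single stationary prediction-error filter from a densely sampled signal (the analogue of the denser gather) and then fills a block of missing samples by least squares so that the filter's prediction error is minimized.

```python
# 1-D sketch of prediction-error-filter (PEF) based interpolation of missing samples.
import numpy as np

def estimate_pef(x, order):
    """Least-squares prediction filter a such that x[t] ~ sum_k a[k] * x[t-k-1]."""
    rows = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    return np.linalg.lstsq(rows, x[order:], rcond=None)[0]

def interpolate_missing(x, missing, a):
    """Solve for samples at 'missing' indices so the PEF residual is smallest."""
    order, n = len(a), len(x)
    pef = np.concatenate(([1.0], -a))                 # residual = conv(pef, x)
    F = np.zeros((n - order, n))
    for t in range(order, n):
        F[t - order, t - order:t + 1] = pef[::-1]     # reversed filter on each row
    known = np.setdiff1d(np.arange(n), missing)
    x_fill = x.copy()
    # Minimize || F_m x_m + F_k x_k ||^2 over the unknown samples x_m.
    rhs = -F[:, known] @ x[known]
    x_fill[missing] = np.linalg.lstsq(F[:, missing], rhs, rcond=None)[0]
    return x_fill

t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)
a = estimate_pef(clean, order=10)                     # "denser gather" supplies the filter
gaps = np.arange(90, 110)                             # a block of missing traces
observed = clean.copy(); observed[gaps] = 0.0         # zeroed traces at missing positions
recovered = interpolate_missing(observed, gaps, a)
print("max error in gap:", np.abs(recovered[gaps] - clean[gaps]).max())
```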


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed by means of stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection allows similar models to be distinguished from variable ones and the convergence of the inverted models toward the real impedance models to be assessed. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that MDS is a valuable tool for evaluating the convergence of the inverse methodology and the variability of the impedance models at each iteration of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
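The MDS projection step can be sketched as follows. This is a minimal illustration with mock impedance models, Euclidean distances and scikit-learn's MDS, not the authors' implementation; the iteration structure and noise levels are assumptions made only to show how convergence appears in the projected space.

```python
# Sketch: project simulated impedance models and the reference model into a 2-D
# metric space with MDS to visualize convergence across iterations (mock data).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(11)
n_cells = 500
true_model = rng.normal(9000, 500, n_cells)            # mock "real" AI model
models, labels = [], []
for it in range(1, 6):                                 # mock iterations converging to truth
    for _ in range(10):
        models.append(true_model + rng.normal(0, 1500 / it, n_cells))
        labels.append(it)
models.append(true_model); labels.append(0)            # include the reference model

X = np.vstack(models)
coords = MDS(n_components=2, dissimilarity="euclidean", random_state=0).fit_transform(X)
for it in sorted(set(labels)):
    pts = coords[np.array(labels) == it]
    print(f"iteration {it}: mean distance to projected truth = "
          f"{np.linalg.norm(pts - coords[-1], axis=1).mean():.1f}")
```

In the projected space, the cloud of simulated models shrinks toward the reference model as the iterations proceed, which is exactly the convergence diagnostic the abstract describes.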


Geophysics ◽  
1990 ◽  
Vol 55 (5) ◽  
pp. 527-538 ◽  
Author(s):  
E. Crase ◽  
A. Pica ◽  
M. Noble ◽  
J. McDonald ◽  
A. Tarantola

Nonlinear elastic waveform inversion has advanced to the point where it is now possible to invert real multiple‐shot seismic data. The iterative gradient algorithm that we employ can readily accommodate robust minimization criteria which tend to handle many types of seismic noise (noise bursts, missing traces, etc.) better than the commonly used least‐squares minimization criteria. Although there are many robust criteria from which to choose, we have tested only a few. In particular, the Cauchy criterion and the hyperbolic secant criterion perform very well in both noise‐free and noise‐added inversions of numerical data. Although the real data set, which we invert using the sech criterion, is marine (pressure sources and receivers) and is very much dominated by unconverted P waves, we can, for the most part, resolve the short wavelengths of both P impedance and S impedance. The long wavelengths of velocity (the background) are assumed known. Because we are deriving nearly all impedance information from unconverted P waves in this inversion, data acquisition geometry must have sufficient multiplicity in subsurface coverage and a sufficient range of offsets, just as in amplitude‐versus‐offset (AVO) inversion. However, AVO analysis is implicitly contained in elastic waveform inversion algorithms as part of the elastic wave equation upon which the algorithms are based. Because the real‐data inversion is so large—over 230,000 unknowns (340,000 when density is included) and over 600,000 data values—most statistical analyses of parameter resolution are not feasible. We qualitatively verify the resolution of our results by inverting a numerical data set which has the same acquisition geometry and corresponding long wavelengths of velocity as the real data, but has semirandom perturbations in the short wavelengths of P and S impedance.
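The robust criteria mentioned above can be written down in a few lines. The sketch below is illustrative only (unit scale, synthetic residuals; not the authors' inversion code): it compares the least-squares misfit with misfits derived from Cauchy and hyperbolic-secant residual distributions, showing how much less a large outlier such as a noise burst contributes under the robust criteria.

```python
# Sketch of robust misfit criteria versus least squares for a few residual values.
import numpy as np

def least_squares(r):
    return 0.5 * r ** 2

def cauchy(r, s=1.0):
    # Misfit derived from a Cauchy residual distribution.
    return np.log(1.0 + (r / s) ** 2)

def sech_criterion(r, s=1.0):
    # Misfit derived from a hyperbolic-secant residual distribution: -log sech = log cosh.
    return np.log(np.cosh(r / s))

residuals = np.array([0.1, 1.0, 5.0, 50.0])            # the last value mimics a noise burst
for name, f in [("L2", least_squares), ("Cauchy", cauchy), ("sech", sech_criterion)]:
    print(name, np.round(f(residuals), 2))
```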

