Estimation of the Tapered Gutenberg-Richter Distribution Parameters for Catalogs with Variable Completeness: An Application to the Atlantic Ridge Seismicity

2021, Vol 11 (24), pp. 12166
Author(s): Matteo Taroni, Jacopo Selva, Jiancang Zhuang

The use of the tapered Gutenberg-Richter distribution in earthquake source models is rapidly increasing, because it avoids having to define a hard threshold for the maximum magnitude. Here, we extend the classical maximum likelihood method for estimating the parameters of the tapered Gutenberg-Richter distribution to catalogs whose magnitude of completeness varies through time. Adopting a well-established technique based on asymptotic theory, we also estimate the uncertainties of the parameters. Unlike other estimation methods for catalogs with variable completeness, available for example for the classical truncated Gutenberg-Richter distribution, our approach does not require an assumption about the distribution of the number of events (usually the Poisson distribution). We test the methodology by checking the consistency of the parameter estimates on synthetic catalogs generated with multiple completeness levels. We then analyze the Atlantic ridge seismicity using the global centroid moment tensor catalog and find that our method better constrains the distribution parameters, because it uses more data than estimations based on a single completeness level. This leads to a sharp decrease in the uncertainties of the parameter estimates compared with existing methods based on a single, time-independent magnitude of completeness. It also allows subsets of events to be analyzed, deepening the data analysis. For example, separating normal and strike-slip events, we found that they have significantly different but well-constrained corner magnitudes. Without distinguishing by focal mechanism and considering all the events in the catalog, we instead obtain an intermediate value that is less constrained by the data, with an open confidence region.
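
A minimal sketch of the kind of estimator described above, assuming the Kagan parameterization of the tapered Gutenberg-Richter law and a hypothetical catalog in which each event carries its own completeness magnitude; the abstract does not give the implementation, so the synthetic data, variable names, and Nelder-Mead optimizer are illustrative choices only.

```python
import numpy as np
from scipy.optimize import minimize

def mw_to_moment(mw):
    """Moment magnitude -> scalar seismic moment in N*m."""
    return 10.0 ** (1.5 * np.asarray(mw) + 9.1)

def tapered_gr_nll(params, moments, thresholds):
    """Negative log-likelihood of the tapered Gutenberg-Richter law,
    f(M) = (beta/M + 1/Mc) * (Mt/M)**beta * exp((Mt - M)/Mc),
    where each event has its own completeness moment Mt (variable completeness)."""
    beta, corner_mag = params
    if beta <= 0:
        return np.inf
    corner = mw_to_moment(corner_mag)
    log_f = (np.log(beta / moments + 1.0 / corner)
             + beta * np.log(thresholds / moments)
             + (thresholds - moments) / corner)
    return -np.sum(log_f)

# hypothetical catalog: magnitudes plus a completeness level that improves with time
rng = np.random.default_rng(0)
mags = 5.0 + rng.exponential(0.4, size=2000)
mc = np.where(np.arange(mags.size) < 1000, 5.5, 5.0)
keep = mags >= mc
moments, thresholds = mw_to_moment(mags[keep]), mw_to_moment(mc[keep])

fit = minimize(tapered_gr_nll, x0=[0.7, 8.0], args=(moments, thresholds),
               method="Nelder-Mead")
beta_hat, corner_mag_hat = fit.x
# asymptotic uncertainties would come from inverting the numerical Hessian of the NLL
```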

2020, pp. 103-111
Author(s): Emad Abulrahman Mohammed Salih Al-Heety

Earthquakes occur on faults and also create new faults. They occur on normal, reverse and strike-slip faults. The aim of this work is to propose a new unified classification of shallow-depth earthquakes based on faulting style and to characterize each class. The characterization criteria include the maximum magnitude, focal depth, b-value, return period, and the relations between magnitude, focal depth and dip of the fault plane. The Global Centroid Moment Tensor (GCMT) catalog, covering the period from January 1976 to December 2017, is the data source. We selected only the shallow (depth less than 70 km), pure normal, strike-slip and reverse earthquakes (magnitude ≥ 5) and excluded oblique earthquakes. The majority of normal and strike-slip earthquakes occurred in the upper crust, while reverse earthquakes occurred throughout the thickness of the crust. The derived b-values for the three classes follow the trend b_normal > b_strike-slip > b_reverse. The mean return period for normal earthquakes was longer than that of strike-slip earthquakes, while reverse earthquakes had the shortest. The results document a relationship between magnitude and focal depth for normal earthquakes. A significant negative correlation between magnitude and dip class is reported for normal and reverse earthquakes. Negative and positive correlations between focal depth and dip class were recorded for normal and reverse earthquakes, respectively. The suggested classification of earthquakes provides significant information for understanding seismicity, seismotectonics, and seismic hazard analysis.
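
The abstract does not state which b-value estimator was used; a common choice, shown here only as a hedged stand-in on synthetic magnitudes grouped by faulting style, is Aki's maximum-likelihood estimator with the usual binning correction.

```python
import numpy as np

def aki_b_value(mags, m_min, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with the usual binning correction:
    b = log10(e) / (mean(M) - (m_min - dm/2)) for magnitudes M >= m_min."""
    mags = np.asarray(mags, dtype=float)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))

# hypothetical magnitude samples grouped by faulting style (illustrative only)
catalog = {
    "normal": np.random.default_rng(1).exponential(0.40, 2000) + 5.0,
    "strike-slip": np.random.default_rng(2).exponential(0.45, 2000) + 5.0,
    "reverse": np.random.default_rng(3).exponential(0.50, 2000) + 5.0,
}
for style, m in catalog.items():
    print(style, round(aki_b_value(m, m_min=5.0), 2))
```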


Sensors, 2019, Vol 19 (2), pp. 339
Author(s): Yongsong Li, Zhengzhou Li, Kai Wei, Weiqi Xiong, Jiangpeng Yu, ...

Noise estimation for image sensors is a key technique in many image pre-processing applications such as blind de-noising. Existing noise estimation methods for additive white Gaussian noise (AWGN) and Poisson-Gaussian noise (PGN) may underestimate or overestimate the noise level for heavily textured scene images. To cope with this problem, a novel homogeneous-block-based noise estimation method is proposed in this paper. First, the noisy image is transformed into a map of local gray statistic entropy (LGSE), and the weakly textured image blocks are selected as those with the largest LGSE values. Then, the Haar-wavelet-based local median absolute deviation (HLMAD) is presented to compute the local variance of the selected homogeneous blocks. Finally, the noise parameters are estimated by applying maximum likelihood estimation (MLE) to the local means and variances of the selected blocks. Extensive experiments on synthesized noisy images show that the proposed method not only estimates the noise of various scene images at different noise levels more accurately than the compared state-of-the-art methods, but also improves the performance of the blind de-noising algorithm.
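
A simplified sketch of the block-based idea, assuming a grayscale input; block selection by gradient energy and an ordinary least-squares fit of block variance against block mean stand in for the paper's LGSE selection and HLMAD/MLE steps, so this illustrates the mean-variance approach rather than the proposed method itself.

```python
import numpy as np

def estimate_pgn_params(image, block=16, keep_frac=0.2):
    """Rough Poisson-Gaussian noise estimate: var ~= a*mean + b on flat blocks."""
    h, w = image.shape
    means, variances, textures = [], [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block].astype(float)
            gy, gx = np.gradient(patch)
            textures.append(np.mean(gx ** 2 + gy ** 2))   # texture proxy
            means.append(patch.mean())
            variances.append(patch.var(ddof=1))
    order = np.argsort(textures)[: max(1, int(keep_frac * len(textures)))]
    mu, var = np.array(means)[order], np.array(variances)[order]
    a, b = np.polyfit(mu, var, deg=1)   # signal-dependent slope, Gaussian floor
    return a, b

# synthetic piecewise-constant image with Poisson-Gaussian-like noise
clean = np.repeat(np.linspace(50.0, 200.0, 16), 16)[None, :] * np.ones((256, 1))
noisy = clean + np.random.default_rng(0).normal(0.0, np.sqrt(0.5 * clean + 4.0))
print(estimate_pgn_params(noisy))   # slope ~ 0.5, intercept ~ 4.0, up to sampling error
```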


2019, Vol 16 (2), pp. 0395
Author(s): Khaleel et al.

This paper discusses the reliability R of the (2+1) cascade model when the stress and strength variables follow the inverse Weibull distribution with unknown scale parameter and known shape parameter. Six estimation methods (maximum likelihood, moment, least squares, weighted least squares, regression and percentile) are used to estimate the reliability. The six methods are compared in a simulation study implemented in MATLAB 2016 using two statistical criteria, the mean square error and the mean absolute percentage error; the maximum likelihood method is found to be the best of the six estimators.
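
The cascade reliability expression itself is not given in the abstract, so the sketch below only illustrates the building blocks under stated assumptions: the closed-form maximum-likelihood estimate of the inverse-Weibull scale when the shape is known, and a Monte Carlo estimate of the simple (non-cascade) stress-strength probability P(strength > stress).

```python
import numpy as np

def inv_weibull_scale_mle(x, shape):
    """Closed-form MLE of the inverse-Weibull scale when the shape is known.
    With F(x) = exp(-(scale/x)**shape), setting dlogL/dscale = 0 gives
    scale = (n / sum(x**-shape)) ** (1/shape)."""
    x = np.asarray(x, dtype=float)
    return (len(x) / np.sum(x ** (-shape))) ** (1.0 / shape)

def sample_inv_weibull(scale, shape, size, rng):
    """Inverse-transform sampling: X = scale * (-log U) ** (-1/shape)."""
    u = rng.uniform(size=size)
    return scale * (-np.log(u)) ** (-1.0 / shape)

rng = np.random.default_rng(0)
strength = sample_inv_weibull(scale=2.0, shape=3.0, size=100_000, rng=rng)
stress = sample_inv_weibull(scale=1.0, shape=3.0, size=100_000, rng=rng)
print(inv_weibull_scale_mle(strength, shape=3.0))   # close to the true scale 2.0
print(np.mean(strength > stress))                   # simple (non-cascade) reliability
```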


2014, Vol 11 (1)
Author(s): Felix Nwobi, Chukwudi Ugomma

In this paper we study different methods for estimating the parameters of the Weibull distribution. These methods are compared in terms of their fit using the mean square error (MSE) and the Kolmogorov-Smirnov (KS) criteria to select the best method. Goodness-of-fit tests show that the Weibull distribution is a good fit to the squared-returns series of weekly stock prices of Cornerstone Insurance PLC. Results show that the mean rank (MR) method is the best of the graphical and analytical procedures, while numerical simulation studies show that the maximum likelihood estimation method (MLE) significantly outperforms the other methods.
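
A hedged sketch of the two procedures the comparison hinges on, using a plain two-parameter Weibull and synthetic data in place of the Cornerstone Insurance squared-returns series: the mean-rank graphical fit (plotting position i/(n+1) followed by regression on the linearized CDF) and the scipy maximum-likelihood fit.

```python
import numpy as np
from scipy import stats

def weibull_mean_rank(x):
    """Mean-rank (graphical) Weibull fit: regress ln(-ln(1-F_i)) on ln(x_(i)),
    with F_i = i/(n+1) as the mean-rank plotting position."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    f = np.arange(1, n + 1) / (n + 1.0)
    y = np.log(-np.log(1.0 - f))
    slope, intercept = np.polyfit(np.log(x), y, 1)
    return slope, np.exp(-intercept / slope)   # (shape, scale)

# hypothetical squared-returns-like sample; compare with the scipy MLE fit
rng = np.random.default_rng(42)
data = stats.weibull_min.rvs(c=0.8, scale=0.02, size=500, random_state=rng)
print("mean-rank:", weibull_mean_rank(data))
print("MLE:", stats.weibull_min.fit(data, floc=0)[0::2])   # (shape, scale)
```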


Author(s): Hisham Mohamed Almongy, Ehab M. Almetwally

This paper discusses robust point estimation of the shape and scale parameters of the generalized exponential (GE) distribution from complete datasets containing various percentages of outliers. In the presence of outliers, classical methods such as maximum likelihood estimation (MLE), least squares (LS) and maximum product spacing (MPS) cannot reach the best estimator. To confirm this, these classical methods were applied to the data of this study and compared with non-classical estimation methods. The non-classical (robust) methods, namely least absolute deviations (LAD) and M-estimation (using the Huber (MH) and bisquare (MB) weights), were introduced to obtain the best estimation method for the parameters of the GE distribution. The comparison was carried out numerically using a Monte Carlo simulation study. Application to two real datasets confirmed that M-estimation is very suitable for estimating the GE parameters. We conclude that M-estimation with the Huber objective function is a suitable method for estimating the parameters of the GE distribution from a complete dataset in the presence of various percentages of outliers.
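
The exact M-estimation scheme is not specified in the abstract; as an illustration only, the sketch below fits the generalized exponential CDF F(x) = (1 - e^(-lambda*x))^alpha by minimizing a Huber loss on the empirical-minus-model CDF residuals, which conveys the robust-objective idea without claiming to reproduce the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize

def huber_loss(r, c=1.345):
    """Huber rho function: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def ge_cdf(x, alpha, lam):
    """Generalized exponential CDF (Gupta-Kundu): (1 - exp(-lam*x))**alpha."""
    return (1.0 - np.exp(-lam * x)) ** alpha

def robust_ge_fit(x):
    """Minimize the Huber loss of (empirical CDF - model CDF) residuals."""
    x = np.sort(np.asarray(x, dtype=float))
    ecdf = (np.arange(1, len(x) + 1) - 0.5) / len(x)

    def objective(params):
        alpha, lam = params
        if alpha <= 0 or lam <= 0:
            return np.inf
        return np.sum(huber_loss(ecdf - ge_cdf(x, alpha, lam)))

    return minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead").x

# GE(alpha=2, lambda=1.5) sample by inverse transform, with a few injected outliers
rng = np.random.default_rng(9)
sample = -np.log(1.0 - rng.uniform(size=300) ** (1.0 / 2.0)) / 1.5
sample[:15] += 20.0
print(robust_ge_fit(sample))
```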


2003, Vol 33 (7), pp. 1340-1347
Author(s): Lianjun Zhang, Kevin C Packard, Chuangmin Liu

Four commonly used estimation methods were employed to fit the three-parameter Weibull and Johnson's SB distributions to the tree diameter distributions of natural pure and mixed red spruce (Picea rubens Sarg.) – balsam fir (Abies balsamea (L.) Mill.) stands in northeastern North America. The results indicated that the Weibull and Johnson's SB distributions were, in general, equally suitable for modeling the diameter frequency distributions of this forest type, but their relative performance depended directly on the estimation method used. In this study, the linear regression methods for Johnson's SB gave the lowest mean Reynolds' error indices. Conditional maximum likelihood for Johnson's SB and maximum likelihood estimation for the Weibull produced comparable results. However, moment- and mode-based methods were not well suited to the observed diameter distributions, which were typically positively skewed, reverse-J, or mound shaped.
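
For readers who want to reproduce the flavor of this comparison, here is a minimal scipy sketch: maximum-likelihood fits of the three-parameter Weibull and Johnson's SB to a hypothetical diameter sample, scored with a Kolmogorov-Smirnov statistic in place of the Reynolds' error index used in the study.

```python
import numpy as np
from scipy import stats

# hypothetical diameter sample (cm); real stands would supply measured DBH values
rng = np.random.default_rng(7)
dbh = stats.weibull_min.rvs(c=1.6, loc=5.0, scale=12.0, size=400, random_state=rng)

# maximum-likelihood fits of the two candidate distributions
weib_c, weib_loc, weib_scale = stats.weibull_min.fit(dbh)
sb_a, sb_b, sb_loc, sb_scale = stats.johnsonsb.fit(dbh)

# compare the fits with a Kolmogorov-Smirnov statistic for each candidate
ks_weib = stats.kstest(dbh, "weibull_min", args=(weib_c, weib_loc, weib_scale)).statistic
ks_sb = stats.kstest(dbh, "johnsonsb", args=(sb_a, sb_b, sb_loc, sb_scale)).statistic
print(ks_weib, ks_sb)
```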


Complexity, 2021, Vol 2021, pp. 1-18
Author(s): Ehab M. Almetwally, Mohamed A. H. Sabry, Randa Alharbi, Dalia Alnagar, Sh. A. M. Mubarak, ...

This paper introduces a novel four-parameter Weibull distribution, the Marshall–Olkin alpha power Weibull (MOAPW) distribution. Some statistical properties of the distribution are examined. Based on Type-I and Type-II censored samples, maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation of the MOAPW distribution parameters are discussed. Numerical analyses using real datasets and Monte Carlo simulation are carried out to compare the various estimation methods. The superiority of the novel model over some well-known distributions is illustrated using two real datasets, showing that the MOAPW model can achieve better fits than other competitive distributions.
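
The MOAPW density is not reproduced in the abstract, so the sketch below only illustrates the Type-II censored likelihood structure that the estimation rests on, using a plain Weibull as a stand-in; the (n - r) log-survival term at the largest observed failure time is the part specific to Type-II censoring.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def type2_censored_nll(params, observed, n_total):
    """Negative log-likelihood under Type-II censoring: log-densities of the r
    observed failures plus (n - r) copies of the log-survival at the largest
    observed failure time."""
    c, scale = params
    if c <= 0 or scale <= 0:
        return np.inf
    r = len(observed)
    log_f = stats.weibull_min.logpdf(observed, c, scale=scale)
    log_s = stats.weibull_min.logsf(observed.max(), c, scale=scale)
    return -(np.sum(log_f) + (n_total - r) * log_s)

rng = np.random.default_rng(3)
full = np.sort(stats.weibull_min.rvs(c=2.0, scale=10.0, size=100, random_state=rng))
observed = full[:70]                      # Type-II: stop after the 70th failure
fit = minimize(type2_censored_nll, x0=[1.0, 5.0],
               args=(observed, 100), method="Nelder-Mead")
print(fit.x)                              # roughly recovers (2.0, 10.0)
```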


Author(s): Hisham Mohamed Almongy, Ehab Mohamed Almetwally, Amaal Elsayed Mubarak

In this paper, we introduce and study a new four-parameter extension of the Lomax distribution, the Marshall–Olkin alpha power Lomax (MOAPL) distribution. Some statistical properties of this distribution are discussed. Maximum likelihood estimation (MLE), maximum product spacing (MPS) and least squares (LS) estimation of the MOAPL distribution parameters are discussed. A numerical study using real data analysis and Monte Carlo simulation is performed to compare the different methods of estimation. The superiority of the new model over some well-known distributions is illustrated with real datasets from physics and economics: the MOAPL model produces better fits than the Marshall–Olkin Lomax, alpha power Lomax, Lomax, Marshall–Olkin alpha power exponential, Kumaraswamy-generalized Lomax, exponentiated Lomax and power Lomax distributions.
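
The MOAPL density is likewise not given here; as a hedged illustration of the maximum product spacing criterion mentioned above, the sketch below applies MPS to the ordinary Lomax distribution (whose CDF is standard), maximizing the mean log-spacing of the fitted CDF over the ordered sample.

```python
import numpy as np
from scipy.optimize import minimize

def lomax_cdf(x, alpha, lam):
    """Lomax CDF: 1 - (1 + x/lam)**(-alpha)."""
    return 1.0 - (1.0 + x / lam) ** (-alpha)

def mps_objective(params, x_sorted):
    """Negative mean log-spacing; MPS maximizes the geometric mean of the
    spacings D_i = F(x_(i)) - F(x_(i-1)), padded with F = 0 and F = 1."""
    alpha, lam = params
    if alpha <= 0 or lam <= 0:
        return np.inf
    cdf = np.concatenate(([0.0], lomax_cdf(x_sorted, alpha, lam), [1.0]))
    spacings = np.diff(cdf)
    if np.any(spacings <= 0):
        return np.inf
    return -np.mean(np.log(spacings))

rng = np.random.default_rng(5)
sample = np.sort(rng.pareto(3.0, size=300) * 2.0)   # Lomax(alpha=3, lam=2) sample
alpha_hat, lam_hat = minimize(mps_objective, x0=[1.0, 1.0],
                              args=(sample,), method="Nelder-Mead").x
print(alpha_hat, lam_hat)
```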


Author(s): A. S. Ogunsanya, E. E. E. Akarawak, W. B. Yahya

In this paper, we compare different parameter estimation methods for the two-parameter Weibull-Rayleigh distribution (W-RD), namely maximum likelihood estimation (MLE), least squares estimation (LSE) and three quartile estimators. Two of the quartile methods have been applied in the literature, while the third (Q1-M) is introduced in this work. The methods are applied to simulated data and compared using the error, the mean square error (MSE) and the total deviation (TD), also known as the sum of absolute error estimates (SAEE). The analytical results show that all the parameter estimation methods perform satisfactorily on Weibull-Rayleigh data, with the degree of accuracy determined by the sample size. The proposed quartile (Q1-M) method has the smallest total deviation and MSE. In addition, the quartile methods outperform MLE on the simulated data, and the proposed quartile method (Q1-M) has the added advantage of being simpler to use than MLE.
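
The Weibull-Rayleigh quartile estimators themselves (including the new Q1-M variant) are not spelled out in the abstract; for orientation only, here is the analogous closed-form quartile estimator for a plain two-parameter Weibull, obtained by matching the model CDF at the first and third sample quartiles.

```python
import numpy as np

def weibull_quartile_fit(x):
    """Closed-form quartile estimator for a two-parameter Weibull.
    Matching F(Q1) = 0.25 and F(Q3) = 0.75 in F(x) = 1 - exp(-(x/scale)**shape)
    gives shape = ln(ln4 / ln(4/3)) / ln(Q3/Q1) and scale = Q3 / (ln4)**(1/shape)."""
    q1, q3 = np.percentile(np.asarray(x, dtype=float), [25, 75])
    shape = np.log(np.log(4.0) / np.log(4.0 / 3.0)) / np.log(q3 / q1)
    scale = q3 / np.log(4.0) ** (1.0 / shape)
    return shape, scale

rng = np.random.default_rng(13)
print(weibull_quartile_fit(rng.weibull(1.8, 1000) * 3.0))   # roughly (1.8, 3.0)
```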


Author(s): Alexander Robitzsch

The Rasch model is one of the most prominent item response models. In this article, different item parameter estimation methods for the Rasch model are compared through a simulation study in which the type of ability distribution, the number of items, and the sample size were varied. It is shown that variants of joint maximum likelihood estimation and conditional likelihood estimation are competitive with marginal maximum likelihood estimation, and the efficiency losses of limited-information estimation methods are only modest. It can be concluded that in empirical studies using the Rasch model, the choice of estimation method has an almost negligible impact on the item parameters for most estimation methods. Interestingly, this sheds a somewhat more positive light on old-fashioned joint maximum likelihood and limited-information estimation methods.
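
As a concrete reference point for joint maximum likelihood in this setting, here is a hedged sketch of a basic JML routine for the Rasch model (alternating Newton steps, centered item difficulties, extreme response patterns removed); it is not the implementation compared in the article.

```python
import numpy as np

def rasch_jml(responses, n_iter=50):
    """Joint maximum likelihood for the Rasch model P(X=1) = sigmoid(theta - b).
    Alternates Newton steps for person abilities and item difficulties;
    difficulties are centered for identification, and persons with all-0 or
    all-1 patterns are dropped because their JML abilities are infinite."""
    x = np.asarray(responses, dtype=float)
    keep = (x.sum(axis=1) > 0) & (x.sum(axis=1) < x.shape[1])
    x = x[keep]
    theta = np.zeros(x.shape[0])
    b = np.zeros(x.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        theta += (x - p).sum(axis=1) / (p * (1.0 - p)).sum(axis=1)   # person step
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        b -= (x - p).sum(axis=0) / (p * (1.0 - p)).sum(axis=0)       # item step
        b -= b.mean()                                                # identification
    return theta, b

# simulate responses and recover item difficulties
rng = np.random.default_rng(11)
true_b = np.linspace(-2.0, 2.0, 20)
true_theta = rng.normal(size=1000)
probs = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
data = (rng.uniform(size=probs.shape) < probs).astype(int)
_, b_hat = rasch_jml(data)
print(np.round(b_hat - true_b, 2))   # recovery error per item
```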

