Classical Estimation of the Index Spmk and Its Confidence Intervals for Power Lindley Distributed Quality Characteristics

2020, Vol. 2020, pp. 1-17
Author(s): Ghadah Alomani, Refah Alotaibi, Sanku Dey, Mahendra Saha

The process capability index (PCI) has been introduced as a tool to aid in the assessment of process performance. Conventional PCIs perform well when the quality characteristic is normally distributed; however, when they are applied to nonnormally distributed processes, they often give inaccurate results. In this article, in order to estimate the PCI Spmk when the process follows the power Lindley distribution, seven classical methods of estimation are first considered, namely, the maximum likelihood, ordinary least squares, weighted least squares, Cramér–von Mises, maximum product of spacings, Anderson–Darling, and right-tail Anderson–Darling methods, and their performance is compared in terms of mean squared error. Next, three bootstrap confidence intervals (BCIs) of the PCI Spmk, namely, the standard bootstrap, percentile bootstrap, and bias-corrected percentile bootstrap, are considered and compared in terms of average width, coverage probability, and relative coverage. In addition, a new cost-effective PCI, Spmkc, is introduced by incorporating a tolerance cost function into the index Spmk. To evaluate the methods of estimation and the BCIs, a simulation study is carried out. The simulation results show that the maximum likelihood method outperforms its counterparts in terms of mean squared error, while the bias-corrected percentile bootstrap provides smaller confidence interval widths and higher relative coverage than the standard and percentile bootstraps across sample sizes. Finally, two real data examples are provided to investigate the performance of the proposed procedures.
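As a rough illustration of the computations involved, the sketch below fits the power Lindley model by maximum likelihood and builds a percentile bootstrap interval for Spmk. It assumes the parameterization of Ghitany et al. (2013) for the power Lindley distribution and the yield-based form Spmk = Φ⁻¹((1+p)/2) / (3√(1 + ((μ−T)/σ)²)) with p = F(USL) − F(LSL); the specification limits lsl, usl and the target are hypothetical placeholders, and sample moments are used for μ and σ as a simplification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Power Lindley log-density and CDF (parameterization of Ghitany et al. 2013,
# assumed here): f(x) = a*b^2/(b+1) * (1 + x^a) * x^(a-1) * exp(-b*x^a).
def pl_logpdf(x, a, b):
    return (np.log(a) + 2*np.log(b) - np.log(b + 1)
            + np.log1p(x**a) + (a - 1)*np.log(x) - b*x**a)

def pl_cdf(x, a, b):
    xa = x**a
    return 1.0 - (1.0 + b*xa/(b + 1.0))*np.exp(-b*xa)

def pl_mle(x):
    # Optimize over log-parameters so the search stays in the positive orthant.
    nll = lambda t: -np.sum(pl_logpdf(x, np.exp(t[0]), np.exp(t[1])))
    return np.exp(minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead").x)

def spmk(x, a, b, lsl, usl, target):
    p = pl_cdf(usl, a, b) - pl_cdf(lsl, a, b)        # process yield
    mu, sd = x.mean(), x.std(ddof=1)                 # sample moments, a simplification
    return norm.ppf((1 + p)/2) / (3*np.sqrt(1 + ((mu - target)/sd)**2))

def pboot_ci(x, lsl, usl, target, B=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    est = np.empty(B)
    for i in range(B):
        xb = rng.choice(x, size=len(x), replace=True)
        ab, bb = pl_mle(xb)
        est[i] = spmk(xb, ab, bb, lsl, usl, target)
    return tuple(np.quantile(est, [alpha/2, 1 - alpha/2]))  # percentile bootstrap CI

# Example: simulate power Lindley data (X = Y**(1/a) with Y ~ Lindley(b),
# drawn as the usual two-component gamma mixture), then estimate.
rng = np.random.default_rng(7)
n, a_true, b_true = 100, 2.0, 1.5
mix = rng.random(n) < b_true/(b_true + 1)
y = np.where(mix, rng.gamma(1.0, 1/b_true, n), rng.gamma(2.0, 1/b_true, n))
x = y**(1/a_true)
a_hat, b_hat = pl_mle(x)
print(spmk(x, a_hat, b_hat, lsl=0.2, usl=2.5, target=1.0))
print(pboot_ci(x, lsl=0.2, usl=2.5, target=1.0))
```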

Author(s): Farrukh Jamal, Christophe Chesneau

In this paper, a new family of polyno-expo-trigonometric distributions is presented and investigated. A special case based on the Weibull distribution, with three parameters, is considered as a statistical model for lifetime data. The estimation of the parameters is performed with the maximum likelihood method. A numerical simulation study verifies that the bias and the mean squared error of the maximum likelihood estimators tend to zero as the sample size increases. Three real-life datasets are then analyzed. We show that our model fits well in comparison with other well-known models in the literature.
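The kind of Monte Carlo check described above takes only a few lines of code. The sketch below runs it for a plain two-parameter Weibull as a stand-in, since the paper's polyno-expo-trigonometric density is not reproduced here; the shape parameter's bias and MSE should visibly shrink as n grows.

```python
import numpy as np
from scipy.stats import weibull_min

# Monte Carlo check that the bias and MSE of the MLE shrink as n grows,
# run on a plain two-parameter Weibull (shape c, scale) as a stand-in model.
true_c, true_scale, reps = 1.5, 2.0, 500
rng = np.random.default_rng(42)
for n in (50, 200, 800):
    est = np.array([
        weibull_min.fit(weibull_min.rvs(true_c, scale=true_scale, size=n,
                                        random_state=rng), floc=0)[0]
        for _ in range(reps)
    ])  # fitted shape parameter, location pinned at zero
    print(f"n={n:4d}  bias={est.mean() - true_c:+.4f}  "
          f"mse={np.mean((est - true_c)**2):.4f}")
```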


Author(s): Sofi Mudasir Ahad, Sheikh Parvaiz Ahmad, Sheikh Aasimeh Rehman

In this paper, Bayesian and non-Bayesian methods are used for parameter estimation of the weighted Rayleigh (WR) distribution. Posterior distributions are derived under informative and non-informative priors. The Bayes estimators and associated risks are obtained under different symmetric and asymmetric loss functions. Results are compared on the basis of posterior risk and mean squared error using simulated and real-life data sets. The study indicates that, for estimating the scale parameter of the weighted Rayleigh distribution, the entropy loss function under a Gumbel type II prior is preferable. Moreover, the Bayesian method of estimation, which attains the smallest mean squared error values, gives better results than the maximum likelihood method of estimation.
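For readers unfamiliar with how the choice of loss function changes the Bayes estimator, the generic sketch below computes the estimator for several standard losses from a set of posterior draws. The gamma draws stand in for whatever posterior the paper derives for the WR scale parameter, and the losses shown (squared error, entropy/Stein, precautionary, LINEX) are common choices rather than necessarily the paper's exact set.

```python
import numpy as np

# Bayes estimators of a positive (scale-type) parameter from posterior draws,
# one per loss function; each line notes the loss whose posterior expected
# value the estimator minimizes.
def bayes_estimates(theta, a=0.5):
    return {
        "squared error": theta.mean(),                    # posterior mean
        "entropy/Stein": 1.0/np.mean(1.0/theta),          # argmin E[t/th - log(t/th) - 1]
        "precautionary": np.sqrt(np.mean(theta**2)),      # argmin E[(t - th)**2 / t]
        "LINEX(a)":      -np.log(np.mean(np.exp(-a*theta)))/a,
    }

draws = np.random.default_rng(1).gamma(20.0, 0.1, size=10_000)  # stand-in posterior
for loss, est in bayes_estimates(draws).items():
    print(f"{loss:14s} {est:.4f}")
```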


2013, Vol. 816-817, pp. 493-496
Author(s): Lin Xue, Hong Cun Zhai

Conventional methods for locating near-field sources generally suffer performance degradation when the assumption of uniform spatial Gaussian noise does not hold. In this paper, we study the scenario of non-uniform spatial Gaussian noise. First, we construct the near-field signal model based on a planar sensor array and derive the maximum likelihood method for obtaining the azimuth and distance of sound sources. We then propose two fast algorithms, the stepwise-concentrated maximum likelihood (SML) method and the approximate maximum likelihood (AML) method, to reduce the high computational complexity of maximum likelihood localization. Simulation results show that the two proposed methods outperform the conventional maximum likelihood method, with lower computational complexity and smaller mean squared error for both azimuth and distance estimation.
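As a sanity-level illustration of maximum likelihood near-field localization, the sketch below grid-searches azimuth and range for a single narrowband source. It simplifies the paper's setting in two ways that should be stated plainly: it uses a uniform linear array rather than a planar one, and it assumes uniform noise, under which the concentrated ML criterion reduces to projecting the snapshots onto the candidate steering vector.

```python
import numpy as np

# Concentrated ML localization of one narrowband near-field source by grid
# search over azimuth and range, with a spherical-wavefront steering vector.
c, f, M, d, T = 343.0, 1000.0, 8, 0.1, 100      # speed, Hz, sensors, spacing, snapshots
pos = (np.arange(M) - (M - 1)/2)*d              # sensor x-coordinates

def steer(az, r):
    src = r*np.array([np.cos(az), np.sin(az)])
    dist = np.hypot(src[0] - pos, src[1])       # exact source-to-sensor distances
    return np.exp(-2j*np.pi*f*(dist - r)/c)

rng = np.random.default_rng(0)
true_az, true_r = np.deg2rad(60.0), 1.5
s = rng.standard_normal(T) + 1j*rng.standard_normal(T)
noise = 0.1*(rng.standard_normal((M, T)) + 1j*rng.standard_normal((M, T)))
X = np.outer(steer(true_az, true_r), s) + noise

# ||a||^2 = M, so the concentrated likelihood is proportional to |a^H X|^2 / M.
score, az_hat, r_hat = max(
    (np.sum(np.abs(steer(az, r).conj() @ X)**2)/M, az, r)
    for az in np.deg2rad(np.arange(10.0, 171.0, 1.0))
    for r in np.arange(0.5, 3.01, 0.05))
print(f"azimuth = {np.degrees(az_hat):.1f} deg, range = {r_hat:.2f} m")
```

The stepwise-concentrated and approximate ML algorithms in the paper exist precisely to avoid this kind of exhaustive two-dimensional search, whose cost grows with the product of the grid sizes.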


Author(s): Nadia Hashim Al-Noor, Shurooq A.K. Al-Sultany

In real situations, observations and measurements are often not exact numbers but are more or less imprecise, also called fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization techniques. In addition, the estimates of the parameters and of the reliability function are compared numerically through a Monte Carlo simulation study, in terms of their mean squared error and integrated mean squared error values, respectively.
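A minimal sketch of the Newton-Raphson route follows, using crisp (non-fuzzy) data; the paper's fuzzy-data likelihood additionally weights each observation by its membership function, which is omitted here. The parameterization F(x) = exp(−α x^(−β)) is assumed; α then has a closed form given β, and Newton-Raphson is run on the profile score for β, whose derivative is strictly negative by the Cauchy-Schwarz inequality, so the iteration is well behaved.

```python
import numpy as np

# Newton-Raphson MLE for the inverse Weibull, profiling out alpha.
def inv_weibull_mle(x, beta0=1.0, tol=1e-10, max_iter=100):
    n, lx = len(x), np.log(x)
    beta = beta0
    for _ in range(max_iter):
        w = x**(-beta)
        s0, s1, s2 = w.sum(), (w*lx).sum(), (w*lx**2).sum()
        g = n/beta - lx.sum() + n*s1/s0             # profile score for beta
        dg = -n/beta**2 - n*(s2*s0 - s1**2)/s0**2   # < 0 by Cauchy-Schwarz: stable NR
        step = g/dg
        beta -= step
        if abs(step) < tol:
            break
    alpha = n/(x**(-beta)).sum()                    # closed form given beta
    return alpha, beta

# Check on simulated data: if W ~ Weibull(shape=2), then 1/W is inverse
# Weibull with alpha = 1, beta = 2.
x = 1.0/np.random.default_rng(3).weibull(2.0, size=500)
print(inv_weibull_mle(x))   # should be near (1.0, 2.0)
```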


Soil Research, 2015, Vol. 53 (8), pp. 907
Author(s): David Clifford, Yi Guo

Given the wide variety of ways one can measure and record soil properties, it is not uncommon to have multiple overlapping predictive maps for a particular soil property. One is then faced with the challenge of choosing the best prediction at a particular point, either by selecting one of the maps or by combining them in some optimal manner. This question was recently examined in detail when Malone et al. (2014) compared four different methods for combining a digital soil mapping product with a disaggregation product based on legacy data. These authors also examined how to compute confidence intervals for the resulting map from the confidence intervals associated with the original input products. In this paper, we propose a new method for combining models called adaptive gating, inspired by the use of gating functions in mixtures of experts, a machine learning approach to forming hierarchical classifiers. We compare it with two standard approaches: inverse-variance weighting and a regression-based approach. One benefit of adaptive gating is that it allows the weights to vary with covariate information or across geographic space; as such, it explicitly takes full advantage of the spatial nature of the maps being blended. We also suggest a conservative method for combining confidence intervals. We show that the root mean squared error of predictions from the adaptive gating approach is similar to that of the standard approaches under cross-validation. However, under independent validation, adaptive gating outperforms the alternatives, and as such it warrants further study in other areas of application and further development to reduce its computational complexity.
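To make the baseline concrete, here is a minimal sketch of the inverse-variance weighting approach that adaptive gating is compared against; all numbers are hypothetical. Each input map supplies a prediction and a standard error at every cell, and the combined standard error assumes independent errors. One conservative option for the intervals, in the spirit of the paper's suggestion though not necessarily its exact rule, would instead take the envelope of the input confidence intervals at each cell.

```python
import numpy as np

# Inverse-variance weighting: at each cell, weights are proportional to
# 1/variance, so more certain maps contribute more to the blend.
def inverse_variance_blend(preds, ses):
    """preds, ses: arrays of shape (n_maps, n_cells)."""
    var = np.asarray(ses, dtype=float)**2
    w = (1.0/var)/(1.0/var).sum(axis=0, keepdims=True)   # weights sum to 1 per cell
    blend = (w*np.asarray(preds)).sum(axis=0)
    se = np.sqrt(1.0/(1.0/var).sum(axis=0))              # assumes independent errors
    return blend, se

p = np.array([[5.1, 4.8, 6.0, 5.5, 4.9],    # map 1 (e.g. digital soil mapping)
              [5.4, 4.6, 5.7, 5.9, 5.0]])   # map 2 (e.g. disaggregated legacy map)
s = np.array([[0.3, 0.5, 0.4, 0.6, 0.3],
              [0.5, 0.3, 0.3, 0.4, 0.4]])
blend, se = inverse_variance_blend(p, s)
print(np.round(blend, 2), np.round(se, 2))
```

Adaptive gating generalizes this by letting the weights depend on covariates or location rather than on the reported variances alone.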

