An Extended log-Lindley-G Family: Properties and Experiments in Repairable Data

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3108
Author(s):  
Ahmed M. T. Abd El-Bar ◽  
Willams B. F. da Silva ◽  
Abraão D. C. Nascimento

In this article, two new families of distributions are proposed: the generalized log-Lindley-G (GLL-G) and its counterpart, the GLL*-G. These families can be justified by their relation to the log-Lindley model, an important assumption for describing social and economic phenomena. Specific GLL models are introduced and studied. We show that the GLL density can be rewritten as a two-member linear combination of exponentiated G-densities and that, consequently, many of its mathematical properties, such as moment-based expressions, follow directly. A maximum likelihood estimation procedure for the GLL parameters is provided, and the behavior of the resulting estimates is evaluated through Monte Carlo experiments. An application to repairable data is presented. The results argue for the use of the exponential law as the basis for the GLL-G family.
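The Monte Carlo evaluation of maximum likelihood estimates described above can be illustrated on the exponential baseline the abstract favors (this is a generic sketch, not the GLL-G likelihood itself; the sample size, replication count, and true rate are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def mle_rate(sample):
    # Closed-form MLE of the exponential rate: lambda_hat = 1 / sample mean.
    return 1.0 / sample.mean()

# Monte Carlo experiment: draw many samples from Exp(rate = 2.0) and
# inspect the sampling behavior of the MLE, analogous to the evaluation
# the abstract describes for the GLL parameters.
true_rate = 2.0
estimates = np.array([
    mle_rate(rng.exponential(scale=1.0 / true_rate, size=200))
    for _ in range(1000)
])
mean_estimate = estimates.mean()   # should land close to the true rate
```

For a distribution without a closed-form MLE, such as the GLL models, the same experiment would replace `mle_rate` with a numerical likelihood maximizer.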

2010 ◽  
Vol 26 (6) ◽  
pp. 1846-1854 ◽  
Author(s):  
Mogens Fosgerau ◽  
Søren Feodor Nielsen

In many stated choice experiments researchers observe the random variables V_t, X_t, and Y_t = 1{U + δᵀX_t + ε_t < V_t}, t ≤ T, where δ is an unknown parameter and U and ε_t are unobservable random variables. We show that under weak assumptions the distributions of U and ε_t, as well as the unknown parameter δ, can be consistently estimated using a sieved maximum likelihood estimation procedure.


2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Fan Yang ◽  
Hu Ren ◽  
Zhili Hu

Maximum likelihood estimation is a widely used approach to parameter estimation. However, conventional algorithms make the estimation procedure for the three-parameter Weibull distribution difficult. Therefore, this paper proposes an evolutionary strategy, based on the maximum likelihood method, to explore good solutions. The maximization of the likelihood function is converted into an optimization problem, and an evolutionary algorithm is employed to obtain the optimal parameters of the likelihood function. Examples are presented to demonstrate the proposed method. The results show that the proposed method is suitable for parameter estimation of the three-parameter Weibull distribution.
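The idea of maximizing the three-parameter Weibull likelihood with an evolutionary search can be sketched using SciPy's differential evolution in place of the paper's specific evolutionary strategy (the data, parameter bounds, and true parameter values below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import weibull_min

# Synthetic data from a three-parameter Weibull (shape 1.5, location 2.0, scale 3.0).
rng = np.random.default_rng(0)
data = weibull_min.rvs(c=1.5, loc=2.0, scale=3.0, size=500, random_state=rng)

def neg_log_likelihood(theta):
    shape, loc, scale = theta
    if loc >= data.min():          # location must lie strictly below every observation
        return np.inf
    return -weibull_min.logpdf(data, c=shape, loc=loc, scale=scale).sum()

# Differential evolution searches the parameter space globally, avoiding the
# poor local optima that trouble Newton-type solvers on this likelihood.
result = differential_evolution(
    neg_log_likelihood,
    bounds=[(0.1, 10.0), (0.0, data.min() - 1e-6), (0.1, 10.0)],
    seed=1,
)
shape_hat, loc_hat, scale_hat = result.x
```

The location bound is capped just below the sample minimum because the Weibull support requires x > loc, which is precisely what makes gradient-based maximization awkward here.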


Author(s):  
RS Sinha ◽  
AK Mukhopadhyay

The primary crusher is essential equipment for comminuting mineral in processing plants, and any failure of its components accordingly hinders the performance of the whole plant. Therefore, to minimize sudden failures, analysis should be undertaken to improve the performance and operational reliability of crushers and their components. This paper analyzes the failure rates of a jaw crusher and its critical components in a mineral processing plant by fitting a two-parameter Weibull distribution, using goodness-of-fit tests and maximum likelihood estimation. Monte Carlo simulation, analysis of variance (ANOVA), and an artificial neural network are also applied. The two-parameter Weibull distribution is found to be the best-fitting distribution according to the Kolmogorov–Smirnov test, and the maximum likelihood method is used to estimate its shape and scale parameters. Monte Carlo simulation generates 40 sets of shape parameters, scale parameters, and times; these 40 sets of Weibull parameters are then evaluated to examine the failure rate, significant differences, and regression coefficients using ANOVA. An artificial neural network with a back-propagation algorithm is used to determine R2, which is compared with the ANOVA result.
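A two-parameter Weibull fit with a Kolmogorov–Smirnov goodness-of-fit check, as described above, might look like the following (the failure-time data here are synthetic stand-ins, not the crusher records from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical time-between-failures data in hours (synthetic, for illustration).
rng = np.random.default_rng(7)
tbf = stats.weibull_min.rvs(c=1.8, scale=120.0, size=60, random_state=rng)

# Maximum likelihood fit of the two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(tbf, floc=0)

# Kolmogorov-Smirnov test of the data against the fitted distribution.
ks_stat, p_value = stats.kstest(tbf, 'weibull_min', args=(shape, loc, scale))
```

One caveat worth noting: applying the KS test with parameters estimated from the same data makes the resulting p-value optimistic, so in practice it is used as a comparative fit criterion rather than a strict hypothesis test.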


1999 ◽  
Vol 11 (7) ◽  
pp. 1739-1768 ◽  
Author(s):  
Aapo Hyvärinen

Sparse coding is a method for finding a representation of data in which each of the components of the representation is only rarely significantly active. Such a representation is closely related to redundancy reduction and independent component analysis, and has some neurophysiological plausibility. In this article, we show how sparse coding can be used for denoising. Using maximum likelihood estimation of nongaussian variables corrupted by gaussian noise, we show how to apply a soft-thresholding (shrinkage) operator on the components of sparse coding so as to reduce noise. Our method is closely related to the method of wavelet shrinkage, but it has the important benefit over wavelet methods that the representation is determined solely by the statistical properties of the data. The wavelet representation, on the other hand, relies heavily on certain mathematical properties (like self-similarity) that may be only weakly related to the properties of natural data.
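The soft-thresholding (shrinkage) operator at the heart of this approach is simple to state; the sketch below applies it to a synthetic sparse signal in Gaussian noise (the spike-and-slab signal model and the threshold value are illustrative assumptions, not Hyvärinen's estimated sparse-coding representation):

```python
import numpy as np

def soft_threshold(y, t):
    # Shrink each component toward zero by t, zeroing anything smaller than t.
    # This is the MAP estimate of a Laplacian-distributed component observed
    # in additive Gaussian noise.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(1)

# Sparse "component" values: mostly exact zeros, occasionally large (spike-and-slab).
n = 10_000
clean = np.where(rng.random(n) < 0.1, rng.normal(scale=3.0, size=n), 0.0)
noisy = clean + rng.normal(scale=0.5, size=n)     # additive Gaussian noise

# Threshold of two noise standard deviations (an illustrative choice):
# small, noise-dominated components are zeroed; large ones are shrunk slightly.
denoised = soft_threshold(noisy, t=1.0)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

The gain comes entirely from the sparsity of the signal: most components are pure noise and are correctly set to zero, at the price of a small bias on the active components.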


2020 ◽  
Vol 224 (1) ◽  
pp. 337-339
Author(s):  
Matteo Taroni

SUMMARY In this short paper we show how to use the classical maximum likelihood estimation procedure for the b-value of the Gutenberg–Richter law with catalogues that have different levels of completeness. With a simple correction, namely subtracting the relevant completeness level from each magnitude, the classical approach becomes applicable. Moreover, this correction makes it possible to apply testing procedures originally developed for catalogues with a single completeness level to catalogues with different completeness levels as well.
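The correction can be sketched with Aki's classical estimator, b̂ = log₁₀(e) / mean(M − Mc), applied after subtracting each event's own completeness level (continuous-magnitude form, ignoring binning corrections; the synthetic two-level catalogue below is an illustrative assumption):

```python
import numpy as np

def b_value(magnitudes, completeness):
    # Aki's maximum-likelihood b-value estimator, with each magnitude reduced
    # by its own completeness level. `completeness` may be a scalar (the
    # classical single-level case) or a per-event array (the correction above).
    excess = np.asarray(magnitudes) - np.asarray(completeness)
    return np.log10(np.e) / excess.mean()

# Synthetic catalogue: Gutenberg-Richter magnitudes (exponential above the
# completeness level, with b = 1.0) under two different completeness levels.
rng = np.random.default_rng(3)
b_true = 1.0
beta = b_true * np.log(10)
mc = np.where(rng.random(5000) < 0.5, 2.0, 3.0)   # two completeness levels
mags = mc + rng.exponential(scale=1.0 / beta, size=5000)

b_hat = b_value(mags, mc)
```

After the subtraction the two sub-catalogues share a common completeness level of zero, which is exactly what lets the single-level machinery be reused.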


2020 ◽  
Vol 68 (6) ◽  
pp. 1896-1912
Author(s):  
Yijie Peng ◽  
Michael C. Fu ◽  
Bernd Heidergott ◽  
Henry Lam

A Simulation-Based Approach for Calibrating Stochastic Models

