Evidential Estimation of an Uncertain Mixed Exponential Distribution under Progressive Censoring

Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1106
Author(s):  
Kuang Zhou ◽  
Yimin Shi

In this paper, the evidential estimation method for the parameters of the mixed exponential distribution is considered when a sample is obtained from Type-II progressively censored data. Unlike traditional statistical inference methods for censored data from mixture models, we consider a very general setting in which there is uncertain information about the sub-class labels of units. The partially specified label information, as well as the censored data, are represented in a unified frame by mass functions within the theory of belief functions. The evidential likelihood function is then derived from the completely observed failures and the uncertain information included in the data, and an optimization method using the evidential expectation maximization algorithm (E2M) is introduced. A general form of the maximum likelihood estimates (MLEs) in the sense of the evidential likelihood, named maximal evidential likelihood estimates (MELEs), can be obtained. Finally, some Monte Carlo simulations are conducted. The results show that the proposed estimation method can incorporate more information than traditional EM algorithms, which confirms the value of using uncertain labels for censored data from finite mixture models.
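
As a point of reference, the sketch below shows a plain classical EM for a two-component mixed exponential distribution under progressive Type-II censoring, not the authors' evidential E2M: it treats all sub-class labels as completely unknown and therefore uses none of the uncertain label information the paper exploits. The component count, starting values, and convergence tolerance are our own choices.

```python
# Classical-EM baseline (not the paper's E2M) for a K-component exponential
# mixture fitted to progressively Type-II censored data.
import numpy as np

def em_mixed_exponential(x, R, K=2, n_iter=200, tol=1e-8, seed=0):
    """x: observed failure times; R[i]: units right-censored (withdrawn) at x[i]."""
    x, R = np.asarray(x, dtype=float), np.asarray(R, dtype=float)
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)
    lam = 1.0 / (np.mean(x) * rng.uniform(0.5, 1.5, K))    # rough starting rates
    for _ in range(n_iter):
        # E-step: responsibilities for the observed failures ...
        dens = pi * lam * np.exp(-np.outer(x, lam))         # shape (m, K)
        r = dens / dens.sum(axis=1, keepdims=True)
        # ... and component weights for the R[i] units censored at x[i]
        surv = pi * np.exp(-np.outer(x, lam))
        w = surv / surv.sum(axis=1, keepdims=True)
        # Expected lifetime of a censored unit in component k is x_i + 1/lam_k
        # (memoryless property of the exponential distribution).
        n_k = r.sum(axis=0) + (R[:, None] * w).sum(axis=0)
        t_k = (r * x[:, None]).sum(axis=0) + \
              (R[:, None] * w * (x[:, None] + 1.0 / lam)).sum(axis=0)
        pi_new, lam_new = n_k / n_k.sum(), n_k / t_k        # M-step updates
        converged = np.max(np.abs(lam_new - lam)) < tol
        pi, lam = pi_new, lam_new
        if converged:
            break
    return pi, lam
```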

1998 ◽  
Vol 9 ◽  
pp. 167-217 ◽  
Author(s):  
A. Ruiz ◽  
P. E. Lopez-de-Teruel ◽  
M. C. Garrido

This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the computational properties of finite mixture models, conjugate families, and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be obtained directly, without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is handled consistently in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression, and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
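
One way to see the "no numerical integration" property is with an ordinary Gaussian mixture over the joint vector (x, y): conditioning on x yields another mixture with re-weighted components and closed-form Gaussian conditionals. The sketch below illustrates only this generic property; it is not the paper's specific mixture family or its extended EM.

```python
# Conditional p(y | x) of a joint Gaussian mixture, obtained in closed form.
import numpy as np
from scipy.stats import multivariate_normal

def conditional_mixture(x_obs, weights, means, covs, dx):
    """First dx coordinates are x; returns (weights, means, covs) of p(y | x=x_obs)."""
    new_w, new_means, new_covs = [], [], []
    for pi_k, mu, S in zip(weights, means, covs):
        mu_x, mu_y = mu[:dx], mu[dx:]
        Sxx, Sxy = S[:dx, :dx], S[:dx, dx:]
        Syx, Syy = S[dx:, :dx], S[dx:, dx:]
        gain = Syx @ np.linalg.solve(Sxx, np.eye(dx))       # Syx Sxx^{-1}
        new_means.append(mu_y + gain @ (x_obs - mu_x))      # conditional mean
        new_covs.append(Syy - gain @ Sxy)                   # conditional covariance
        new_w.append(pi_k * multivariate_normal.pdf(x_obs, mu_x, Sxx))
    new_w = np.array(new_w)
    return new_w / new_w.sum(), new_means, new_covs         # again a mixture
```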


2013 ◽  
Vol 753-755 ◽  
pp. 2887-2891
Author(s):  
Cheng Dong Wei ◽  
Huan Qi Wei ◽  
Fu Wang ◽  
Wen Jun Wu

The mixed exponential distribution is an important statistical model in lifetime data analysis. In this paper, we obtain Bayesian estimators of the mixed exponential distribution with Type-I censored data, using conjugate prior distributions under the squared error loss function, and we prove that these Bayesian estimators are admissible.
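
For intuition, the non-mixture special case is fully conjugate: a single exponential rate with a Gamma(a, b) prior and Type-I censoring at time tau has posterior Gamma(a + d, b + T), where d is the number of observed failures and T the total time on test, and the Bayes estimator under squared error loss is the posterior mean. The sketch below implements only this simplified case (the mixture treated in the paper requires data augmentation); the prior hyperparameters are placeholders.

```python
# Bayes estimate of an exponential rate under squared error loss,
# conjugate Gamma(a, b) prior, Type-I censoring at time tau.
def bayes_rate_type1(times, tau, a=1.0, b=1.0):
    """times: all lifetimes; values >= tau are treated as censored at tau."""
    d = sum(1 for t in times if t < tau)        # number of observed failures
    T = sum(min(t, tau) for t in times)         # total time on test
    return (a + d) / (b + T)                    # posterior mean of the rate
```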


2008 ◽  
Vol 17 (1) ◽  
pp. 33-51 ◽  
Author(s):  
Jeroen K Vermunt

An extension of latent class (LC) and finite mixture models is described for the analysis of hierarchical data sets. As is typical in multilevel analysis, the dependence between lower-level units within higher-level units is dealt with by assuming that certain model parameters differ randomly across higher-level observations. One of the special cases is an LC model in which group-level differences in the logit of belonging to a particular LC are captured with continuous random effects. Other variants are obtained by including random effects in the model for the response variables rather than for the LCs. The variant that receives most attention in this article is an LC model with discrete random effects: higher-level units are clustered based on the likelihood of their members belonging to the various LCs. This yields a model with mixture distributions at two levels, namely the group and the subject level. The model is illustrated with three rather different empirical examples. The appendix describes an adapted version of the expectation-maximization algorithm that can be used for maximum likelihood estimation and provides setups for estimating the multilevel LC model with generally available software.
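
A minimal sketch of the two-level likelihood with discrete group-level classes is given below, assuming binary items and conditionally independent responses; it is our own illustration, not the software setups provided in the article's appendix.

```python
# Log-likelihood of a multilevel latent class model with M discrete
# group-level classes, C subject-level classes, and J binary items.
import numpy as np

def loglik_multilevel_lc(groups, omega, p, theta):
    """groups: list of (n_g, J) 0/1 arrays; omega: (M,) group-class probabilities;
    p: (M, C) subject-class probabilities per group class; theta: (C, J) item probabilities."""
    total = 0.0
    for Y in groups:
        # P(subject's responses | subject class c), shape (n_g, C)
        like_c = np.exp(Y @ np.log(theta).T + (1 - Y) @ np.log(1 - theta).T)
        # P(subject's responses | group class m) = sum_c p[m, c] * like_c, shape (n_g, M)
        like_m = like_c @ p.T
        # Group contribution: sum_m omega[m] * prod_i like_m[i, m].
        # (For clarity only; a real implementation would accumulate these
        # terms in log space to avoid underflow in large groups.)
        total += np.log(np.sum(omega * np.prod(like_m, axis=0)))
    return total
```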


2019 ◽  
Vol 3 (2) ◽  
pp. 64
Author(s):  
Setyo Wira Rizki ◽  
Shantika Martha

This research analyzes a case study of cancer patients with censored data using Bayesian methodology. Three types of loss function are considered in the Bayesian estimation: the squared error loss function (self), the linear exponential loss function (lelf), and the general entropy loss function (gelf). The Pareto survival model is selected to represent the data. The posterior distribution is constructed by combining the Pareto likelihood with a prior distribution; an exponential distribution is chosen as the prior, describing the character of the Pareto parameter. The posterior distribution is then used to derive the Bayesian estimators under the three loss functions. The resulting estimates under Bayesian self, Bayesian lelf, and Bayesian gelf are 3.79, 3.78, and 3.90, respectively. From these estimates, the corresponding hazard and survival functions can be determined. The results show that all survival values under the Bayesian approaches are lower than the empirical survival values; this is taken as evidence of greater reliability, because the prior specifies the parameter more precisely than before. The hazard function has the same shape under all approaches, and the hazard rates decrease along with the survival values, which show the same behavior. The curves drop sharply after the first data point, owing to the heavy-tailed character of the Pareto distribution. The MSEs of the parameter estimates under Bayesian self, lelf, and gelf are 1.3 × 10^-2, 1.2 × 10^-2, and 0, respectively, while the MSEs of the survival estimates are 10^-4, 1.1 × 10^-4, and 3 × 10^-5, respectively. It is concluded that the Bayesian gelf gives the best approximation.
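
For reference, the three Bayes estimators named above can be computed from posterior draws as in the generic sketch below; the loss-function constants a and c are placeholders rather than values taken from the article.

```python
# Bayes estimators of a parameter under three loss functions, from posterior draws.
import numpy as np

def bayes_estimators(theta_draws, a=1.0, c=1.0):
    theta = np.asarray(theta_draws, dtype=float)
    est_self = theta.mean()                                  # squared error loss: posterior mean
    est_lelf = -np.log(np.mean(np.exp(-a * theta))) / a      # linear exponential (LINEX) loss
    est_gelf = np.mean(theta ** (-c)) ** (-1.0 / c)          # general entropy loss
    return est_self, est_lelf, est_gelf
```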


Filomat ◽  
2019 ◽  
Vol 33 (15) ◽  
pp. 4753-4767
Author(s):  
Khalil Masmoudi ◽  
Afif Masmoudi

In this paper, we introduce finite mixture models with singular multivariate normal components. These models are useful when the observed data involve collinearities, that is, when the covariance matrices are singular. They are also useful when the covariance matrices are ill-conditioned; in that case, classical approaches may lead to numerical instabilities and inaccurate estimates. Hence, an extension of the expectation maximization algorithm, with a complete proof, is proposed to derive the maximum likelihood estimators and cluster the data instances for mixtures of singular multivariate normal distributions. The accuracy of the proposed algorithm is then demonstrated through several numerical experiments. Finally, we discuss the application of the proposed distribution to financial asset return modeling and portfolio selection.
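
The key ingredient for such mixtures is a density for the singular normal component, obtained by replacing the inverse and determinant of the covariance matrix with its Moore-Penrose pseudo-inverse and pseudo-determinant (the product of the nonzero eigenvalues). The sketch below shows that log-density only, on which the E-step responsibilities of an EM would be built; it is our own illustration, not the authors' algorithm.

```python
# Log-density of a (possibly singular) multivariate normal, evaluated with
# the pseudo-inverse and pseudo-determinant of the covariance matrix.
import numpy as np

def singular_mvn_logpdf(x, mu, sigma, eps=1e-10):
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    vals, vecs = np.linalg.eigh(sigma)
    keep = vals > eps                                       # numerical rank of sigma
    rank = int(keep.sum())
    log_pdet = np.sum(np.log(vals[keep]))                   # pseudo-determinant (log)
    pinv = (vecs[:, keep] / vals[keep]) @ vecs[:, keep].T   # Moore-Penrose pseudo-inverse
    maha = float(d @ pinv @ d)
    return -0.5 * (rank * np.log(2 * np.pi) + log_pdet + maha)
```

For comparison, scipy.stats.multivariate_normal with allow_singular=True evaluates the same density.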


Risks ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 115
Author(s):  
Despoina Makariou ◽  
Pauline Barrieu ◽  
George Tzougas

The key purpose of this paper is to present an alternative viewpoint for combining expert opinions based on finite mixture models. Moreover, the components of the mixture are not necessarily assumed to come from the same parametric family. This approach enables the agent to make informed decisions about the uncertain quantity of interest in a flexible manner that accounts for multiple sources of heterogeneity in the opinions expressed by the experts: the parametric family, the parameters of each component density, and the mixing weights. Finally, the proposed models are employed for numerically computing quantile-based risk measures in a collective decision-making context.
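
A minimal sketch of the idea, with arbitrary placeholder components and weights: pool two expert opinions from different parametric families into a mixture, then invert the mixture CDF numerically to obtain a quantile-based risk measure such as value-at-risk.

```python
# Pooled expert opinions as a finite mixture with components from different
# parametric families; value-at-risk via numerical inversion of the mixture CDF.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

experts = [stats.lognorm(s=0.9, scale=np.exp(7.0)),   # expert 1: lognormal opinion
           stats.gamma(a=2.0, scale=800.0)]           # expert 2: gamma opinion
weights = np.array([0.6, 0.4])                        # placeholder mixing weights

def mixture_cdf(x):
    return float(np.dot(weights, [d.cdf(x) for d in experts]))

def value_at_risk(level=0.99, upper=1e7):
    # Quantile of the pooled distribution: root of F(x) - level on a bracket.
    return brentq(lambda x: mixture_cdf(x) - level, 1e-9, upper)

print(round(value_at_risk(0.99), 2))
```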


Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1510
Author(s):  
Alaa H. Abdel-Hamid ◽  
Atef F. Hashem

In this article, the tampered failure rate model is used in partially accelerated life testing. A non-decreasing time function, often called a "time transformation function", is proposed to tamper the failure rate under design conditions. Different types of the proposed function, which satisfy sufficient conditions for being accelerating functions, are investigated. A baseline failure rate of the exponential distribution is considered. Some point estimation methods, as well as approximate confidence intervals, for the parameters involved are discussed based on generalized progressively hybrid censored data. The determination of the optimal stress change time is discussed under two different optimality criteria. A real dataset is employed to illustrate the theoretical results discussed in this article. Finally, a Monte Carlo simulation study is carried out to examine the performance of the estimation methods and the optimality criteria.
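
Under the tampered failure rate model with an exponential baseline, the hazard equals the baseline rate before the stress change time and is multiplied by the time transformation function afterwards, so the survival function only requires an integral of that function beyond the change point. The sketch below is a minimal illustration with a constant transformation function as a placeholder; it is not the estimation procedure of the article.

```python
# Survival function of a tampered failure rate (TFR) model with an
# exponential baseline rate lam and stress change time tau.
import numpy as np
from scipy.integrate import quad

def tfr_survival(t, lam, tau, a=lambda u: 2.0):
    """S(t) = exp(-lam * t) for t <= tau; for t > tau the hazard becomes
    lam * a(t), so S(t) = exp(-lam * tau - lam * integral of a(u) from tau to t)."""
    if t <= tau:
        return np.exp(-lam * t)
    extra, _ = quad(a, tau, t)          # integral of the tampering factor past tau
    return np.exp(-lam * tau - lam * extra)
```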

