mixing distribution
Recently Published Documents


TOTAL DOCUMENTS

92
(FIVE YEARS 14)

H-INDEX

17
(FIVE YEARS 1)

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3069
Author(s):  
Emilio Gómez-Déniz ◽  
Yuri A. Iriarte ◽  
Yolanda M. Gómez ◽  
Inmaculada Barranco-Chamorro ◽  
Héctor W. Gómez

In this paper, a modified exponentiated family of distributions is introduced. The new model is built from a continuous parent cumulative distribution function and depends on a shape parameter. Its most relevant characteristics are obtained: the probability density function, quantile function, moments, stochastic ordering, the Poisson mixture with the proposed family as the mixing distribution, order statistics, tail behavior and parameter estimates. We highlight the particular model based on the classical exponential distribution, which is an alternative to the exponentiated exponential, gamma and Weibull distributions. A simulation study and a real application are presented. It is shown that the proposed family of distributions is of interest to applied areas such as economics, reliability and finance.
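The Poisson-mixture construction mentioned in the abstract is easy to illustrate numerically: draw a rate from the mixing distribution, then a Poisson count given that rate. The sketch below uses an exponential mixing density purely as a stand-in for the paper's family (the choice of mixing distribution here is an assumption, not the authors' model); any mixing distribution with positive variance yields an overdispersed marginal.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_mixture_sample(n, rate_sampler):
    """Draw n counts from a Poisson mixture: lambda ~ mixing dist, X | lambda ~ Poisson(lambda)."""
    lam = rate_sampler(n)
    return rng.poisson(lam)

# Illustration with an exponential mixing distribution of mean 2
x = poisson_mixture_sample(100_000, lambda n: rng.exponential(2.0, size=n))
print(x.mean(), x.var())  # the sample variance clearly exceeds the sample mean
```

With an exponential mixing density the marginal count distribution is geometric, so the overdispersion is visible immediately in the sample moments.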


Author(s):  
Chenghao Shan ◽  
Weidong Zhou ◽  
Yefeng Yang ◽  
Hanyu Shan

A new robust Kalman filter (KF) based on a mixing distribution is presented to address the filtering problem for a linear system with measurement loss (ML) and heavy-tailed measurement noise (HTMN). A new Student's t-inverse-Wishart-Gamma (STIWG) mixing distribution is derived to model the HTMN more rationally. By employing a discrete Bernoulli random variable (DBRV), the measurement likelihood of the double mixing distribution is converted from a weighted sum to an exponential product, and a hierarchical Gaussian state-space model (HGSSM) is thereby established. Finally, the system state, the intermediate random variables (IRVs) of the new STIWG distribution, and the DBRV are estimated simultaneously using the variational Bayesian (VB) method. A numerical simulation experiment indicates that the proposed filter outperforms existing algorithms in handling ML and HTMN.
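The heavy-tailed modelling in such filters rests on the classical Gaussian scale-mixture representation of the Student's t distribution: a Gamma-distributed precision mixed into a Gaussian yields a t marginal. A minimal sketch of that representation only (not the authors' full STIWG construction):

```python
import numpy as np

rng = np.random.default_rng(1)
nu, n = 3.0, 500_000

# Student's t via a Gaussian scale mixture:
# w ~ Gamma(nu/2, rate nu/2), then x | w ~ N(0, 1/w) is t-distributed with nu dof
w = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)
x = rng.normal(0.0, 1.0 / np.sqrt(w))

t_ref = rng.standard_t(nu, size=n)
qx, qt = np.quantile(x, 0.95), np.quantile(t_ref, 0.95)
print(qx, qt)  # both near the t(3) 95% quantile, about 2.35
```

The same device, with richer mixing laws, is what lets VB filters treat outlier-contaminated noise as conditionally Gaussian.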


Author(s):  
Navin Kashyap ◽  
Manjunath Krishnapur

Abstract We show, by an explicit construction, that a mixture of univariate Gaussian densities with variance $1$ and means in $[-A,A]$ can have $\varOmega (A^2)$ modes. This disproves a recent conjecture of Dytso et al. (2020, IEEE Trans. Inf. Theory, 66, 2006–2022) who showed that such a mixture can have at most $O(A^{2})$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in ${\mathbb{R}}^{d}$, with identity covariances and means inside ${[-A,A]}^{d}$, that has $\varOmega (A^{2d})$ modes.
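Mode counts in small instances of this problem can be checked numerically. The sketch below counts modes of a unit-variance Gaussian mixture by locating sign changes of the density's discrete derivative on a grid (the grid bounds and component means are illustrative choices, not the paper's construction):

```python
import numpy as np

def mixture_density(x, means, weights):
    # Unit-variance Gaussian mixture density evaluated at the points x
    z = x[:, None] - np.asarray(means)[None, :]
    return (np.asarray(weights) * np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)).sum(axis=1)

def count_modes(means, weights, lo=-10.0, hi=10.0, n=20_001):
    xs = np.linspace(lo, hi, n)
    f = mixture_density(xs, means, weights)
    d = np.diff(f)
    # a mode is a sign change of the derivative from positive to negative
    return int(np.sum((d[:-1] > 0) & (d[1:] < 0)))

print(count_modes([-4.0, 0.0, 4.0], [1/3, 1/3, 1/3]))  # well-separated means: 3 modes
print(count_modes([-0.5, 0.5], [0.5, 0.5]))            # close means merge into 1 mode
```

For unit variances, two equal-weight components merge into a single mode once their means are less than 2 apart, which is why packing Ω(A²) modes into [-A, A] requires a careful choice of weights.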


Author(s):  
A Meytrianti ◽  
S Nurrohmah ◽  
M Novita

The Poisson distribution is a common distribution for modelling count data under the assumption that the mean and variance are equal (equidispersion). In practice, most count data have a mean smaller than the variance (overdispersion), and the Poisson distribution cannot be used to model such data. Thus, several alternative distributions have been introduced to solve this problem. One of them is the Shanker distribution, which has only one parameter. Since the Shanker distribution is continuous, it cannot be used to model count data directly. Therefore, a new distribution is proposed: the Poisson-Shanker distribution, obtained by mixing the Poisson and Shanker distributions, with the Shanker distribution as the mixing distribution. The result is a one-parameter mixture distribution that can be used to model overdispersed count data. In this paper, we show that the Poisson-Shanker distribution is unimodal and overdispersed, has an increasing hazard rate, and is right-skewed. The first four raw moments and central moments are obtained. Maximum likelihood is used to estimate the parameter, and the solution can be found by numerical iteration. A real data set is used to illustrate the proposed distribution. The behavior of the estimator is also examined by numerical simulation over several parameter values and sample sizes. The results show that the MSE and bias of the estimated parameter theta increase as the parameter value rises for a fixed sample size n, and decrease as n rises for a fixed parameter value.
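The mixture pmf can be evaluated by numerically integrating the Poisson pmf against the mixing density. The sketch below assumes the standard one-parameter Shanker density f(lam) = theta^2/(theta^2+1) * (theta + lam) * exp(-theta*lam) (an assumption on our part; the parameter value is illustrative) and confirms the overdispersion property:

```python
import numpy as np

def poisson_shanker_pmf(theta, k_max=60, lam_max=60.0, n=20_001):
    """Poisson-Shanker pmf by numerically mixing Poisson(lambda) over the Shanker density."""
    lam = np.linspace(1e-12, lam_max, n)
    dx = lam[1] - lam[0]
    # assumed one-parameter Shanker density
    mixing = theta**2 / (theta**2 + 1) * (theta + lam) * np.exp(-theta * lam)
    # log Poisson pmf for numerical stability (log k! via cumulative sums)
    logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, k_max + 1)))))
    ks = np.arange(k_max + 1)[:, None]
    log_pois = ks * np.log(lam)[None, :] - lam[None, :] - logfact[:, None]
    return (np.exp(log_pois) * mixing[None, :]).sum(axis=1) * dx

pmf = poisson_shanker_pmf(theta=1.5)
k = np.arange(pmf.size)
mean = (k * pmf).sum()
var = (k**2 * pmf).sum() - mean**2
print(mean, var)  # variance exceeds the mean: overdispersion
```

Because the mixing density has positive variance, the marginal variance is the marginal mean plus the variance of the mixing distribution, so overdispersion holds for every theta.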


2021 ◽  
Vol 11 (06) ◽  
pp. 977-992
Author(s):  
Calvin B. Maina ◽  
Patrick G. O. Weke ◽  
Carolyne A. Ogutu ◽  
Joseph A. M. Ottieno

2021 ◽  
Vol 11 (06) ◽  
pp. 963-976
Author(s):  
Calvin B. Maina ◽  
Patrick G. O. Weke ◽  
Carolyne A. Ogutu ◽  
Joseph A. M. Ottieno

2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Anthony M. Orlando ◽  
Rahul Dhanda

It is interesting to note that the expected value of the log-likelihood function is an entropy. This note shows that there is an exact relationship between the mixture log-likelihood function (ln LM) and the sum of the mixing distribution entropy (HM) and the mixture density entropy (HD). Ln LM is seen to be a function of exactly four Shannon entropies, each a unique measure of uncertainty. The method, known as mixtures of linear models (MLM), is a form of empirical Bayes which uses a non-informative uniform prior and generates both confidence intervals and p-values that clinicians and regulatory agencies can use to evaluate scientific evidence. An example based on allergic rhinitis symptom scores is given, showing how easy it is to assess the fit of the model and evaluate the results of the trial.
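The basic entropy–log-likelihood link is easy to verify numerically for a single distribution: the expected log-density under the true model equals the negative differential entropy. A minimal check for a Gaussian (the distribution choice is illustrative, not the paper's MLM setup):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)

# average log-likelihood of the sample under the true density N(0, sigma^2)
avg_loglik = np.mean(-0.5 * (x / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

# closed-form differential entropy of N(0, sigma^2)
entropy = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(avg_loglik, -entropy)  # both approximately -2.112
```

The paper's identity extends this idea: the mixture log-likelihood decomposes into entropies of the mixing distribution and the mixture density.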


2020 ◽  
Vol 4 ◽  
pp. 33-42
Author(s):  
Binod Kumar Sah ◽  
A. Mishra

Background: A mixture distribution arises when some or all parameters of a distribution vary according to a mixing distribution. A generalised exponential-Lindley distribution (GELD) was obtained by Mishra and Sah (2015). In this paper, the generalised exponential-Lindley mixture of the generalised Poisson distribution (GELMGPD) is obtained by mixing the generalised Poisson distribution (GPD) of Consul and Jain (1973) with the GELD. In the proposed distribution, the GPD is the original distribution and the GELD is the mixing distribution. The generalised exponential-Lindley mixture of the Poisson distribution (GELMPD), obtained by Sah and Mishra (2019), is a particular case of the GELMGPD. Materials and Methods: The GELMGPD is a compound distribution obtained using the theory of continuous mixtures of the generalised Poisson distribution of Consul and Jain (1973). In this mixing process, the GPD plays the role of the original distribution and the GELD is the mixing distribution. Results: The probability mass function (pmf) and the first four moments about the origin of the GELMGPD have been obtained. The method of moments is discussed for estimating the parameters of the GELMGPD. The distribution has been fitted to a number of discrete data sets which are negative binomial in nature. Its P-value has been compared with those of the PLD of Sankaran (1970) and the GELMPD of Sah and Mishra (2019) for similar data sets. Conclusion: The P-value of the GELMGPD is found to be greater than that of the PLD and the GELMPD in every case. Hence, it is expected to be a better alternative to the PLD of Sankaran and the GELMPD of Sah and Mishra for similar discrete data sets which are negative binomial in nature. It is also observed that the GELMGPD gives a much more significant result when the value of the relevant parameter is negative.
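The generalized Poisson pmf of Consul and Jain that underlies this mixture has the well-known form P(X = x) = theta*(theta + x*lam)^(x-1) * exp(-(theta + x*lam)) / x!, with mean theta/(1-lam) and variance theta/(1-lam)^3. A short sketch evaluating it and confirming the overdispersion that motivates such mixtures (the parameter values are illustrative):

```python
import math

def gpd_pmf(x, theta, lam):
    """Consul-Jain generalized Poisson pmf."""
    return theta * (theta + x * lam) ** (x - 1) * math.exp(-(theta + x * lam)) / math.factorial(x)

theta, lam = 2.0, 0.3
pmf = [gpd_pmf(x, theta, lam) for x in range(100)]
mean = sum(x * p for x, p in enumerate(pmf))
var = sum(x**2 * p for x, p in enumerate(pmf)) - mean**2
print(mean, theta / (1 - lam))       # both approximately 2.857
print(var, theta / (1 - lam) ** 3)   # both approximately 5.831; variance > mean for lam > 0
```

Mixing this already-overdispersed pmf over a continuous law such as the GELD produces a marginal with even heavier overdispersion, which is why such compounds fit negative-binomial-like data well.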


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2159
Author(s):  
Francisco-José Vázquez-Polo ◽  
Miguel-Ángel Negrín-Hernández ◽  
María Martel-Escobar

In meta-analysis, the existence of between-sample heterogeneity introduces model uncertainty, which must be incorporated into the inference. We argue that an alternative way to measure this heterogeneity is to cluster the samples and then determine the posterior probability of the cluster models. The meta-inference is obtained as a mixture of the meta-inferences for all the cluster models, where the mixing distribution is given by the posterior model probabilities. When there are few studies, the number of cluster configurations is manageable, and the meta-inferences can be drawn with Bayesian model averaging (BMA) techniques. Although this topic has been relatively neglected in the meta-analysis literature, the inference thus obtained accurately reflects the cluster structure of the samples used. In this paper, illustrative examples are given and analysed using real binary data.
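The mixing step described here is standard Bayesian model averaging: each cluster model contributes its posterior, weighted by its posterior model probability. A minimal sketch with hypothetical numbers (the model probabilities and per-model posterior summaries below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical posterior summaries for three cluster configurations
post_model_prob = np.array([0.6, 0.3, 0.1])   # assumed posterior model probabilities
post_means = np.array([0.20, 0.35, 0.50])     # assumed per-model posterior means of the effect
post_vars = np.array([0.01, 0.02, 0.04])      # assumed per-model posterior variances

# BMA: the meta-inference is the mixture of the model-specific posteriors
bma_mean = np.sum(post_model_prob * post_means)
# law of total variance for the mixture posterior
bma_var = np.sum(post_model_prob * (post_vars + post_means**2)) - bma_mean**2
print(bma_mean, bma_var)  # 0.275 and about 0.0261
```

Note that the mixture variance exceeds the average within-model variance: the spread of the per-model means is exactly the between-model heterogeneity the paper argues should enter the inference.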


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 974
Author(s):  
Małgorzata Łazęcka ◽  
Jan Mielniczuk

We consider a nonparametric Generative Tree Model and discuss the problem of selecting active predictors for the response in this scenario. We investigate two popular information-based selection criteria, Conditional Infomax Feature Extraction (CIFE) and Joint Mutual Information (JMI), both derived as approximations of the Conditional Mutual Information (CMI) criterion. We show that CIFE and JMI may behave differently from CMI, resulting in different orders in which predictors are chosen in the variable selection process. Explicit formulae for CMI and its two approximations in the generative tree model are obtained. As a byproduct, we establish expressions for the entropy of a multivariate Gaussian mixture and its mutual information with the mixing distribution.
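The last quantity, the mutual information between a Gaussian mixture and its mixing distribution, can be estimated by Monte Carlo via I(X; Z) = H(X) - H(X | Z), since each conditional entropy is Gaussian and closed-form. A sketch for a two-component, unit-variance, one-dimensional mixture (means and weights are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
means, weights = np.array([-3.0, 3.0]), np.array([0.5, 0.5])

def mixture_logpdf(x):
    comp = -0.5 * (x[:, None] - means) ** 2 - 0.5 * np.log(2 * np.pi)
    return np.log(np.exp(comp) @ weights)

# Monte Carlo estimate of the mixture entropy: H(X) = -E[log p(X)]
z = rng.choice(2, size=500_000, p=weights)
x = rng.normal(means[z], 1.0)
h_mix = -mixture_logpdf(x).mean()

h_cond = 0.5 * np.log(2 * np.pi * np.e)   # entropy of each unit-variance component
mi = h_mix - h_cond                        # I(X; Z) = H(X) - H(X | Z)
h_z = -np.sum(weights * np.log(weights))   # entropy of the mixing distribution
print(mi, h_z)  # for well-separated means, I(X; Z) approaches H(Z) = log 2
```

The bound I(X; Z) <= H(Z) is tight exactly when the components barely overlap, which is the regime that makes such closed-form expressions useful for comparing CIFE, JMI and CMI.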

