Bayesian Inference for Finite Mixture Regression Model Based on Non-Iterative Algorithm

Mathematics, 2021, Vol 9(6), pp. 590
Author(s): Ang Shan, Fengkai Yang

Finite mixture normal regression (FMNR) models are widely used to investigate the relationship between a response variable and a set of explanatory variables drawn from several unknown latent homogeneous groups. However, the classical EM algorithm and Gibbs sampling used to fit this model have several weaknesses. In this paper, a non-iterative sampling algorithm for fitting the FMNR model is proposed from a Bayesian perspective. The procedure generates independent and identically distributed samples from the posterior distributions of the parameters and produces more reliable estimates than the EM algorithm and Gibbs sampling. Simulation studies are conducted to illustrate the performance of the algorithm, with supporting results. Finally, a real data set is analyzed to show the usefulness of the methodology.
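For orientation, the likelihood that both the EM algorithm and the non-iterative sampler target can be fitted, in its simplest form, with a textbook EM. The sketch below is that plain baseline (not the paper's non-iterative Bayesian sampler); the design matrix `X`, response `y`, and component count `K` are assumed inputs.

```python
# A minimal EM sketch for a finite mixture of normal regressions
# (illustrative baseline only, not the paper's non-iterative sampler).
import numpy as np
from scipy.stats import norm

def fmnr_em(X, y, K=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(K, p))      # per-component regression coefficients
    sigma = np.full(K, y.std())         # per-component error standard deviations
    pi = np.full(K, 1.0 / K)            # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k])
                         for k in range(K)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then update the weights
        for k in range(K):
            w = r[:, k]
            beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            resid = y - X @ beta[k]
            sigma[k] = np.sqrt((w * resid**2).sum() / w.sum())
        pi = r.mean(axis=0)
    return beta, sigma, pi
```

Because this EM depends on its starting values and only returns point estimates, it illustrates exactly the weaknesses that motivate a sampler producing i.i.d. posterior draws.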




2020, Vol 15, pp. 42-51
Author(s): Shou-Jen Chang-Chien, Wajid Ali, Miin-Shen Yang

Clustering is a method for analyzing grouped data. Circular data arise in many applications, such as wind directions and the departure directions of migrating birds or animals. The expectation-maximization (EM) algorithm on mixtures of von Mises distributions is popularly used for clustering circular data. In general, however, the EM algorithm is sensitive to initialization, is not robust to outliers, and requires the number of clusters to be specified a priori. In this paper, we consider a learning-based scheme for EM and then propose a learning-based EM algorithm on mixtures of von Mises distributions for clustering grouped circular data. The proposed clustering method needs no initialization, is robust to outliers, and finds the number of clusters automatically. Several numerical and real data sets are used to compare the proposed algorithm with existing methods. Experimental results and comparisons demonstrate the effectiveness and superiority of the proposed learning-based EM algorithm.
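For readers unfamiliar with the baseline, a plain EM for a K-component von Mises mixture can be sketched as follows, using the closed-form circular-mean update and a standard approximation for the concentration parameter. The angle vector `theta` (in radians) and a fixed `K` are assumptions; none of the learning-based robustness of the proposed method is included.

```python
# A plain EM sketch for a K-component von Mises mixture (the paper's
# learning-based EM adds robustness and automatic selection of K).
import numpy as np
from scipy.stats import vonmises

def kappa_approx(rbar):
    # Standard piecewise approximation to the inverse of A(kappa) = I1/I0
    if rbar < 0.53:
        return 2 * rbar + rbar**3 + 5 * rbar**5 / 6
    if rbar < 0.85:
        return -0.4 + 1.39 * rbar + 0.43 / (1 - rbar)
    return 1 / (rbar**3 - 4 * rbar**2 + 3 * rbar)

def vonmises_mixture_em(theta, K=3, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-np.pi, np.pi, K)   # mean directions
    kappa = np.ones(K)                   # concentrations
    pi = np.full(K, 1.0 / K)             # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibilities under each von Mises component
        dens = np.stack([pi[k] * vonmises.pdf(theta, kappa[k], loc=mu[k])
                         for k in range(K)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted circular mean and concentration per component
        for k in range(K):
            C = (r[:, k] * np.cos(theta)).sum()
            S = (r[:, k] * np.sin(theta)).sum()
            mu[k] = np.arctan2(S, C)
            kappa[k] = kappa_approx(np.sqrt(C**2 + S**2) / r[:, k].sum())
        pi = r.mean(axis=0)
    return mu, kappa, pi
```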



2021, Vol 2021, pp. 1-15
Author(s): Javeria Khaleeq, Muhammad Amanullah, Zahra Almaspoor

In biological data, skewed distributions are often approximated by the log-normal regression model (LNRM). Traditional estimation techniques for the LNRM are sensitive to unusual observations, which can strongly affect the model analysis and lead to imprecise conclusions. To overcome this issue, we develop diagnostic measures based on local influence to identify such influential observations in the LNRM under censoring. The proposed measures are derived by perturbing the case weights, the response, and the explanatory variables. Furthermore, we also consider the one-step Newton-Raphson method and the generalized Cook's distance. A Monte Carlo simulation study and an application to real data illustrate the developed approaches.
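To make the diagnostics idea concrete, the sketch below computes a classical global-influence measure (Cook's distance) for an uncensored log-normal regression fitted by least squares on log(y). It is only an illustrative baseline, not the censored local-influence measures derived in the paper; `X` and `y` are assumed inputs.

```python
# A simplified influence-diagnostics sketch for an uncensored log-normal
# regression: fit OLS on log(y) and compute Cook's distance per observation.
import numpy as np

def lognormal_cooks_distance(X, y):
    z = np.log(y)                              # log-normal model: log(y) is Gaussian
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)      # hat matrix
    h = np.diag(H)
    beta = np.linalg.solve(X.T @ X, X.T @ z)   # OLS fit on the log scale
    resid = z - X @ beta
    s2 = resid @ resid / (n - p)               # residual variance estimate
    # Cook's distance: large values flag observations that move the fit
    return (resid**2 / (p * s2)) * (h / (1 - h)**2)
```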



2016, Vol 46(3), pp. 779-799
Author(s): Cuihong Yin, X. Sheldon Lin

The Erlang mixture model has been widely used in modeling insurance losses due to its desirable distributional properties. In this paper, we consider the problem of efficient estimation of the Erlang mixture model. We present a new thresholding penalty function and a corresponding EM algorithm to estimate model parameters and to determine the order of the mixture. Using simulation studies and a real data application, we demonstrate the efficiency of the EM algorithm.
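As a reference point, an EM for an Erlang mixture with user-fixed integer shapes and a common scale can be written in a few lines. The sketch below omits the thresholding penalty, so the order of the mixture is not selected automatically; a loss sample `x` and a shape grid `shapes` are assumed.

```python
# A bare-bones EM sketch for an Erlang mixture with fixed integer shapes and a
# common scale theta (no thresholding penalty: the user chooses the shape grid).
import numpy as np
from scipy.stats import gamma

def erlang_mixture_em(x, shapes=(1, 2, 5, 10), n_iter=300):
    shapes = np.asarray(shapes)
    K = len(shapes)
    alpha = np.full(K, 1.0 / K)               # mixing weights
    theta = x.mean() / shapes.mean()          # common scale parameter
    for _ in range(n_iter):
        # E-step: responsibility of each Erlang component for each loss
        dens = np.stack([alpha[k] * gamma.pdf(x, shapes[k], scale=theta)
                         for k in range(K)], axis=1)
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of the weights and the common scale
        alpha = z.mean(axis=0)
        theta = x.sum() / (z @ shapes).sum()
    return alpha, theta
```

The thresholding penalty in the paper effectively shrinks small weights in `alpha` to zero, which is how the order of the mixture gets determined rather than fixed in advance.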



2015, Vol 45(3), pp. 729-758
Author(s): Roel Verbelen, Lan Gong, Katrien Antonio, Andrei Badescu, Sheldon Lin

We discuss how to fit mixtures of Erlangs to censored and truncated data by iteratively using the EM algorithm. Mixtures of Erlangs form a very versatile, yet analytically tractable, class of distributions, making them suitable for loss modeling purposes. The effectiveness of the proposed algorithm is demonstrated on simulated data as well as real data sets.
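Censoring and truncation enter such fits through the mixture's density and survival function. A hedged sketch of the resulting observed-data log-likelihood for right-censored, left-truncated losses is shown below; the parameters `alpha`, `shapes`, and `theta` are assumed to come from an Erlang-mixture fit such as the sketch above, and the paper maximizes this kind of objective iteratively with EM rather than merely evaluating it.

```python
# Observed-data log-likelihood of an Erlang mixture under right censoring and
# left truncation: uncensored points contribute the density, censored points
# the survival probability, and truncation rescales by P(X > trunc_lower).
import numpy as np
from scipy.stats import gamma

def censored_truncated_loglik(x, censored, alpha, shapes, theta, trunc_lower=0.0):
    shapes = np.asarray(shapes)
    pdf = lambda v: sum(a * gamma.pdf(v, r, scale=theta) for a, r in zip(alpha, shapes))
    sf = lambda v: sum(a * gamma.sf(v, r, scale=theta) for a, r in zip(alpha, shapes))
    contrib = np.where(censored, sf(x), pdf(x))
    return np.sum(np.log(contrib) - np.log(sf(trunc_lower)))
```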



2017, Vol 40(1), pp. 45-64
Author(s): Fatma Zehra Doğru, Olcay Arslan

In this study, we propose a robust mixture regression procedure based on the skew t distribution to model heavy-tailed and/or skewed errors in a mixture regression setting. Using the scale mixture representation of the skew t distribution, we give an expectation-maximization (EM) algorithm to compute the maximum likelihood (ML) estimates of the parameters of interest. The performance of the proposed estimators is demonstrated by a simulation study and a real data example.
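The role of the scale-mixture representation is easiest to see in the symmetric special case: with Student-t errors (no skewness), the EM reduces to iteratively reweighted least squares with outlier-downweighting factors. The sketch below implements that simplified variant with fixed degrees of freedom `nu`; it is not the authors' skew-t algorithm.

```python
# A simplified EM sketch for mixture regression with symmetric Student-t
# errors (a no-skewness special case of the skew-t model); the degrees of
# freedom nu are held fixed for brevity.
import numpy as np
from scipy.stats import t as t_dist

def t_mixture_regression_em(X, y, K=2, nu=4.0, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(K, p))
    sigma = np.full(K, y.std())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities under location-scale t densities
        dens = np.stack([pi[k] * t_dist.pdf((y - X @ beta[k]) / sigma[k], nu) / sigma[k]
                         for k in range(K)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        for k in range(K):
            d2 = ((y - X @ beta[k]) / sigma[k]) ** 2
            u = (nu + 1.0) / (nu + d2)          # scale-mixture weights: outliers get small u
            w = r[:, k] * u
            # M-step: weighted least squares with combined weights r * u
            beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            sigma[k] = np.sqrt((w * (y - X @ beta[k]) ** 2).sum() / r[:, k].sum())
        pi = r.mean(axis=0)
    return beta, sigma, pi
```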



2020
Author(s): Mark Britten-Jones




Author(s): Moritz Berger, Gerhard Tutz

A flexible semiparametric class of models is introduced that offers an alternative to classical regression models for count data, such as the Poisson and negative binomial models, as well as to more general models accounting for excess zeros that are also based on fixed distributional assumptions. The model lets the data themselves determine the distribution of the response variable but, in its basic form, uses a parametric term to specify the effects of the explanatory variables. In addition, an extended version is considered in which the effects of covariates are specified nonparametrically. The proposed model and traditional models are compared in simulations and in several real data applications from the areas of health and social science.
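One plausible way to realize such a construction (an assumption about the model class, not necessarily the authors' exact specification) is to expand each count into binary "exceeds level r" indicators, give every level its own intercept so that the data shape the response distribution, and keep a single parametric covariate term. A minimal sketch of that expansion is:

```python
# Hedged sketch: expand counts into binary transition indicators so that a
# single binary GLM with level-specific intercepts plus covariates can be
# fitted. (One plausible reading of a semiparametric count model, not
# necessarily the authors' exact construction.)
import numpy as np

def expand_counts(X, y, max_level=None):
    # Each observation with count y contributes one binary row per level
    # r = 0, ..., min(y, max_level - 1): the label is 1 if the count goes
    # beyond r and 0 if it stops exactly at r.
    max_level = int(y.max()) if max_level is None else max_level
    rows, levels, labels = [], [], []
    for xi, yi in zip(X, y):
        for r in range(min(int(yi) + 1, max_level)):
            rows.append(xi)
            levels.append(r)
            labels.append(int(yi > r))
    level_dummies = np.eye(max_level)[np.array(levels)]   # level-specific intercepts
    return np.hstack([level_dummies, np.array(rows)]), np.array(labels)

# Usage sketch: X_exp, z = expand_counts(X, y); fitting any binary GLM
# (e.g. logistic regression) on (X_exp, z) then yields the level intercepts,
# which play the role of the "data-determined" response distribution, plus
# the parametric covariate effects.
```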


