MCMC joint separation for the hidden Markov fields with particles

2020 ◽  
Author(s):  
Kevin Williams ◽  
Warren Washer ◽  
Brian Rees ◽  
Agustin Lott

In this contribution, we consider the problem of the blind separation of noisy instantaneously mixed images. The images are modeled by hidden Markov fields with unknown parameters. Given the observed images, we give a Bayesian formulation and propose to solve the resulting data augmentation problem by implementing a Markov chain Monte Carlo (MCMC) procedure. We separate the unknown variables into two categories: (1) the parameters of interest, which are the mixing matrix, the noise covariance, and the parameters of the source distributions; (2) the hidden variables, which are the unobserved sources and the unobserved pixel classification labels. In the stationary regime, the proposed algorithm provides samples drawn from the posterior distributions of all the variables involved in the problem, leading to flexibility in the choice of the cost function. We discuss and characterize some non-identifiability problems and degeneracies of the parameter likelihood, and the behavior of the MCMC algorithm in these cases. Finally, we show results for both synthetic and real data to illustrate the feasibility of the proposed solution.
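
A heavily simplified sketch of such a data-augmentation MCMC is given below: independent Gaussian sources stand in for the paper's hidden Markov field priors, the classification labels are omitted, and all names (A, S, sigma2) are illustrative rather than taken from the paper. The sampler alternates between drawing the hidden sources and drawing the mixing matrix and noise variance from their full conditionals.

```python
# Toy Gibbs sampler for noisy instantaneous mixing X = A S + N.
# Simplifications vs. the paper: i.i.d. Gaussian sources replace the hidden
# Markov field priors, labels are dropped, and a Jeffreys-type prior is used
# for the noise variance.
import numpy as np

rng = np.random.default_rng(0)
m, T = 2, 500                                   # number of sources, number of pixels
A_true = np.array([[1.0, 0.6], [0.4, 1.0]])
S_true = rng.normal(size=(m, T))
X = A_true @ S_true + 0.1 * rng.normal(size=(m, T))    # observed mixtures

A, sigma2 = np.eye(m), 1.0                      # initial state of the chain
for _ in range(2000):
    # 1) sources | A, sigma2: Gaussian full conditional, one draw per pixel
    P = np.linalg.inv(A.T @ A / sigma2 + np.eye(m))
    S = P @ A.T @ X / sigma2 + np.linalg.cholesky(P) @ rng.normal(size=(m, T))
    # 2) mixing-matrix rows | S, sigma2 (flat prior, Gaussian likelihood)
    G = np.linalg.inv(S @ S.T)
    for i in range(m):
        A[i] = G @ S @ X[i] + np.linalg.cholesky(sigma2 * G) @ rng.normal(size=m)
    # 3) noise variance | A, S: inverse-gamma full conditional
    resid = X - A @ S
    sigma2 = np.sum(resid ** 2) / (2.0 * rng.gamma(X.size / 2.0))
```

With purely Gaussian sources the mixing matrix is identifiable only up to rotation and scale, a toy instance of the likelihood degeneracies the abstract refers to; in the paper's setting the Markov field priors on the labels presumably play a role in restoring identifiability.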

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 726
Author(s):  
Lamya A. Baharith ◽  
Wedad H. Aljuhani

This article presents a new method for generating distributions. The method combines two techniques, the transformed-transformer and the alpha power transformation approaches, allowing for tremendous flexibility in the resulting distributions. The new approach is applied to introduce the alpha power Weibull-exponential distribution. The density of this distribution can take asymmetric and near-symmetric shapes. Various shapes, such as decreasing, increasing, L-shaped, near-symmetrical, and right-skewed, are observed for the related failure rate function, making it more tractable for many modeling applications. Some significant mathematical features of the suggested distribution are determined. Estimates of the unknown parameters of the proposed distribution are obtained using the maximum likelihood method. Furthermore, numerical studies were carried out in order to evaluate the estimation performance. Three practical datasets are considered to analyze the usefulness and flexibility of the introduced distribution. The proposed alpha power Weibull-exponential distribution can outperform other well-known distributions, showing its great adaptability in the context of real data analysis.
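
A minimal sketch of the two building blocks is shown below. The base Weibull-exponential CDF here follows a common Weibull-G construction, F(x) = 1 - exp(-a(e^{lam x} - 1)^b), and the alpha power transform is taken as (alpha^{F(x)} - 1)/(alpha - 1); the authors' exact parameterization may differ, so the parameters a, b, lam, and alpha are illustrative only.

```python
# Hedged sketch: alpha power transformation applied to a Weibull-exponential base CDF.
import numpy as np

def weibull_exponential_cdf(x, a, b, lam):
    """Base Weibull-exponential CDF (assumed Weibull-G form)."""
    return 1.0 - np.exp(-a * np.expm1(lam * x) ** b)

def alpha_power_cdf(x, a, b, lam, alpha):
    """Alpha power transform of the base CDF: (alpha**F - 1)/(alpha - 1)."""
    F = weibull_exponential_cdf(x, a, b, lam)
    if np.isclose(alpha, 1.0):
        return F                      # alpha = 1 recovers the base distribution
    return (alpha ** F - 1.0) / (alpha - 1.0)

x = np.linspace(0.0, 3.0, 7)
print(alpha_power_cdf(x, a=1.0, b=1.5, lam=0.8, alpha=2.5))
```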


2020 ◽  
Vol 70 (4) ◽  
pp. 953-978
Author(s):  
Mustafa Ç. Korkmaz ◽  
G. G. Hamedani

This paper proposes a new extended Lindley distribution, based on a mixture distribution structure, which has more flexible density and hazard rate shapes than the Lindley and power Lindley distributions and is intended to model real data phenomena with new distributional characteristics. Some of its distributional properties, such as the shapes, moments, quantile function, Bonferroni and Lorenz curves, mean deviations, and order statistics, are obtained. Characterizations based on two truncated moments, on conditional expectation, and in terms of the hazard function are presented. Different estimation procedures are employed to estimate the unknown parameters, and their performances are compared via Monte Carlo simulations. The flexibility and importance of the proposed model are illustrated by two real data sets.
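
The mixture structure that the extension builds on can be made concrete with the base Lindley distribution, which is a two-component mixture of an Exponential(theta) and a Gamma(2, theta) with weights theta/(1 + theta) and 1/(1 + theta). The sketch below samples from that base mixture only; the authors' extended family adds components and parameters not reproduced here.

```python
# Sampling the base Lindley(theta) via its exponential/gamma mixture representation.
import numpy as np

def sample_lindley(theta, size, rng=None):
    rng = np.random.default_rng(rng)
    w = theta / (1.0 + theta)                 # weight of the exponential component
    pick_exp = rng.random(size) < w
    exp_part = rng.exponential(1.0 / theta, size)
    gam_part = rng.gamma(shape=2.0, scale=1.0 / theta, size=size)
    return np.where(pick_exp, exp_part, gam_part)

samples = sample_lindley(theta=1.5, size=10_000, rng=0)
print(samples.mean())   # should be close to (theta + 2)/(theta*(theta + 1)) ≈ 0.933
```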


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 934
Author(s):  
Yuxuan Zhang ◽  
Kaiwei Liu ◽  
Wenhao Gui

For the purpose of improving the statistical efficiency of estimators in life-testing experiments, generalized Type-I hybrid censoring has lately been implemented by guaranteeing that experiments terminate only after a certain number of failures appear. Given the wide application of bathtub-shaped distributions in engineering and the recently introduced generalized Type-I hybrid censoring scheme, and since no existing work combines this type of censoring with a bathtub-shaped distribution, we consider parameter inference under generalized Type-I hybrid censoring. First, estimates of the unknown scale parameter and the reliability function are obtained by the Bayesian method based on LINEX and squared error loss functions with a conjugate gamma prior. Estimates under the E-Bayesian method are then compared for different prior distributions and loss functions. Additionally, Bayesian and E-Bayesian estimation with two unknown parameters is introduced. Furthermore, to verify the robustness of the above estimates, a Monte Carlo simulation study is carried out. Finally, the application of the discussed inference in practice is illustrated by analyzing a real data set.
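
Once posterior draws of the scale parameter are available, the two loss functions mentioned above lead to simple point estimators: the posterior mean under squared error loss and -(1/c) log E[exp(-c*theta)] under LINEX loss. The sketch below applies these formulas to placeholder posterior samples; the gamma draws stand in for the paper's actual conjugate posterior under generalized Type-I hybrid censoring.

```python
# Bayes point estimates from posterior draws under squared error and LINEX loss.
import numpy as np

def bayes_squared_error(post_samples):
    """Bayes estimator under squared error loss = posterior mean."""
    return np.mean(post_samples)

def bayes_linex(post_samples, c):
    """Bayes estimator under LINEX loss: -(1/c) * log E[exp(-c*theta)]."""
    return -np.log(np.mean(np.exp(-c * post_samples))) / c

rng = np.random.default_rng(1)
theta_draws = rng.gamma(shape=5.0, scale=0.2, size=20_000)   # placeholder posterior
print(bayes_squared_error(theta_draws), bayes_linex(theta_draws, c=1.0))
```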


2016 ◽  
Vol 5 (4) ◽  
pp. 1
Author(s):  
Bander Al-Zahrani

This paper describes estimation of the reliability function of the weighted Weibull distribution. The maximum likelihood estimators for the unknown parameters are obtained. Nonparametric methods, such as the empirical method, the kernel density estimator, and a modified shrinkage estimator, are also provided. The Markov chain Monte Carlo method is used to compute the Bayes estimators assuming gamma and Jeffreys priors. The performance of the maximum likelihood, nonparametric, and Bayesian estimators is assessed through a real data set.
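
As a point of reference, the sketch below implements two generic nonparametric reliability estimates of the kind listed above: the empirical survival function and a Gaussian-kernel smoothed version with a Silverman-type bandwidth. The weighted Weibull likelihood, the shrinkage modification, and the MCMC Bayes estimators are not reproduced, and the data are placeholders.

```python
# Empirical and kernel-smoothed estimates of the reliability (survival) function.
import numpy as np
from scipy.stats import norm

def empirical_reliability(data, t):
    data = np.asarray(data)
    return np.mean(data[None, :] > np.asarray(t)[:, None], axis=1)

def kernel_reliability(data, t, bandwidth=None):
    data = np.asarray(data)
    h = bandwidth or 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)  # Silverman-type rule
    return norm.sf((np.asarray(t)[:, None] - data[None, :]) / h).mean(axis=1)

times = np.array([0.5, 1.0, 2.0])
failures = np.random.default_rng(2).weibull(1.8, size=200)   # placeholder failure times
print(empirical_reliability(failures, times), kernel_reliability(failures, times))
```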


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Huibing Hao ◽  
Chun Su

A novel reliability assessment method is proposed for a degrading product with two dependent performance characteristics (PCs), which differs from existing work that utilizes only one-dimensional degradation data. In this model, the dependence between the two PCs is described by the Frank copula function, and each PC is governed by a random-effects nonlinear diffusion process in which the random effects capture unit-to-unit differences. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on LED lamps is given to demonstrate the usefulness and validity of the proposed model and method. Numerical results show that the random-effects nonlinear diffusion model fits the real data well and that ignoring the dependence between PCs may lead to different reliability conclusions.
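
The role of the Frank copula can be illustrated with a small sketch: given the marginal probabilities that each PC stays below its failure threshold at a given time, the copula couples them into a joint reliability. The marginal values and dependence parameter below are placeholders; in the paper they would come from the random-effects diffusion model and the MCMC fit.

```python
# Joint reliability of two PCs via the Frank copula.
import numpy as np

def frank_copula(u, v, theta):
    """Frank copula C(u, v; theta) for theta != 0."""
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

# Placeholder marginal probabilities that each PC stays below its threshold at time t.
r1, r2 = 0.95, 0.90
theta = 4.0                                    # Frank dependence parameter
joint_reliability = frank_copula(r1, r2, theta)
independent = r1 * r2                          # what ignoring the dependence would give
print(joint_reliability, independent)
```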


2018 ◽  
Vol 232 ◽  
pp. 04019
Author(s):  
ShangBin Ning ◽  
FengChao Zuo

As a powerful and explainable blind separation tool, non-negative matrix factorization (NMF) is attracting increasing attention in hyperspectral unmixing (HU). By effectively utilizing the sparsity prior of the data, sparsity-constrained NMF has become a representative method for improving the precision of unmixing. However, optimization based on simple multiplicative update rules makes its unmixing results prone to falling into local minima and lacking robustness. To solve these problems, this paper proposes a new hybrid algorithm for sparsity-constrained NMF by integrating evolutionary computing and multiplicative update rules (MURs). To find a superior solution in each iteration, the proposed algorithm combines the MURs based on an alternating optimization technique, a coefficient-matrix selection strategy with a sparsity measure, and a global optimization technique for the basis matrix via the differential evolution algorithm. The effectiveness of the proposed method is demonstrated via experimental results on real data and comparison with representative algorithms.
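
The local-search core of such a hybrid is the classical pair of multiplicative update rules for Frobenius-loss NMF, sketched below. The sparsity measure, the coefficient-matrix selection strategy, and the differential-evolution step for the basis matrix are not reproduced; the small epsilon only guards against division by zero.

```python
# Basic Lee-Seung multiplicative update rules: V ≈ W @ H with non-negative factors.
import numpy as np

def nmf_mur(V, r, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)          # update coefficients (abundances)
        W *= (V @ H.T) / (W @ H @ H.T + eps)          # update basis (endmembers)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(50, 40)))   # placeholder non-negative data
W, H = nmf_mur(V, r=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))          # relative reconstruction error
```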


Author(s):  
Zhen Chen ◽  
Tangbin Xia ◽  
Ershun Pan

In this paper, a segmental hidden Markov model (SHMM) with continuous observations is developed to tackle the problem of remaining useful life (RUL) estimation. The proposed approach has the advantage of predicting the RUL and detecting the degradation states simultaneously. As the observation space is discretized into N segments corresponding to N hidden states, the explicit relationship between actual degradation paths and the hidden states can be depicted. The continuous observations are fitted by Gaussian, Gamma, and Lognormal distributions, respectively. To select the most suitable distribution, model validation metrics are employed to evaluate the goodness of fit of the available models to the observed data. The unknown parameters of the SHMM can be estimated by the maximum likelihood method with the complete data. Then a recursive method is used for RUL estimation. Finally, an illustrative case is analyzed to demonstrate the accuracy and efficiency of the proposed method. The results also suggest that an SHMM whose observation probability distribution is closer to the real data behavior may be more suitable for RUL prediction.
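
A minimal sketch of the distribution-selection step is given below: the three candidate observation models are fitted by maximum likelihood and compared with a simple goodness-of-fit score (AIC here; the paper's validation metrics may differ). The observations are placeholder data, not the case-study measurements.

```python
# Fit Gaussian, Gamma, and Lognormal candidates to one state's observations and compare AIC.
import numpy as np
from scipy import stats

obs = np.random.default_rng(3).gamma(shape=4.0, scale=0.5, size=300)   # placeholder observations

candidates = {
    "gaussian": stats.norm,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(obs)                       # maximum likelihood fit
    loglik = np.sum(dist.logpdf(obs, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:10s}  AIC = {aic:.1f}")
```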


2018 ◽  
Vol 8 (2) ◽  
pp. 377-406
Author(s):  
Almog Lahav ◽  
Ronen Talmon ◽  
Yuval Kluger

A fundamental question in data analysis, machine learning, and signal processing is how to compare data points. The choice of the distance metric is especially challenging for high-dimensional data sets, where the problem of meaningfulness is more prominent (e.g. the Euclidean distance between images). In this paper, we propose to exploit a property of high-dimensional data that is usually ignored: the structure stemming from the relationships between the coordinates. Specifically, we show that organizing similar coordinates in clusters can be exploited for the construction of the Mahalanobis distance between samples. When the observable samples are generated by a nonlinear transformation of hidden variables, the Mahalanobis distance allows the recovery of the Euclidean distances in the hidden space. We illustrate the advantage of our approach on a synthetic example, where the discovery of clusters of correlated coordinates improves the estimation of the principal directions of the samples. Our method was applied to real gene expression data for lung adenocarcinomas (lung cancer). Using the proposed metric, we found a partition of subjects into risk groups with good separation between their Kaplan–Meier survival plots.
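
For reference, the sketch below computes a plain pairwise Mahalanobis distance between samples using a pseudo-inverse sample covariance; in the paper the covariance-like matrix is instead built from the discovered clusters of correlated coordinates, a construction not reproduced here.

```python
# Pairwise Mahalanobis distances between samples (rows of X).
import numpy as np

def mahalanobis_pairwise(X):
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for numerical stability
    diffs = X[:, None, :] - X[None, :, :]
    q = np.einsum("ijk,kl,ijl->ij", diffs, cov_inv, diffs)
    return np.sqrt(np.maximum(q, 0.0))            # clip tiny negatives from round-off

X = np.random.default_rng(4).normal(size=(30, 5))  # placeholder samples
D = mahalanobis_pairwise(X)
print(D.shape, D[0, 1])
```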

