Reliability Analysis for Masked Data under Type-II Generalized Progressively Hybrid Censored Scheme

2014 ◽  
Vol 687-691 ◽  
pp. 1198-1201
Author(s):  
Bin Liu ◽  
Yi Min Shi ◽  
Jing Cai ◽  
Mo Chen

This paper presents the Type-II generalized progressively hybrid censoring scheme with masked data. Based on masked system lifetime data, the Maximum Likelihood Estimates (MLEs) of the component distribution parameters are obtained in the Weibull case using the expectation maximization algorithm together with the quasi-Newton method. Finally, a Monte Carlo simulation study illustrates the performance of the method.
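In the Weibull case, the complete-data M-step reduces to a one-dimensional Newton iteration for the shape parameter, with the scale then available in closed form. The sketch below shows only this profile-likelihood step for fully observed lifetimes; the masked-data E-step and censoring scheme of the paper are omitted, and all names and data are illustrative:

```python
import math

def weibull_mle(times, k0=1.0, tol=1e-8, max_iter=100):
    """Complete-data Weibull MLE: Newton iteration on the profile
    score for the shape k, then the scale lam in closed form."""
    n = len(times)
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / n
    k = k0
    for _ in range(max_iter):
        tk = [t ** k for t in times]
        s0 = sum(tk)
        s1 = sum(x * l for x, l in zip(tk, logs))
        s2 = sum(x * l * l for x, l in zip(tk, logs))
        g = s1 / s0 - 1.0 / k - mean_log            # profile score
        gp = (s2 * s0 - s1 * s1) / (s0 * s0) + 1.0 / (k * k)
        step = g / gp                               # Newton step
        k -= step
        if abs(step) < tol:
            break
    lam = (sum(t ** k for t in times) / n) ** (1.0 / k)
    return k, lam

times = [0.5, 1.2, 2.3, 0.9, 1.7, 2.1, 0.4, 1.1]   # toy lifetimes
k_hat, lam_hat = weibull_mle(times)
```

In the paper's setting this solver would sit inside the M-step, with the E-step supplying expected contributions from the masked failure causes.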

ANRI ◽  
2021 ◽  
Vol 0 (2) ◽  
pp. 54-64
Author(s):  
Aliaksei Zaharadniuk ◽  
Dmitri Abalonski ◽  
Raman Lukashevich

The paper considers an algorithm for correcting the instrumental spectrum of a gamma radiometer with a NaI(Tl) detector. The algorithm is based on the ML-EM method (maximum likelihood estimation via expectation maximization) and uses a detector response matrix obtained by Monte Carlo simulation. The main advantage of the algorithm is a rescaling procedure that significantly reduces the spectrum processing time.
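The ML-EM unfolding the paper builds on is the classical multiplicative update x ← x · Rᵀ(y / Rx) / Rᵀ1, where R is the response matrix and y the measured spectrum. A minimal NumPy sketch on a toy 3-channel response matrix (the matrix and counts are illustrative, not the paper's detector model, and the rescaling acceleration is not shown):

```python
import numpy as np

def mlem_unfold(measured, R, n_iter=2000):
    """Multiplicative ML-EM update: x <- x * R^T(y / Rx) / R^T 1."""
    x = np.ones(R.shape[1])
    sens = R.sum(axis=0)                  # sensitivity term R^T 1
    for _ in range(n_iter):
        x = x * (R.T @ (measured / (R @ x))) / sens
    return x

# Toy 3x3 response matrix (rows: measured channels, cols: true bins)
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.2],
              [0.0, 0.2, 0.8]])
true_spec = np.array([100.0, 50.0, 30.0])
measured = R @ true_spec                  # noiseless forward model
est = mlem_unfold(measured, R)
```

The update preserves non-negativity and total counts, which is why it is the standard choice for spectrum unfolding; its per-iteration cost is dominated by the two matrix-vector products.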


2014 ◽  
Vol 46 (4) ◽  
pp. 731-741 ◽  
Author(s):  
David Doyle ◽  
Robert Elgie

This article aims to maximize the reliability of presidential power scores for a larger number of countries and time periods than currently exists for any single measure, and in a way that is replicable and easy to update. It begins by identifying all of the studies that have estimated the effect of a presidential power variable, clarifying what scholars have attempted to capture when they have operationalized the concept of presidential power. It then identifies all the measures of presidential power that have been proposed over the years, noting the problems associated with each. To generate the new set of presidential power scores, the study draws upon the comparative and local knowledge embedded in existing measures of presidential power. Employing principal component analysis, together with the expectation maximization algorithm and maximum likelihood estimation, a set of presidential power scores is generated for a larger set of countries and country time periods than currently exists, reporting 95 per cent confidence intervals and standard errors for the scores. Finally, the implications of the new set of scores for future studies of presidential power are discussed.
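Combining principal component analysis with expectation maximization is a common way to score items when some measures are missing for some cases: missing entries are iteratively refilled from a low-rank reconstruction until the fit stabilizes. A minimal NumPy illustration of that general scheme, not the authors' exact procedure, with synthetic data:

```python
import numpy as np

def em_pca_impute(X, n_comp=1, n_iter=200):
    """EM-style imputation: refill missing entries (NaN) with their
    rank-n_comp PCA reconstruction until the fill stabilizes."""
    X = X.copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = col_means[np.where(mask)[1]]      # start from mean fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp] + mu
        X[mask] = recon[mask]                   # refill from the fit
    return X

# Synthetic rank-1 "score" matrix with one observation masked out
X = np.outer([1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 3.0])
X[2, 1] = np.nan                                # true value is 3.0
X_hat = em_pca_impute(X)
```

The leading component of the completed matrix would then serve as the common score; uncertainty in the fill is what the confidence intervals reported by the article are meant to capture.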


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Shanghai Jiang ◽  
Peng He ◽  
Luzhen Deng ◽  
Mianyi Chen ◽  
Biao Wei

X-ray fluorescence computed tomography (XFCT) based on a sheet beam can greatly reduce the time needed to acquire a full set of projections with a synchrotron source; however, synchrotrons are clearly impractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate the imaging process in Geant4. Phantom A contains several GNP-loaded regions of the same size (10 mm in height and diameter) but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized representation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM), with and without correction, respectively. The contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images.
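The CNR used to score the reconstructions is computed from a region of interest and a background region; a minimal sketch with illustrative pixel values (not the paper's phantom data):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(bg)| / std(bg)."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

roi = np.array([10.0, 11.0, 9.0, 10.0])   # e.g. a GNP-loaded region
bg = np.array([2.0, 3.0, 2.0, 3.0])       # e.g. plain PMMA background
cnr_val = cnr(roi, bg)
```

Because CNR normalizes contrast by background noise, it allows a fair comparison between FBP and MLEM reconstructions, whose noise textures differ.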


2018 ◽  
Vol 12 (3) ◽  
pp. 253-272 ◽  
Author(s):  
Chanseok Park

The expectation–maximization algorithm is a powerful computational technique for finding the maximum likelihood estimates for parametric models when the data are not fully observed. The expectation–maximization algorithm is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with the implementation of the expectation–maximization algorithm is that each E-step requires the integration of the log-likelihood function in closed form. The explicit integration can be avoided by using what is known as the Monte Carlo expectation–maximization algorithm, which uses a random sample to estimate the integral at each E-step. The problem with the Monte Carlo expectation–maximization algorithm is that it often converges to the integral quite slowly, and the convergence behavior can also be unstable, which increases the computational burden. In this paper, we propose what we refer to as the quantile variant of the expectation–maximization algorithm. We prove that the proposed method has an accuracy of [Formula: see text], while the Monte Carlo expectation–maximization method has an accuracy of [Formula: see text]. Thus, the proposed method possesses faster and more stable convergence properties than the Monte Carlo expectation–maximization algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
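The Monte Carlo E-step replaces an intractable expectation with a sample average over random draws; a minimal illustration of such a sample-average E-step on a toy integrand (not the paper's model):

```python
import random

def mc_estep_expectation(f, m, seed=0):
    """Sample-average estimate of E[f(Z)], Z ~ N(0, 1): the kind of
    integral an MCEM E-step approximates with m random draws."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(0.0, 1.0)) for _ in range(m)) / m

# Second moment of a standard normal (exact value: 1)
est_second_moment = mc_estep_expectation(lambda z: z * z, 100_000)
```

The standard error of such a sample average shrinks only like m^(-1/2), which is the slow, sometimes unstable convergence that the proposed quantile-based variant is designed to improve on.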

