Sequential Test for a Mixture of Finite Exponential Distribution

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
A.S. Al-Moisheer

Testing the number of components in a finite mixture is a challenging problem. In this paper, finite exponential mixtures are used to determine the number of components in a finite mixture. A sequential testing procedure is adopted based on the likelihood ratio test (LRT) statistic. The distribution of the test statistic under the null hypothesis is obtained using a resampling technique based on B bootstrap samples, and the quantiles of this distribution are evaluated from those samples. The performance of the test is examined through its empirical power and through application to two real datasets. The proposed procedure is used not only for testing the number of components but also for estimating the optimal number of components in a finite exponential mixture distribution. The innovation of this paper is the sequential test, which addresses the more general hypothesis of a finite exponential mixture of k components versus a mixture of k + 1 components; the special case of testing one component versus two components is the one commonly used in the literature.
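
As a rough illustration of the procedure described above, the Python sketch below fits k- and (k + 1)-component exponential mixtures by EM and approximates the null distribution of the LRT statistic from B parametric bootstrap samples. The helper names (em_exp_mixture, bootstrap_lrt) and the EM details are illustrative assumptions, not the authors' implementation.

import numpy as np

def em_exp_mixture(x, k, n_iter=200, seed=0):
    # Fit a k-component exponential mixture by EM; return (weights, rates, loglik).
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.full(k, 1.0 / k)
    lam = (1.0 / x.mean()) * rng.uniform(0.5, 1.5, size=k)   # spread the starting rates
    for _ in range(n_iter):
        dens = w * lam * np.exp(-np.outer(x, lam))           # n x k component densities
        resp = dens / dens.sum(axis=1, keepdims=True)        # E-step: responsibilities
        nk = resp.sum(axis=0)                                # M-step: weights and rates
        w = nk / n
        lam = nk / (resp * x[:, None]).sum(axis=0)
    loglik = np.log((w * lam * np.exp(-np.outer(x, lam))).sum(axis=1)).sum()
    return w, lam, loglik

def bootstrap_lrt(x, k, B=200, seed=0):
    # Test H0: k components vs H1: k + 1 components; calibrate the LRT by parametric bootstrap.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    w0, lam0, ll0 = em_exp_mixture(x, k)
    ll1 = em_exp_mixture(x, k + 1)[2]
    lrt_obs = 2.0 * (ll1 - ll0)
    lrt_boot = np.empty(B)
    for b in range(B):
        comp = rng.choice(k, size=len(x), p=w0)              # simulate from the fitted null model
        xb = rng.exponential(1.0 / lam0[comp])
        lrt_boot[b] = 2.0 * (em_exp_mixture(xb, k + 1, seed=b)[2] - em_exp_mixture(xb, k, seed=b)[2])
    return lrt_obs, np.mean(lrt_boot >= lrt_obs)             # statistic and bootstrap p-value

The sequential procedure would start at k = 1 and repeat the test for k versus k + 1 until the bootstrap p-value exceeds the chosen level, taking the last retained k as the estimated number of components.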

Author(s):  
Luboš Střelec ◽  
Milan Stehlík

The aim of this paper is to present and discuss the power of the exact likelihood ratio homogeneity testing procedure for the number of components k in an exponential mixture. First we present the likelihood ratio test for homogeneity (ELR), the likelihood ratio test for homogeneity against a two-component exponential mixture (ELR2), and finally the likelihood ratio test for homogeneity against a three-component exponential mixture (ELR3). A comparative power study of these homogeneity tests against a three-component subpopulation alternative is provided. We concentrate on various setups of the scales and weights, which allows us to draw conclusions for generic settings. A natural property is observed, namely that the power of the exact likelihood ratio ELR, ELR2 and ELR3 tests increases with the scale parameters considered in the alternative. The differences in power between the ELR, ELR2 and ELR3 tests are small; therefore the computationally simpler ELR2 test is recommended for broad usage, rather than the more expensive ELR3 test, when unobserved heterogeneity is modelled. Nevertheless, caution should be taken before automatically applying ELR3 in more informative settings, since applying automatic methods in the hope that the data will reveal their true structure is deceptive. Application of the obtained results in reliability, finance or social sciences is straightforward.
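
For concreteness, a minimal self-contained sketch of an ELR2-style power simulation against a three-component alternative is given below; the fixed critical value is only a placeholder, since the exact test would calibrate it from the simulated null distribution, and the EM details are assumptions rather than the authors' implementation.

import numpy as np

def em_exp2(x, n_iter=200):
    # Two-component exponential mixture fit by EM; returns the maximized log-likelihood.
    x = np.asarray(x, dtype=float)
    w = np.array([0.5, 0.5])
    lam = np.array([0.5, 2.0]) / x.mean()
    for _ in range(n_iter):
        dens = w * lam * np.exp(-np.outer(x, lam))
        resp = dens / dens.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)
        w, lam = nk / len(x), nk / (resp * x[:, None]).sum(axis=0)
    return np.log((w * lam * np.exp(-np.outer(x, lam))).sum(axis=1)).sum()

def elr2(x):
    # Homogeneity LRT: single exponential (H0) versus a two-component mixture (H1).
    x = np.asarray(x, dtype=float)
    lam0 = 1.0 / x.mean()
    ll0 = len(x) * np.log(lam0) - lam0 * x.sum()
    return 2.0 * (em_exp2(x) - ll0)

def power_elr2(weights, rates, n=50, reps=1000, crit=3.84, seed=1):
    # Empirical power against a three-component subpopulation alternative.
    rng = np.random.default_rng(seed)
    weights, rates = np.asarray(weights), np.asarray(rates)
    rejections = 0
    for _ in range(reps):
        comp = rng.choice(len(weights), size=n, p=weights)
        rejections += elr2(rng.exponential(1.0 / rates[comp])) > crit
    return rejections / reps

# e.g. power_elr2([0.4, 0.3, 0.3], [1.0, 3.0, 6.0])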


2021 ◽  
Vol 9 (1) ◽  
pp. 157-175
Author(s):  
Walaa EL-Sharkawy ◽  
Moshira A. Ismail

This paper deals with testing the number of components in a Birnbaum-Saunders mixture model under randomly right-censored data. We focus on two methods, one based on the modified likelihood ratio test and the other on a shortcut bootstrap test. Based on extensive Monte Carlo simulation studies, we evaluate and compare the performance of the proposed tests through their size and power. A power analysis provides guidance for researchers on the factors that affect the power of the proposed tests in detecting the correct number of components in a Birnbaum-Saunders mixture model. Finally, an example of aircraft windshield data is used to illustrate the testing procedure.
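
A minimal sketch of the right-censored mixture log-likelihood that such tests would compare is given below, using SciPy's fatiguelife distribution as the Birnbaum-Saunders law; the parameter ordering, starting values, and optimizer are illustrative assumptions only.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import fatiguelife   # Birnbaum-Saunders ("fatigue life") distribution in SciPy

def neg_loglik(params, t, delta):
    # Negative log-likelihood of a two-component Birnbaum-Saunders mixture under random
    # right censoring: delta = 1 for an observed failure, 0 for a censored time.
    p, a1, b1, a2, b2 = params            # mixing weight, shape and scale of each component
    pdf = p * fatiguelife.pdf(t, a1, scale=b1) + (1 - p) * fatiguelife.pdf(t, a2, scale=b2)
    sf = p * fatiguelife.sf(t, a1, scale=b1) + (1 - p) * fatiguelife.sf(t, a2, scale=b2)
    return -np.sum(delta * np.log(pdf) + (1 - delta) * np.log(sf))

# e.g. fit = minimize(neg_loglik, x0=[0.5, 0.5, 1.0, 0.8, 2.0], args=(t, delta),
#                     method="Nelder-Mead")

Both tests in the abstract compare maximized log-likelihoods of this kind for nested numbers of components; the modification and the bootstrap shortcut differ in how the null calibration is obtained.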


2008 ◽  
Vol 06 (02) ◽  
pp. 261-282 ◽  
Author(s):  
Ao Yuan ◽  
Wenqing He

Clustering is a major tool for microarray gene expression data analysis. Existing clustering methods fall mainly into two categories: parametric and nonparametric. Parametric methods generally assume a mixture of parametric subdistributions. When the mixture distribution approximately fits the true data-generating mechanism, parametric methods perform well, but not when there is a nonnegligible deviation between them. Nonparametric methods, which usually make no distributional assumptions, are robust but pay a price in efficiency. In an attempt to use the known mixture form to increase efficiency, and to avoid assumptions about the unknown subdistributions to enhance robustness, we propose a semiparametric method for clustering. The proposed approach has the form of a parametric mixture, with no assumptions on the subdistributions; these are estimated nonparametrically, with constraints imposed only on the modes. An expectation-maximization (EM) algorithm along with a classification step is invoked to cluster the data, and a modified Bayesian information criterion (BIC) is employed to guide the determination of the optimal number of clusters. Simulation studies are conducted to assess the performance and robustness of the proposed method. The results show that the proposed method yields a reasonable partition of the data. As an illustration, the proposed method is applied to a real microarray data set to cluster genes.
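
As a stand-in illustration of EM-based clustering with information-criterion selection of the number of clusters, the sketch below uses scikit-learn's parametric Gaussian mixture and the ordinary BIC; the paper's method instead estimates the subdistributions nonparametrically under mode constraints and uses a modified BIC.

import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k_by_bic(X, k_max=10, seed=0):
    # X: (n_samples, n_features) expression matrix. Fit EM-based mixtures for
    # k = 1..k_max and keep the k that minimizes the BIC.
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(X)
            for k in range(1, k_max + 1)]
    bics = [f.bic(X) for f in fits]
    best = int(np.argmin(bics))
    labels = fits[best].predict(X)   # classification step: hard-assign each gene to a cluster
    return best + 1, labels, bics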


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 936
Author(s):  
Dan Wang

In this paper, a ratio test based on bootstrap approximation is proposed to detect persistence change in heavy-tailed observations. The paper focuses on the symmetric testing problems of changes from I(1) to I(0) and from I(0) to I(1). On the basis of residual CUSUMs, the test statistic is constructed in ratio form. I derive the null distribution of the test statistic and discuss consistency under the alternative hypothesis. However, the null distribution contains an unknown tail index; to address this, I present a bootstrap approximation method for determining the rejection region of the test. Simulation studies on artificial data are conducted to assess the finite-sample performance and show that the proposed method outperforms the kernel method in all listed cases. An analysis of real data also demonstrates the excellent performance of the method.
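
The sketch below shows one generic way a ratio-type statistic can be built from CUSUMs of demeaned data, with an i.i.d. bootstrap used to approximate a critical value; the residuals, normalization, and resampling scheme used in the paper for heavy-tailed data may well differ.

import numpy as np

def ratio_statistic(y, trim=0.2):
    # A generic ratio-type persistence-change statistic: for each candidate break point,
    # compare squared backward CUSUMs after the break with squared forward CUSUMs before it.
    y = np.asarray(y, dtype=float)
    n = len(y)
    stats = []
    for k in range(int(trim * n), int((1 - trim) * n)):
        y1, y2 = y[:k], y[k:]
        c1 = np.cumsum(y1 - y1.mean())
        c2 = np.cumsum((y2 - y2.mean())[::-1])
        stats.append((np.sum(c2 ** 2) / len(y2) ** 2) / (np.sum(c1 ** 2) / len(y1) ** 2))
    return max(stats)

def bootstrap_critical_value(y, alpha=0.05, B=499, seed=0):
    # Approximate the rejection region by resampling the demeaned observations.
    rng = np.random.default_rng(seed)
    e = np.asarray(y, dtype=float) - np.mean(y)
    boot = [ratio_statistic(rng.choice(e, size=len(e), replace=True)) for _ in range(B)]
    return np.quantile(boot, 1 - alpha)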


1993 ◽  
Vol 9 (3) ◽  
pp. 431-450 ◽  
Author(s):  
Noel Cressie ◽  
Peter B. Morgan

Under more general assumptions than those usually made in the sequential analysis literature, a variable-sample-size sequential probability ratio test (VPRT) of two simple hypotheses is found that maximizes the expected net gain over all sequential decision procedures. In contrast, Wald and Wolfowitz [25] developed the sequential probability ratio test (SPRT) to minimize expected sample size, but their assumptions on the parameters of the decision problem were restrictive. In this article we show that the expected-net-gain-maximizing VPRT also minimizes the expected (with respect to both data and prior) total sampling cost and that, under slightly more general conditions than those imposed by Wald and Wolfowitz, it reduces to the one-observation-at-a-time SPRT. The ways in which the size and power of the VPRT depend upon the parameters of the decision problem are also examined.
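
For background, the classical one-observation-at-a-time SPRT of Wald, to which the VPRT reduces under the conditions described above, can be sketched as follows; the VPRT's additional stage-wise choice of sample size is not reproduced here.

import numpy as np

def sprt(sample_stream, logpdf0, logpdf1, alpha=0.05, beta=0.05):
    # Wald's sequential probability ratio test of two simple hypotheses.
    # Returns the accepted hypothesis and the number of observations used.
    a = np.log(beta / (1 - alpha))     # lower boundary: accept H0
    b = np.log((1 - beta) / alpha)     # upper boundary: accept H1
    llr, n = 0.0, 0
    for x in sample_stream:
        n += 1
        llr += logpdf1(x) - logpdf0(x)
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return ("H1" if llr >= 0 else "H0"), n   # stream exhausted without crossing a boundary

# e.g. exponential rates 1 vs 2:
# decision, n_used = sprt(data, lambda x: -x, lambda x: np.log(2.0) - 2.0 * x)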

