modified likelihood
Recently Published Documents


TOTAL DOCUMENTS

49
(FIVE YEARS 5)

H-INDEX

9
(FIVE YEARS 0)

2021 ◽  
Vol 50 (1) ◽  
pp. 88-104
Author(s):  
Tamae Kawasaki ◽  
Takashi Seo

This article deals with the problem of testing for two normal sub-mean vectors when the data set has two-step monotone missing observations. Under the assumption that the population covariance matrices are equal, we obtain the likelihood ratio test (LRT) statistic. Furthermore, an asymptotic expansion for the null distribution of the LRT statistic under two-step monotone missingness is derived by the perturbation method. Using this result, we propose two improved statistics with good chi-squared approximations. One is the modified LRT statistic obtained by Bartlett correction, and the other is the modified LRT statistic whose modification coefficient is obtained by linear interpolation. The accuracy of the approximations is investigated by Monte Carlo simulation. The proposed methods are illustrated with an example.
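The Bartlett correction mentioned in the abstract rescales an LRT statistic so its null mean better matches the chi-squared reference. A minimal sketch of the idea, assuming an already-computed statistic and scaling constant (the numeric values below are purely illustrative, not from the paper):

```python
from scipy import stats

def bartlett_corrected_pvalue(lrt_stat, df, correction):
    """Return (raw, Bartlett-corrected) chi-squared p-values.

    The Bartlett correction divides the LRT statistic by a scaling
    constant (1 + c/n in the classical form); `correction` here is
    taken as that already-computed constant.
    """
    corrected = lrt_stat / correction
    return stats.chi2.sf(lrt_stat, df), stats.chi2.sf(corrected, df)

# Illustrative numbers: a statistic of 8.2 on 3 df with scaling 1.1.
raw_p, corr_p = bartlett_corrected_pvalue(lrt_stat=8.2, df=3, correction=1.1)
```

With a scaling constant above one the corrected statistic is smaller, so the corrected p-value is larger, which is the conservative adjustment direction typically seen in small samples.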


2021 ◽  
Vol 9 (1) ◽  
pp. 157-175
Author(s):  
Walaa EL-Sharkawy ◽  
Moshira A. Ismail

This paper deals with testing the number of components in a Birnbaum-Saunders mixture model under randomly right-censored data. We focus on two methods, one based on the modified likelihood ratio test and the other on a bootstrap shortcut test. Through extensive Monte Carlo simulation studies, we evaluate and compare the performance of the proposed tests in terms of size and power. A power analysis provides guidance for researchers on the factors that affect the power of the proposed tests in detecting the correct number of components in a Birnbaum-Saunders mixture model. Finally, an example of aircraft windshield data is used to illustrate the testing procedure.
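The bootstrap test in this abstract works by simulating from the fitted null model to build a reference distribution for the LRT statistic. A minimal sketch of that loop, substituting a Gaussian mixture (via scikit-learn) for the paper's Birnbaum-Saunders components and ignoring censoring, so everything here is a stand-in for the actual model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def lrt_k1_vs_k2(x):
    """Observed LRT statistic for H0: one component vs H1: two."""
    x = x.reshape(-1, 1)
    g1 = GaussianMixture(1, random_state=0).fit(x)
    g2 = GaussianMixture(2, n_init=3, random_state=0).fit(x)
    return 2 * len(x) * (g2.score(x) - g1.score(x))  # score() is mean log-lik

# Illustrative data drawn from a single component (so H0 is true).
x = rng.normal(0.0, 1.0, size=200)
t_obs = lrt_k1_vs_k2(x)

# Parametric bootstrap: refit under H0, simulate, recompute the statistic.
g1 = GaussianMixture(1, random_state=0).fit(x.reshape(-1, 1))
boot = []
for b in range(30):
    xb, _ = g1.sample(len(x))
    boot.append(lrt_k1_vs_k2(xb.ravel()))
p_boot = float(np.mean(np.array(boot) >= t_obs))
```

The same skeleton applies with Birnbaum-Saunders component fits and censored-likelihood evaluation swapped in; only the fitting and sampling steps change.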


2019 ◽  
Vol 47 (9) ◽  
pp. 1562-1586
Author(s):  
Ana C. Guedes ◽  
Francisco Cribari-Neto ◽  
Patrícia L. Espinheira

2018 ◽  
Vol 35 (15) ◽  
pp. 2545-2554 ◽  
Author(s):  
Joseph Mingrone ◽  
Edward Susko ◽  
Joseph P Bielawski

Abstract
Motivation: Likelihood ratio tests are commonly used to test for positive selection acting on proteins. They are usually applied with thresholds for declaring a protein under positive selection determined from a chi-square or mixture of chi-square distributions. Although it is known that such distributions are not strictly justified due to the statistical irregularity of the problem, the hope has been that the resulting tests are conservative and do not lose much power in comparison with the same test using the unknown, correct threshold. We show that commonly used thresholds need not yield conservative tests, but instead give larger than expected Type I error rates. Statistical regularity can be restored by using a modified likelihood ratio test.
Results: We give theoretical results to prove that, if the number of sites is not too small, the modified likelihood ratio test gives approximately correct Type I error probabilities regardless of the parameter settings of the underlying null hypothesis. Simulations show that modification gives Type I error rates closer to those stated without a loss of power. The simulations also show that parameter estimation for mixture models of codon evolution can be challenging in certain data-generation settings, with very different mixing distributions giving nearly identical site-pattern distributions unless the number of taxa and tree length are large. Because mixture models are widely used for a variety of problems in molecular evolution, the challenges and general approaches to solving them presented here are applicable in a broader context.
Availability and implementation: https://github.com/jehops/codeml_modl
Supplementary information: Supplementary data are available at Bioinformatics online.
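The irregularity discussed here arises because the null hypothesis places parameters on the boundary of the parameter space, so the LRT null distribution is often approximated by a mixture of chi-squares rather than a single chi-square. A small sketch of how the threshold changes under the textbook 50:50 mixture of a point mass at zero and chi-square(1) (this illustrates the standard boundary adjustment generally, not the codeml_modl modification itself):

```python
from scipy import stats

alpha = 0.05

# Naive threshold: treat the LRT as chi-square with 1 df.
naive = stats.chi2.ppf(1 - alpha, df=1)

# Boundary-adjusted threshold: under a 50:50 mixture of a point mass
# at 0 and chi2(1), the upper-alpha quantile is the chi2(1) quantile
# at level 1 - 2*alpha.
mixture = stats.chi2.ppf(1 - 2 * alpha, df=1)
```

The mixture threshold is smaller than the naive one, so using the naive chi-square cutoff is conservative in this textbook case; the abstract's point is that for the positive-selection problem even such mixture-based thresholds can fail to control the Type I error.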


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 919
Author(s):  
María Martel-Escobar ◽  
Francisco-José Vázquez-Polo ◽  
Agustín Hernández-Bastida 

Problems in statistical auditing are usually one-sided. In fact, the main interest for auditors is to determine the quantiles of the total amount of error, and then to compare these quantiles with a given materiality fixed by the auditor, so that the accounting statement can be accepted or rejected. Dollar unit sampling (DUS) is a useful procedure to collect sample information, whereby items are chosen with a probability proportional to book amounts and in which the relevant error amount distribution is the distribution of the taints weighted by the book value. The likelihood induced by DUS refers to a 201-variate parameter p, but the prior information is on a subparameter θ, a linear function of p representing the total amount of error. This means that partial prior information must be processed. In this paper, two main proposals are made: (1) to modify the likelihood, to make it compatible with prior information and thus obtain a Bayesian analysis for the hypotheses to be tested; (2) to use a maximum entropy prior to incorporate limited auditor information. To achieve these goals, we obtain a modified likelihood function inspired by the induced likelihood described by Zehna (1966) and then adapt Bayes' theorem to this likelihood in order to derive a posterior distribution for θ. This approach shows that the DUS methodology can be justified as a natural method of processing partial prior information in auditing and that a Bayesian analysis can be performed even when prior information is only available for a subparameter of the model. Finally, some numerical examples are presented.
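The DUS selection step described above (items drawn with probability proportional to book amounts) is straightforward to sketch. A minimal illustration with a hypothetical ledger, which shows only the sampling mechanics, not the paper's Bayesian analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical book values, in dollars (illustrative only).
book = np.array([120.0, 45.0, 980.0, 310.0, 75.0, 2200.0, 18.0, 640.0])

# Dollar unit sampling: conceptually, individual "dollar units" are
# drawn with equal probability, and the item containing each sampled
# unit is recorded; this makes item selection probabilities
# proportional to book amounts.
n = 4
probs = book / book.sum()
idx = rng.choice(len(book), size=n, replace=True, p=probs)
sampled = book[idx]
```

Each sampled item would then be audited, and its taint (error divided by book value) feeds into the error-amount distribution that the paper's modified likelihood is built on.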


2018 ◽  
Vol 7 (4.10) ◽  
pp. 536
Author(s):  
C. Narayana ◽  
B. Mahaboob ◽  
B. Venkateswarlu ◽  
J. Ravi sankar ◽  
P. Balasiddamuni

In this research paper, new inferential tools, namely modified likelihood ratio (LR), Wald, and Lagrange multiplier test statistics, are proposed for testing a general linear hypothesis in the stochastic linear regression model. In this process, internally studentized residuals are used to analyse the inferential aspects of stochastic linear regression models. Miguel Fonseca et al. [1] developed statistical inference in linear models, dealing with the theory of maximum likelihood estimates and likelihood ratio tests under linear inequality restrictions on the regression coefficients. Tim Coelli [2] used Monte Carlo experimentation to investigate the finite-sample properties of maximum likelihood (ML) and corrected ordinary least squares (COLS) estimators of the half-normal stochastic frontier production function. In 2011, P. Balasiddamuni et al. [3] developed advanced tools for mathematical and stochastic modelling.
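The classical LR, Wald, and Lagrange multiplier statistics for a linear hypothesis in regression can all be written in terms of restricted and unrestricted sums of squared residuals. A minimal sketch with simulated data, showing only the standard unmodified trio (not the paper's studentized-residual modifications); the data and the tested coefficient are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated model y = 1 + 0.5*x1 + 0*x2 + e; we test H0: beta_2 = 0.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared OLS residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ssr_u = ssr(X, y)            # unrestricted fit
ssr_r = ssr(X[:, :2], y)     # restricted fit: drop the tested column

# Classical trio under normal errors (one restriction, so each is
# compared with a chi-square(1) reference asymptotically):
lr   = n * np.log(ssr_r / ssr_u)
wald = n * (ssr_r - ssr_u) / ssr_u
lm   = n * (ssr_r - ssr_u) / ssr_r
```

A well-known finite-sample ordering, Wald ≥ LR ≥ LM, follows algebraically from x − 1 ≥ ln x ≥ 1 − 1/x with x = ssr_r/ssr_u, and is a useful sanity check on any implementation.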

