likelihood ratio statistic
Recently Published Documents

Total documents: 140 (last five years: 21)
H-index: 15 (last five years: 2)

2021 · Vol 10 (1)
Author(s): Michael P. Ward, Yuanhua Liu, Shuang Xiao, Zhijie Zhang

Abstract

Background: Since the emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the coronavirus disease 2019 (COVID-19) pandemic, a growing body of evidence has suggested that weather factors, particularly temperature and humidity, influence transmission. This relationship might differ for the recently emerged B.1.617.2 (delta) variant of SARS-CoV-2. Here we use data from an outbreak in Sydney, Australia that commenced in winter, together with time-series analysis, to investigate the association between reported cases and temperature and relative humidity.

Methods: Between 16 June and 10 September 2021, the peak of the outbreak, 31,662 locally acquired cases were reported in five local health districts of Sydney, Australia. The associations between daily 9:00 am and 3:00 pm temperature (°C), relative humidity (%) and their difference, and a time series of reported daily cases were assessed using univariable and multivariable generalized additive models and a 14-day exponential moving average. The Akaike information criterion (AIC) and the likelihood ratio statistic were used to compare models and determine the best fit. A sensitivity analysis was performed by modifying the exponential moving average.

Results: During the 87-day time series, relative humidity ranged widely (< 30–98%) and temperatures were mild (approximately 11–17 °C). The best-fitting generalized additive model (AIC: 1,119.64) included the 14-day exponential moving averages of 9:00 am temperature (P < 0.001) and 9:00 am relative humidity (P < 0.001), and the interaction between these two weather variables (P < 0.001). Humidity was negatively associated with cases regardless of whether temperature was high or low. The effect of lower relative humidity on increased case numbers was more pronounced below a relative humidity of about 70%; below this threshold, not only were the effects of humidity pronounced, but the relationship between temperature and delta-variant cases also became apparent.

Conclusions: We suggest that the control of COVID-19 outbreaks, specifically those due to the delta variant, is particularly challenging during periods of the year with lower relative humidity and warmer temperatures. In addition to vaccination, stronger implementation of other interventions, such as mask-wearing and social distancing, might need to be considered during these higher-risk periods.


2021 · pp. 1-9
Author(s): Qinqin Jin, Gang Shi

Many complex diseases are caused by single nucleotide polymorphisms (SNPs), environmental factors, and interactions between SNPs and the environment. Joint tests of SNP and SNP-environment interaction effects (JMA) and meta-regression (MR) are commonly used to evaluate these SNP-environment interactions. However, these two methods do not account for genetic heterogeneity. We previously presented a random-effect MR, which provided higher power than the MR in datasets with high heterogeneity; however, that method requires group-level data, which are sometimes unavailable. We therefore designed this study to introduce random effects for the SNP and SNP-environment interaction into the JMA, extending it to a random-effect model. The likelihood ratio statistic is applied to test both the JMA and the new method proposed in this paper. We evaluated the null distributions of these tests and the power of the method. The method was verified by simulation and shown to provide power similar to that of the random-effect meta-regression method (RMR), while requiring only study-level data, which relaxes the data requirement of the RMR. Our study suggests that this method is better suited to detecting associations between SNPs and diseases when group-level data are unavailable.


2021
Author(s): Yayi Yan, Tingting Cheng

Abstract This paper introduces a factor-augmented forecasting regression model in the presence of threshold effects. We consider least-squares estimation of the regression parameters and establish asymptotic theory for the estimators of both the slope coefficients and the threshold parameter. Prediction intervals are also constructed for factor-augmented forecasts. Moreover, we develop a likelihood ratio statistic for testing the threshold parameter and a sup-Wald statistic for testing the presence of threshold effects. Simulation results show that the proposed estimation method and testing procedures work well in finite samples. Finally, we demonstrate the usefulness of the proposed model through an application to forecasting stock market returns.
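A standard way to estimate a threshold parameter by least squares is to profile the sum of squared residuals over a grid of candidate thresholds. A minimal sketch under assumed simulated data (one regressor stands in for an estimated factor; `gamma_true`, the grid range and all coefficients are illustrative choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=n)   # stand-in for an estimated factor
q = rng.normal(size=n)   # threshold (regime-switching) variable
gamma_true = 0.3
y = 1.0 + 0.5 * x + 1.5 * x * (q > gamma_true) + rng.normal(scale=0.5, size=n)

def ssr(gamma):
    # Regime-switching design: intercept, x, and the upper-regime slope shift
    X = np.column_stack([np.ones(n), x, x * (q > gamma)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# Profile the SSR over interior quantiles of q and pick the minimizer
grid = np.quantile(q, np.linspace(0.15, 0.85, 200))
ssrs = np.array([ssr(g) for g in grid])
gamma_hat = grid[np.argmin(ssrs)]

# Likelihood-ratio-type statistic for H0: gamma = gamma0 (Gaussian errors)
lr_at = lambda g0: n * (ssr(g0) - ssrs.min()) / ssrs.min()
```

Inverting `lr_at` over the grid (keeping thresholds whose statistic falls below a critical value) yields a confidence set for the threshold parameter.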


2021 · Vol 14 · pp. 1-8
Author(s): Yook-Ngor Phang, Seng-Huat Ong, Yeh-Ching Low

The Poisson inverse Gaussian and generalized Poisson distributions are widely used to model overdispersed count data, which are commonly found in healthcare, insurance, engineering, econometrics and ecology. The inverse trinomial distribution is a relatively new count distribution arising from a one-dimensional random walk model (Shimizu & Yanagimoto, 1991). The Poisson inverse Gaussian distribution is a popular count model that has been proposed as an alternative to the negative binomial distribution. The inverse trinomial and generalized Poisson models share the characteristic of a cubic variance function, while the Poisson inverse Gaussian has a quadratic variance function. The nature of the variance function appears to be an important property when modelling overdispersed count data, so it is of interest to be able to select among the three models in practical applications. This paper considers discrimination among the three models based on the likelihood ratio statistic and computes, via Monte Carlo simulation, the probability of correct selection.
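The selection procedure amounts to fitting each candidate model to a sample, choosing the one with the larger maximized likelihood, and estimating the probability of correct selection over Monte Carlo replications. Since the paper's three distributions are not available in standard libraries, the sketch below illustrates the same procedure with Poisson vs. negative binomial as stand-in candidates (moment-based fitting; all settings are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def loglik_poisson(x):
    # MLE of the Poisson rate is the sample mean
    return stats.poisson.logpmf(x, x.mean()).sum()

def loglik_negbin(x):
    m, v = x.mean(), x.var(ddof=1)
    if v <= m:             # no overdispersion: NB offers nothing over Poisson
        return -np.inf
    p = m / v              # moment estimates of (r, p)
    r = m * p / (1 - p)
    return stats.nbinom.logpmf(x, r, p).sum()

# Monte Carlo: data truly negative binomial (overdispersed); count how often
# the likelihood comparison selects the correct model
reps, n = 500, 200
correct = 0
for _ in range(reps):
    x = rng.negative_binomial(2, 0.3, n)
    if loglik_negbin(x) > loglik_poisson(x):
        correct += 1
prob_correct = correct / reps
```

With strong overdispersion and a moderate sample size, the likelihood comparison identifies the true family almost every time; the interesting cases in the paper are distributions whose variance functions are much closer to one another.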


2021 · Vol 2021 · pp. 1-10
Author(s): Xin Qi, ZhuoXi Yu

In this paper, the authors consider the application of the blockwise empirical likelihood method to the partially linear single-index model when the errors are negatively associated, as is often the case in sequentially collected economic data. The blockwise empirical likelihood ratio statistic for the parameters of interest is then proved to be asymptotically chi-squared, so it can be used directly to construct confidence regions for those parameters. A few simulation experiments illustrate the proposed method.
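Empirical likelihood inference of this kind inverts a ratio statistic with a chi-squared limit. A stripped-down sketch for a scalar mean under dependence (blockwise reduction followed by Owen's dual problem), with an AR(1) series standing in for the paper's negatively associated errors and all settings illustrative:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, block = 500, 10
e = np.zeros(n)
for t in range(1, n):         # AR(1) with negative serial correlation
    e[t] = -0.4 * e[t - 1] + rng.normal()
x = 1.0 + e                   # observations, true mean 1.0

# Blockwise step: replace the dependent series by non-overlapping block means
z = x[: (n // block) * block].reshape(-1, block).mean(axis=1)

def neg2_log_elr(mu):
    """-2 log empirical likelihood ratio for the mean of the blocks."""
    d = z - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf         # mu outside the convex hull of the data
    # Dual problem: find the multiplier lam solving sum d/(1 + lam*d) = 0
    lo = (-1 + 1e-10) / d.max()
    hi = (-1 + 1e-10) / d.min()
    lam = brentq(lambda l: np.sum(d / (1 + l * d)), lo, hi)
    return 2 * np.sum(np.log1p(lam * d))

stat = neg2_log_elr(1.0)      # test the true mean
p_value = chi2.sf(stat, df=1) # asymptotically chi-squared with 1 df
```

Blocking restores the chi-squared calibration that plain empirical likelihood loses under dependent errors; the block length plays the role of a smoothing parameter.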


2021 · Vol 2021 (1)
Author(s): Cuixin Peng, Zhiwen Zhao

Abstract This paper considers the parameter estimation problem for a first-order threshold autoregressive conditional heteroscedasticity model using the empirical likelihood method. We obtain the empirical likelihood ratio statistic based on the estimating equation of the least-squares estimation and construct confidence regions for the model parameters. Simulation studies indicate that the empirical likelihood method outperforms the normal-approximation-based method in terms of coverage probability.
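The least-squares estimating equation here comes from the fact that the conditional expectation of the squared series equals the conditional variance, so the squared observations can be regressed on regime-specific lagged squares. A minimal sketch of simulating a first-order threshold ARCH model and recovering its parameters this way (the empirical likelihood construction on top of these estimating equations is omitted; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
a0, a1, a2 = 0.5, 0.3, 0.1   # true parameters (illustrative)

# Threshold ARCH(1): the lagged squared shock enters with a regime-
# dependent coefficient according to the sign of the previous observation
y = np.zeros(n)
for t in range(1, n):
    s2 = a0 + (a1 if y[t - 1] <= 0 else a2) * y[t - 1] ** 2
    y[t] = np.sqrt(s2) * rng.normal()

# Least-squares estimating equation: regress y_t^2 on the regime design
lag = y[:-1]
X = np.column_stack([
    np.ones(n - 1),
    lag ** 2 * (lag <= 0),   # lower-regime coefficient a1
    lag ** 2 * (lag > 0),    # upper-regime coefficient a2
])
theta_hat, *_ = np.linalg.lstsq(X, y[1:] ** 2, rcond=None)
```

The empirical likelihood ratio is then built from the per-observation scores X_t * (y_t^2 - X_t' theta), which is what yields the better-calibrated confidence regions reported in the simulations.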


2021
Author(s): Marek Pecha

Determining the number of groups, or the dimension of a feature space, related to an initial dataset projected onto the null-space of Laplace-Beltrami-type operators is a fundamental problem in applications that exploit spectral clustering techniques. This paper focuses on generalizing, and providing minor comments on, previous work by Bruneau et al., who proposed a modification of the Bartlett test, commonly used in principal component analysis, to estimate the number of groups for normalized spectral clustering approaches. The generalization is based on a relation between the distributions of the spectra of a covariance matrix and of a graph Laplacian, which allows the modified Bartlett test to be used for unnormalized spectral clustering as well. Further comments follow earlier works by Lawley and James, which allow subsets of eigenvalues to be tested by means of the likelihood ratio statistic and linkage factors. Solutions to issues arising from the limits of floating-point arithmetic are demonstrated on benchmarks employing spectral clustering for 2-phase volumetric image segmentation. On the same problem, an analysis of spectral clustering in divide-and-merge settings is presented.
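The Bartlett-type test mentioned above checks whether the trailing eigenvalues of a spectrum are equal, which indicates how many leading components (groups) are genuinely distinguishable. A minimal sketch on a covariance spectrum, assuming Gaussian data with two dominant directions; the graph-Laplacian variant discussed in the paper applies the same kind of statistic to the Laplacian spectrum:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
n, p = 300, 6
# Two dominant directions, isotropic remainder (eigenvalues 5, 3, 1, 1, 1, 1)
X = rng.normal(size=(n, p)) * np.sqrt([5.0, 3.0, 1.0, 1.0, 1.0, 1.0])
evals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

def bartlett_tail_test(evals, k, n):
    """Test H0: the smallest p-k eigenvalues are equal (Bartlett/Lawley type)."""
    tail = evals[k:]
    q = len(tail)
    # n * q * log(arithmetic mean) - n * sum(log eigenvalue): zero iff all equal
    stat = n * (q * np.log(tail.mean()) - np.log(tail).sum())
    df = (q + 2) * (q - 1) // 2
    return stat, chi2.sf(stat, df)

stat0, p0 = bartlett_tail_test(evals, 0, n)  # "all six equal": clearly rejected
stat2, p2 = bartlett_tail_test(evals, 2, n)  # tail of four: plausibly equal
```

In practice one increases k until the test stops rejecting; the smallest such k is the estimated number of distinguishable components. Refinements such as Lawley's sample-size correction, and the floating-point safeguards the paper discusses, are omitted here.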


