coverage probabilities
Recently Published Documents

TOTAL DOCUMENTS: 115 (FIVE YEARS: 41)
H-INDEX: 15 (FIVE YEARS: 2)

FEMS Microbes, 2022
Author(s): Alessandro Zulli, Annabelle Pan, Stephen M Bart, Forrest W Crawford, Edward H Kaplan, ...

Abstract: We assessed the relationship between municipal COVID-19 case rates and SARS-CoV-2 concentrations in the primary sludge of the corresponding wastewater treatment facilities. Over 1,700 daily primary sludge samples were collected from six wastewater treatment facilities whose catchments serve 18 cities and towns in the State of Connecticut, USA. Samples were analyzed for SARS-CoV-2 RNA concentrations over a 10-month period that spanned the October 2020 and winter/spring 2021 COVID-19 outbreaks in each municipality. We fit lagged regression models to estimate reported case rates in the six municipalities from the SARS-CoV-2 RNA concentrations collected daily at the corresponding facilities. The results demonstrate that SARS-CoV-2 RNA concentrations in primary sludge can estimate COVID-19 reported case rates across treatment facilities and wastewater catchments, with coverage probabilities ranging from 0.94 to 0.96. Lags of 0 to 1 days gave the model its greatest predictive power. Leave-one-out cross-validation suggests that the model can be applied broadly to wastewater catchments whose served populations span more than an order of magnitude. The close relationship between case rates and SARS-CoV-2 concentrations demonstrates the utility of primary sludge samples for monitoring COVID-19 outbreak dynamics. Estimating case rates from wastewater data can be useful in locations with limited testing availability, testing disparities, or delays in individual COVID-19 testing programs.
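As a rough illustration of the lagged-regression idea (not the authors' code; the data, lag grid, and model form below are hypothetical stand-ins), one can shift the RNA series by a candidate lag, fit ordinary least squares, select the lag by an information criterion, and check the empirical coverage of the 95% prediction intervals:

```python
# Hedged sketch: regress case rates on lagged log10 RNA concentrations (OLS),
# pick the lag by BIC, and check 95% prediction-interval coverage in-sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical daily series for one catchment.
n = 300
log_rna = np.cumsum(rng.normal(0, 0.1, n)) + 4.0            # log10 RNA copies/mL
cases = 10 ** (0.8 * log_rna - 2.0) + rng.normal(0, 2, n)   # cases per 100k

def fit_lagged(lag):
    """OLS of case rate on the RNA series shifted back by `lag` days."""
    y, x = cases[lag:], log_rna[: n - lag]
    return sm.OLS(y, sm.add_constant(x)).fit()

best = min(range(8), key=lambda k: fit_lagged(k).bic)       # lag selection by BIC
model = fit_lagged(best)

# Empirical coverage of 95% prediction intervals (the paper reports 0.94-0.96).
pred = model.get_prediction(sm.add_constant(log_rna[: n - best]))
lo, hi = pred.conf_int(obs=True, alpha=0.05).T
coverage = np.mean((cases[best:] >= lo) & (cases[best:] <= hi))
print(f"lag = {best}, empirical coverage = {coverage:.3f}")
```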


Mathematics, 2022, Vol. 10(2), pp. 167
Author(s): Niansheng Tang, Fan Liang

Various approaches, including hypothesis tests and confidence interval (CI) construction, have been proposed to assess non-inferiority and assay sensitivity via a known fraction or pre-specified margin in three-arm trials with continuous or discrete endpoints. However, little work has been done on constructing the non-inferiority margin from historical data and simultaneous generalized CIs (SGCIs) in a three-arm trial with normally distributed endpoints. Based on the generalized fiducial method and the square-and-add method, we propose two simultaneous CIs for assessing non-inferiority and assay sensitivity in a three-arm trial. For comparison, we also consider the Wald-type Bonferroni simultaneous CI and the parametric bootstrap simultaneous CI. An algorithm for evaluating the optimal sample size for attaining a pre-specified power is given. Simulation studies are conducted to investigate the performance of the proposed CIs in terms of their empirical coverage probabilities. An example taken from a study of mildly asthmatic patients illustrates the proposed simultaneous CIs. Empirical results show that the proposed generalized fiducial method and square-and-add method behave better than the other two CIs.
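For background, the square-and-add (MOVER) construction referred to here combines separate confidence limits for two parameters into limits for a linear combination; a standard statement for a difference (not the paper's specific three-arm formulation) is:

```latex
% Square-and-add (MOVER) limits for \theta_1 - \theta_2, given estimates
% \hat\theta_i with individual confidence limits (l_i, u_i):
L = \hat\theta_1 - \hat\theta_2 - \sqrt{(\hat\theta_1 - l_1)^2 + (u_2 - \hat\theta_2)^2},
\qquad
U = \hat\theta_1 - \hat\theta_2 + \sqrt{(u_1 - \hat\theta_1)^2 + (\hat\theta_2 - l_2)^2}.
```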


Author(s): Theerapong Kaewprasert, Sa-Aat Niwitpong, Suparat Niwitpong

Herein, we present four methods for constructing confidence intervals for the ratio of the coefficients of variation of inverse-gamma distributions: the percentile bootstrap, fiducial quantities, and Bayesian methods based on the Jeffreys and uniform priors. We compared their performance in terms of coverage probabilities and expected lengths via simulation studies. The results show that the confidence intervals constructed with the Bayesian method based on the uniform prior and with fiducial quantities performed better than those constructed with the Bayesian method based on the Jeffreys prior and with the percentile bootstrap. Rainfall data from Thailand are used to illustrate the efficacy of the proposed methods.
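A minimal sketch of the percentile-bootstrap interval for the ratio of two coefficients of variation, assuming inverse-gamma samples (the shapes, scales, and sample sizes below are illustrative, not taken from the paper):

```python
# Percentile bootstrap for the CV ratio of two inverse-gamma samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data; for shape a > 2 the inverse-gamma CV is 1/sqrt(a - 2).
x = stats.invgamma.rvs(a=5.0, scale=2.0, size=50, random_state=rng)
y = stats.invgamma.rvs(a=6.0, scale=3.0, size=50, random_state=rng)

def cv(s):
    """Sample coefficient of variation."""
    return s.std(ddof=1) / s.mean()

B = 2000
ratios = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=x.size, replace=True)   # resample within each group
    yb = rng.choice(y, size=y.size, replace=True)
    ratios[b] = cv(xb) / cv(yb)

lo, hi = np.percentile(ratios, [2.5, 97.5])          # 95% percentile interval
print(f"CV ratio: {cv(x) / cv(y):.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```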


2021, pp. 1-31
Author(s): Zheng Fang, Qi Li, Karen X. Yan

In this paper, we present a new nonparametric method for estimating a conditional quantile function and develop its weak convergence theory. The proposed estimator is computationally easy to implement and automatically ensures quantile monotonicity by construction. For inference, we propose to use a residual bootstrap method. Our Monte Carlo simulations show that this new estimator compares well with the check-function-based estimator in terms of estimation mean squared error. The bootstrap confidence bands yield adequate coverage probabilities. An empirical example uses a dataset of Canadian high school graduate earnings, illustrating the usefulness of the proposed method in applications.
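The authors' estimator is not reproduced here, but the general idea of obtaining automatically monotone quantiles by inverting an estimated conditional distribution function can be sketched with a simple kernel smoother as a stand-in:

```python
# Stand-in sketch: invert a kernel-weighted conditional CDF F(y|x); quantiles
# read off a single monotone F(.|x0) cannot cross for different tau levels.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def cond_quantile(x0, taus, h=0.1):
    """Quantiles of y given x = x0 via inversion of a weighted conditional CDF."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    w /= w.sum()
    order = np.argsort(y)
    cdf = np.cumsum(w[order])                # nondecreasing in y by construction
    return np.interp(taus, cdf, y[order])    # inversion => monotone in tau

print(cond_quantile(0.25, taus=[0.1, 0.5, 0.9]))  # 10th/50th/90th percentiles at x = 0.25
```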


Author(s): Louis Asiedu, Felix Mettle, Emmanuel Aidoo, Stella Lawerh

The main aim of this study is to fit a model for predicting pension liability. The study proposes a stochastic population model to determine the status of a pension scheme. By categorizing the members of the Social Security and National Insurance Trust (SSNIT) pension scheme of Ghana into five groups, a birth-and-death process with emigration and a pure death process, coupled with a Yule process assumption, were combined to formulate a model for forecasting the surplus of SSNIT, used as a proxy for assessing the solvency status of the scheme. The reliability of the proposed model is corroborated by the very high coverage probabilities of the estimates of expected surpluses it produces. The study demonstrates how easily the proposed model supports sensitivity analysis, allowing the exploration of various scenarios and thus the formulation and implementation of policies to enhance the solvency of the scheme. One major advantage of the proposed model is that it uses more information (variables) than others proposed elsewhere for the same purpose, which contributes to the precision of its estimates. A key finding of the study is that SSNIT would still have been solvent had it increased pensions by 50%.
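A toy simulation of the modelling idea (all rates, amounts, and the two-group structure below are hypothetical, not SSNIT's five-group specification): active members evolve as a birth-death process with emigration, pensioners as a pure death process, and the surplus is contributions collected minus pensions paid.

```python
# Toy birth-death sketch of a pension scheme's surplus (illustrative only).
import numpy as np

rng = np.random.default_rng(3)

active, pensioners, surplus = 100_000, 20_000, 0.0
entry, exit_, retire, die = 0.05, 0.02, 0.03, 0.04  # hypothetical annual rates
contrib, pension = 1_000.0, 3_000.0                 # hypothetical amounts per member

for year in range(1, 11):
    new = rng.poisson(entry * active)               # "births" into the scheme
    left = rng.binomial(active, exit_)              # deaths and emigration of actives
    retired = rng.binomial(active - left, retire)   # transfers to the pensioner group
    dead = rng.binomial(pensioners, die)            # pure death process
    active += new - left - retired
    pensioners += retired - dead
    surplus += contrib * active - pension * pensioners
    print(f"year {year}: active={active}, pensioners={pensioners}, surplus={surplus:,.0f}")
```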


2021, pp. 000806832110511
Author(s): Nitis Mukhopadhyay

We begin with an overview of variance stabilizing transformations (VST), along with three classical examples for completeness: the arcsine, square-root, and Fisher's z transformations (Examples 1–3). Then, we construct three new examples (Examples 4–6) of VST-based and central limit theorem (CLT)-based large-sample confidence interval methodologies. These are special examples in the sense that in each situation we also have an exact confidence interval procedure for the parameter of interest. Tables 1–3, obtained under Examples 4–6 via exact calculations, show that (a) the VST-based large-sample confidence interval methodology wins over the CLT-based large-sample methodology, (b) the VST-based intervals' exact coverage probabilities are better than or nearly the same as those of the exact confidence intervals, and (c) the VST-based intervals are never wider (in the log-scale) than the CLT-based intervals across the board. The possibility of such surprising behaviour of the VST-based confidence intervals relative to the exact intervals was not on our radar when we began this investigation. Indeed, VST-based inference methodologies may do extremely well, much more so than the existing literature reveals, as evidenced by the new Examples 4–6. AMS subject classifications: 62E20; 62F25; 62F12.
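For reference, the three classical transformations mentioned in Examples 1–3 stabilize variance as follows:

```latex
% Classical variance stabilizing transformations: binomial proportion
% \hat{p} = X/n, Poisson count X with mean \lambda, and sample correlation r
% from n bivariate normal pairs (approximate large-sample distributions).
\arcsin\sqrt{\hat{p}} \approx \mathcal{N}\!\left(\arcsin\sqrt{p},\, \tfrac{1}{4n}\right), \qquad
\sqrt{X} \approx \mathcal{N}\!\left(\sqrt{\lambda},\, \tfrac{1}{4}\right), \qquad
\tfrac{1}{2}\ln\frac{1+r}{1-r} \approx \mathcal{N}\!\left(\tfrac{1}{2}\ln\frac{1+\rho}{1-\rho},\, \tfrac{1}{n-3}\right).
```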


Entropy, 2021, Vol. 23(11), pp. 1394
Author(s): Mustapha Muhammad, Huda M. Alshanbari, Ayed R. A. Alanzi, Lixia Liu, Waqas Sami, ...

In this article, we propose the exponentiated sine-generated family of distributions. Some important properties are demonstrated, such as the series representation of the probability density function, the quantile function, moments, stress-strength reliability, and Rényi entropy. A particular member, called the exponentiated sine Weibull distribution, is highlighted; we analyze its skewness and kurtosis, moments, quantile function, residual mean and reversed mean residual life functions, order statistics, and extreme value distributions. Maximum likelihood estimation and Bayes estimation under the squared error loss function are considered. Simulation studies are used to assess these techniques, which perform satisfactorily in terms of the mean squared error, confidence intervals, and coverage probabilities of the estimates. The stress-strength reliability parameter of the exponentiated sine Weibull model is derived and estimated by maximum likelihood. Nonparametric bootstrap techniques are also used to approximate the confidence interval of the reliability parameter, and a simulation is conducted to examine its mean squared error, standard deviations, confidence intervals, and coverage probabilities. Finally, three real applications of the exponentiated sine Weibull model are provided, one of which considers stress-strength data.
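Assuming the family follows the usual sine-generated construction (the paper's exact parameterization may differ), the CDF of the exponentiated sine-G family with baseline CDF G and power parameter a > 0, and its exponentiated sine Weibull member, would read:

```latex
% Exponentiated sine-G CDF (assumed standard construction) and the
% exponentiated sine Weibull case with baseline G(x) = 1 - e^{-(x/\lambda)^k}:
F(x) = \left[\sin\!\left(\frac{\pi}{2}\, G(x)\right)\right]^{a}, \qquad
F(x) = \left[\sin\!\left(\frac{\pi}{2}\left(1 - e^{-(x/\lambda)^{k}}\right)\right)\right]^{a}, \quad x > 0.
```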


2021, Vol. 5(5), pp. 755-774
Author(s): Yadpirun Supharakonsakun

The Bayesian approach, a non-classical estimation technique, is very widely used in statistical inference for real-world situations: the parameter is treated as a random variable, and knowledge of the prior distribution is used to update its estimate. Herein, two Bayesian approaches to Poisson parameter estimation, derived from the posterior distribution under the squared error (quadratic) loss function, are proposed. Their performance was compared with frequentist (maximum likelihood) and empirical Bayes approaches through Monte Carlo simulations. The mean squared error was used as the criterion for comparing the point estimation methods; the smallest value indicates the best-performing method, with the estimated parameter value closest to the true one. Coverage probabilities (CPs) and average lengths (ALs) were obtained to evaluate the methods for constructing confidence intervals. The results reveal that the Bayesian approaches were excellent for point estimation when the true parameter value was small (0.5, 1, and 2). In the credible interval comparison, these methods obtained CP values close to the nominal 0.95 confidence level and the smallest ALs for large sample sizes (50 and 100) when the true parameter value was small (0.5, 1, and 2). Doi: 10.28991/esj-2021-01310
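The abstract does not state the prior, but under a conjugate Gamma(a, b) prior (rate parameterization, an assumption for illustration) the Poisson posterior, and the Bayes estimator under squared error loss, which is the posterior mean, are:

```latex
% Poisson-gamma conjugacy: x_1, \dots, x_n \sim \mathrm{Poisson}(\lambda),
% prior \lambda \sim \mathrm{Gamma}(a, b) with rate b (illustrative assumption).
\lambda \mid x_1, \dots, x_n \;\sim\; \mathrm{Gamma}\!\left(a + \textstyle\sum_{i=1}^{n} x_i,\; b + n\right), \qquad
\hat{\lambda}_{\mathrm{Bayes}} = \mathbb{E}[\lambda \mid x] = \frac{a + \sum_{i=1}^{n} x_i}{b + n}.
```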


2021, Vol. 0(0)
Author(s): Ba Chu

Abstract: This paper introduces an unbiased estimator, based on least squares involving time-specific cross-sectional averages, for a first-order panel autoregression with a strictly exogenous covariate. The proposed estimator is straightforward to implement as long as the variables of interest have sufficient time variation. The number of cross-sections (N) and the number of time periods (T) can be large, and there is no restriction on the growth rate of N relative to T. It is demonstrated via both theory and a simulation study that the estimator is asymptotically unbiased and that it provides correct empirical coverage probabilities for the 'true' coefficients of the model for various combinations of N and T. An empirical application is also provided to confirm the feasibility of the proposed approach.
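One plausible reading of the construction (a hedged sketch, not the paper's exact estimator): subtract the time-specific cross-sectional averages to remove common time effects, then run pooled least squares of the demeaned outcome on its lag and the covariate.

```python
# Hedged sketch of a panel AR(1) with a strictly exogenous covariate:
# demean by cross-sectional averages at each t, then pooled least squares.
import numpy as np

rng = np.random.default_rng(4)
N, T, rho, beta = 200, 50, 0.5, 1.0

x = rng.normal(size=(N, T))                 # strictly exogenous covariate
time_fx = rng.normal(size=T)                # common time effects
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + beta * x[:, t] + time_fx[t] + rng.normal(size=N)

yd = y - y.mean(axis=0)                     # subtract time-specific averages
xd = x - x.mean(axis=0)

Y = yd[:, 1:].ravel()
Z = np.column_stack([yd[:, :-1].ravel(), xd[:, 1:].ravel()])
rho_hat, beta_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
print(f"rho_hat = {rho_hat:.3f} (true 0.5), beta_hat = {beta_hat:.3f} (true 1.0)")
```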

