estimate precision
Recently Published Documents

TOTAL DOCUMENTS: 26 (FIVE YEARS: 5)
H-INDEX: 6 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Ming Ma

Abstract Objective Survey research is widely used in social studies. Although it is well known that nonresponse can bias results and impair precision, how nonresponse at different survey stages affects the precision of estimates has historically been overlooked, even though this information is essential for guiding recruitment plans. This study examined and compared the effects of first- and second-level nonresponse on the precision of prevalence estimates in multi-stage survey studies. Starting from a benchmark dataset from a state-level survey, we used a simulation approach to create datasets with different first- and second-level nonresponse rates and then compared the margin of error (an indicator of precision) for 12 outcomes between datasets with first- versus second-level nonresponse. Results At the same nonresponse rate, the mean margin of error was greater for data with first-level nonresponse than for data with second-level nonresponse. As the nonresponse rate increased, the loss of precision grew faster for data with first-level nonresponse, suggesting that effort spent recruiting primary sampling units is more crucial for improving estimate precision in survey studies.
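The simulation design described above can be illustrated with a minimal sketch. The cluster counts, cluster sizes, prevalence, and nonresponse rates below are hypothetical placeholders rather than values from the study; the point is only to show how dropping whole primary sampling units (first-level nonresponse) versus dropping individuals within responding units (second-level nonresponse) feeds into a design-based margin of error.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_moe(samples):
    """95% margin of error for a prevalence from a two-stage (cluster)
    sample, based on the variance of cluster-level means across PSUs."""
    cluster_means = np.array([s.mean() for s in samples if len(s) > 0])
    return 1.96 * cluster_means.std(ddof=1) / np.sqrt(len(cluster_means))

def simulate(n_clusters=50, cluster_size=40, prevalence=0.2,
             nonresponse=0.3, level="first"):
    """Draw a two-stage sample and apply nonresponse at one level.
    'first'  : whole primary sampling units fail to respond.
    'second' : individuals within responding units fail to respond."""
    clusters = [rng.binomial(1, prevalence, cluster_size)
                for _ in range(n_clusters)]
    if level == "first":
        keep = rng.random(n_clusters) > nonresponse
        clusters = [c for c, k in zip(clusters, keep) if k]
    else:
        clusters = [c[rng.random(len(c)) > nonresponse] for c in clusters]
    return cluster_moe(clusters)

for rate in (0.1, 0.3, 0.5):
    first = np.mean([simulate(nonresponse=rate, level="first") for _ in range(500)])
    second = np.mean([simulate(nonresponse=rate, level="second") for _ in range(500)])
    print(f"nonresponse={rate:.1f}  MOE first-level={first:.4f}  second-level={second:.4f}")
```

Under these assumptions, losing whole clusters inflates the margin of error faster than losing the same share of individuals within clusters, matching the direction of the abstract's finding.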


Author(s):  
Ayesha Appa ◽  
Saki Takahashi ◽  
Isabel Rodriguez-Barraquer ◽  
Gabriel Chamie ◽  
Aenor Sawyer ◽  
...  

Abstract Background Limited systematic surveillance for SARS-CoV-2 in the early months of the United States epidemic curtailed accurate appraisal of transmission intensity. Our objective was to perform case detection of an entire rural community to quantify SARS-CoV-2 transmission using PCR and antibody testing. Methods We conducted a cross-sectional survey of SARS-CoV-2 infection in the rural town of Bolinas, California (population 1,620), four weeks following shelter-in-place orders. Participants were tested between April 20 and 24, 2020. Prevalence by PCR and seroprevalence from two forms of antibody testing (Abbott ARCHITECT IgG and an in-house IgG ELISA) were assessed in parallel. Results Of 1,891 participants, 1,312 were confirmed Bolinas residents (>80% community ascertainment). Zero participants were PCR positive. Assuming 80% sensitivity, it would have been unlikely to observe these results (p<0.05) if there were >3 active infections in the community. Based on antibody results, estimated prevalence of prior infection was 0.16% (95% CrI: 0.02%, 0.46%). The positive predictive value (PPV) of a positive result on both tests was 99.11% (95% CrI: 95.75%, 99.94%), compared to PPV 44.19%-63.32% (95% CrI range 3.25%-98.64%) if one test was utilized. Conclusions Four weeks following shelter-in-place, SARS-CoV-2 infection in a rural Northern California community was extremely rare. In this low prevalence setting, use of two antibody tests increased seroprevalence estimate precision. This was one of the first community-wide studies to successfully implement synchronous PCR and antibody testing, particularly in a rural setting. Widespread testing remains an underpinning of effective disease control in conjunction with consistent uptake of public health measures.
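The claim that zero PCR positives would be unlikely (p<0.05) if more than three infections were active can be checked with a back-of-the-envelope sketch. The calculation below is illustrative only: it takes the stated 80% test sensitivity, treats the roughly 81% community ascertainment (1,312 of about 1,620 residents) as the probability that any given infected resident was tested, and assumes independence; it mirrors the spirit of the abstract's statement, not the paper's exact model.

```python
# Illustrative sketch: probability of observing zero PCR positives
# if k residents were actively infected, assuming each infected
# resident is independently tested and detected.
sensitivity = 0.80           # assumed PCR sensitivity (from the abstract)
ascertainment = 1312 / 1620  # confirmed residents tested / town population

p_detect = sensitivity * ascertainment  # chance one infection is caught

for k in range(1, 7):
    p_zero = (1 - p_detect) ** k        # probability of missing all k infections
    print(f"k={k} active infections -> P(zero positives) = {p_zero:.3f}")
```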


Author(s):  
Kris V Parag ◽  
Oliver G Pybus ◽  
Chieh-Hsi Wu

Abstract In Bayesian phylogenetics, the coalescent process provides an informative framework for inferring dynamical changes in the effective size of a population from a sampled phylogeny (or tree) of its sequences. Popular coalescent inference methods such as the Bayesian Skyline Plot, Skyride and Skygrid all model this population size with a discontinuous, piecewise-constant likelihood but apply a smoothing prior to ensure that posterior population size estimates transition gradually with time. These prior distributions implicitly encode extra population size information that is not available from the observed coalescent tree (data). Here we present a novel statistic, Ω, to quantify and disaggregate the relative contributions of the coalescent data and prior assumptions to the resulting posterior estimate precision. Our statistic also measures the additional mutual information introduced by such priors. Using Ω we show that, because it is surprisingly easy to over-parametrise piecewise-constant population models, common smoothing priors can lead to overconfident and potentially misleading conclusions, even under robust experimental designs. We propose Ω as a useful tool for detecting when posterior estimate precision is overly reliant on prior choices.
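The central idea, separating how much of the posterior's precision comes from the data versus the smoothing prior, has a simple conjugate analogue. The sketch below is not the authors' Ω statistic and does not use the coalescent model; it only illustrates, for a normal model with known variance, how the share of posterior precision contributed by the prior grows as the data per parameter shrink, which is the over-parametrisation concern the abstract raises.

```python
# Conjugate-normal illustration (not the paper's coalescent model):
# posterior precision = prior precision + data precision, so the
# fraction contributed by the prior is directly readable.
def prior_share(n_obs, data_var=1.0, prior_var=0.25):
    data_precision = n_obs / data_var  # precision supplied by n_obs observations
    prior_precision = 1.0 / prior_var  # precision supplied by a tight, smoothing-style prior
    return prior_precision / (prior_precision + data_precision)

# Fewer coalescent events per piecewise-constant segment behaves like
# fewer observations per parameter: the prior quietly dominates.
for n in (50, 10, 4, 1):
    print(f"observations per parameter = {n:2d} -> "
          f"prior share of posterior precision = {prior_share(n):.2f}")
```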


2019 ◽  
Vol 34 (23-24) ◽  
pp. 4838-4859 ◽  
Author(s):  
Marcus E. Berzofsky ◽  
Lynn Langton ◽  
Christopher Krebs ◽  
Christine Lindquist ◽  
Michael Planty

Many colleges and universities conduct web-based campus climate surveys to understand the prevalence and nature of sexual assault among their students. When designing and fielding a web survey to measure a sensitive topic like sexual assault, methodological decisions, including the length of the field period and the use or amount of an incentive, can affect the representativeness of the respondent sample leading to biased or imprecise estimates. This study uses data from the Campus Climate Survey Validation Study (CCSVS) to assess how the interaction between field period length and survey incentive amount affects nonresponse, sample representativeness, and the precision of survey estimates. Research suggests that using robust incentives gives potential survey respondents a reason to complete the survey beyond their intrinsic motivation to do so. Likewise, extending the field period gives more time to people who may be less intrinsically motivated to complete the survey. Both serve to increase sample size and representativeness, minimize bias, and improve estimate precision. Schools, however, sometimes lack the time and/or resources for both a robust incentive and a lengthy field period, and this study examines the extent to which the potential negative impacts of not using one can be mitigated by the presence of the other. Findings indicate that target response rates can be achieved using a smaller incentive if the field period is lengthy but, even with a lengthy field period, the use of a smaller incentive can result in biased estimates due to a lack of representativeness. Conversely, when a robust incentive is used and weights are developed to adjust for nonresponse, a shorter field period will not have a significant impact on point estimates, but the estimates will be less precise due to fewer respondents participating in the survey.
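The precision side of this trade-off is easy to quantify. The sketch below uses hypothetical numbers (a notional invited sample and a 20% outcome prevalence, not CCSVS values) to show how the margin of error for a prevalence estimate widens as the response rate, and hence the respondent count, falls, which is the cost the study attributes to shortening the field period even when a robust incentive is used.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a simple-random-sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

invited = 5000     # hypothetical invited sample, not a CCSVS figure
prevalence = 0.20  # hypothetical outcome prevalence

for response_rate in (0.60, 0.45, 0.30, 0.15):
    n = int(invited * response_rate)
    moe = margin_of_error(prevalence, n)
    print(f"response rate {response_rate:.0%}: n={n:4d}, MOE = ±{moe*100:.1f} points")
```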


2018 ◽  
Author(s):  
Levi John Wolf ◽  
Luc Anselin ◽  
Daniel Arribas-Bel ◽  
Lee Rivers Mobley

Multilevel models have been applied to study many geographical processes in epidemiology, economics, political science, sociology, urban analytics, and transportation. They are most often used to express how the effect of a treatment or intervention may vary by geographical group, a form of spatial process heterogeneity. In addition, these models provide a notion of "platial" dependence: observations that are within the same geographical place are modeled as similar to one another. Recent work has shown that spatial dependence can be introduced into multilevel models, and has examined the empirical properties of these models' estimates. However, systematic attention to the mathematical structure of these models has been lacking. This paper examines a kind of multilevel model that includes both "platial" and "spatial" dependence. Using mathematical analysis, we obtain the relationship between classic multilevel, spatial multilevel, and single-level models. This mathematical structure exposes a tension between a main benefit of multilevel models, estimate shrinkage, and the effects of spatial dependence. We show, both mathematically and empirically, that classic multilevel models may overstate estimate precision and understate estimate shrinkage when spatial dependence is present. This result extends long-standing results in single-level modeling to multilevel models.
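The tension described here can be seen in the textbook shrinkage factor. In a classic multilevel model a group mean is shrunk toward the grand mean by lambda_j = tau^2 / (tau^2 + sigma^2 / n_j), and positive within-group correlation rho lowers the effective sample size to n_eff = n_j / (1 + (n_j - 1) * rho). The sketch below is a standard result, not the paper's derivation, and uses illustrative variance components; it shows how ignoring rho yields too little shrinkage and hence an overstated precision for the group estimate.

```python
def shrinkage(tau2, sigma2, n):
    """Classic multilevel shrinkage factor for a group of size n."""
    return tau2 / (tau2 + sigma2 / n)

def effective_n(n, rho):
    """Effective sample size under equicorrelation rho within a group."""
    return n / (1 + (n - 1) * rho)

tau2, sigma2, n = 1.0, 4.0, 25  # illustrative variance components and group size
for rho in (0.0, 0.1, 0.3):
    n_eff = effective_n(n, rho)
    print(f"rho={rho:.1f}: n_eff={n_eff:5.1f}, "
          f"naive shrinkage={shrinkage(tau2, sigma2, n):.2f}, "
          f"dependence-aware shrinkage={shrinkage(tau2, sigma2, n_eff):.2f}")
```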


2018 ◽  
Author(s):  
Joel Eduardo Martinez ◽  
Friederike Funk ◽  
Alexander Todorov

Identifying relative idiosyncratic and shared contributions to judgments is a fundamental challenge to the study of human behavior, yet there is no established method for estimating these contributions. Using edge cases of stimuli varying in intra-rater reliability and inter-rater agreement – faces (high on both), objects (high on the former, low on the latter), and complex patterns (low on both) – we show that variance component analyses (VCAs) accurately captured the psychometric properties of the data (Study 1). Simulations showed that the VCA generalizes to any arbitrary continuous rating and that both sample size and stimulus set size affect estimate precision (Study 2). Generally, a minimum of 60 raters and 30 stimuli provided reasonable estimates within our simulations. Furthermore, VCA estimates stabilized given more than two repeated measures, consistent with the finding that both intra-rater reliability and inter-rater agreement increased nonlinearly with repeated measures (Study 3). The VCA provides a rigorous examination of where variance lies in the data, can be implemented using mixed models with crossed random effects, and is general enough to be useful in any judgment domain where agreement and disagreement are important to quantify and multiple raters independently rate multiple stimuli.
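A minimal sketch of the kind of decomposition a VCA performs: simulate ratings from a crossed rater x stimulus design with known variance components and recover them from two-way random-effects mean squares. The variances below are arbitrary illustrations, and the paper fits mixed models with crossed random effects rather than this closed-form ANOVA shortcut.

```python
import numpy as np

rng = np.random.default_rng(1)
n_raters, n_stimuli = 60, 30                    # the abstract's rough minimums
var_rater, var_stim, var_resid = 0.5, 1.0, 2.0  # illustrative true components

# Simulate one rating per rater x stimulus cell (fully crossed design).
rater_fx = rng.normal(0, np.sqrt(var_rater), n_raters)
stim_fx = rng.normal(0, np.sqrt(var_stim), n_stimuli)
noise = rng.normal(0, np.sqrt(var_resid), (n_raters, n_stimuli))
ratings = rater_fx[:, None] + stim_fx[None, :] + noise

# Two-way random-effects ANOVA without replication: recover the
# components from the rater, stimulus, and residual mean squares.
grand = ratings.mean()
ms_rater = n_stimuli * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_raters - 1)
ms_stim = n_raters * ((ratings.mean(axis=0) - grand) ** 2).sum() / (n_stimuli - 1)
resid = (ratings - ratings.mean(axis=1, keepdims=True)
         - ratings.mean(axis=0, keepdims=True) + grand)
ms_resid = (resid ** 2).sum() / ((n_raters - 1) * (n_stimuli - 1))

print("rater variance    ~", round((ms_rater - ms_resid) / n_stimuli, 2))
print("stimulus variance ~", round((ms_stim - ms_resid) / n_raters, 2))
print("residual variance ~", round(ms_resid, 2))
```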


2017 ◽  
Vol 3 (4) ◽  
pp. 57 ◽  
Author(s):  
Felix Bongomin ◽  
Sara Gago ◽  
Rita Oladele ◽  
David Denning

2016 ◽  
Vol 91 (1-2) ◽  
pp. 161-176
Author(s):  
Maral Kichian

The natural rate of interest is an unobservable entity and its measurement presents some important empirical challenges. In this paper, we use identification-robust methods and central bank real-time staff projections to obtain estimates for the equilibrium real rate from contemporaneous and forward-looking Taylor-type interest rate rules. The methods notably account for the potential presence of endogeneity, under-identification, and errors-in-variables concerns. Our applications are conducted on Canadian data. The results reveal some important identification difficulties associated with some of our models, reinforcing the need to use identification-robust methods to estimate such policy functions. Despite these challenges, we are able to obtain fairly comparable point estimates for the real equilibrium interest rate across our different models, and, in the case of the best-fitting model, remarkably precise estimates.
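One way to see where the equilibrium real rate enters such rules: in a contemporaneous Taylor-type rule i_t = r* + pi* + phi_pi (pi_t - pi*) + phi_y y_t, the intercept of a regression of i_t on inflation and the output gap equals r* + (1 - phi_pi) pi*, so r* can be backed out once an inflation target pi* is assumed. The sketch below runs plain OLS on simulated data as an illustration only; it is not the paper's identification-robust estimation, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
pi_star, r_star = 2.0, 1.5  # assumed inflation target and true equilibrium real rate
phi_pi, phi_y = 1.5, 0.5    # true policy-rule coefficients

inflation = pi_star + rng.normal(0, 1, T)
output_gap = rng.normal(0, 1, T)
rate = (r_star + pi_star + phi_pi * (inflation - pi_star)
        + phi_y * output_gap + rng.normal(0, 0.3, T))

# OLS of the policy rate on a constant, inflation, and the output gap.
X = np.column_stack([np.ones(T), inflation, output_gap])
c, b_pi, b_y = np.linalg.lstsq(X, rate, rcond=None)[0]

# Intercept c = r* + (1 - phi_pi) * pi_star  =>  back out r*.
r_star_hat = c - (1 - b_pi) * pi_star
print(f"implied equilibrium real rate: {r_star_hat:.2f} (true value {r_star})")
```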


2016 ◽  
Vol 124 ◽  
pp. 155-158 ◽  
Author(s):  
Daniel A. Goncalves ◽  
Bradley T. Jones ◽  
George L. Donati
