Weighting Large Datasets with Complex Sampling Designs: Choosing the Appropriate Variance Estimation Method

2011 ◽  
Vol 10 (1) ◽  
pp. 110-115 ◽  
Author(s):  
Sara Mann ◽  
James Chowhan


2016 ◽  
Vol 35 (4) ◽  
Author(s):  
Helga Wagner ◽  
Doris Eckmair

Choosing the appropriate variance estimation method in complex surveys is a difficult task, since there exists a variety of techniques that usually cannot be compared mathematically. A relatively easy way to accomplish such a comparison is on the basis of simulation studies. Though simulation studies are widely used in statistics, they are not a standard tool for investigating the properties of estimators under complex survey sampling designs. In this paper we describe the setup of a simulation study based on the sampling plan of the Austrian Microcensus (AMC) used from 1994 to 2003, which is an example of a very complex sampling plan. To illustrate the procedure, we conducted a simulation study comparing basic variance estimators. The results reveal the extent to which simple variance estimators may underestimate the true sampling error in close-to-reality situations.
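
A small-scale analogue of such a study can be sketched in a few lines. The Monte Carlo comparison below is not the AMC design; the strata, cluster sizes, and population parameters are invented purely to show how repeated stratified cluster samples expose the gap between a naive SRS variance estimator and a design-based (between-PSU, within-stratum) estimator.

```python
# Hedged sketch: compare a naive SRS variance estimate with a design-based one
# for a stratified cluster sample drawn from a synthetic population.
import numpy as np

rng = np.random.default_rng(42)

# hypothetical finite population: H strata, each split into equal-size clusters
H, M = 4, 20                          # number of strata, elements per cluster
K = np.array([50, 80, 120, 150])      # clusters per stratum
stratum_mean = np.array([10., 12., 15., 20.])

clusters, cluster_stratum = [], []
for h in range(H):
    for _ in range(K[h]):
        effect = rng.normal(0, 3)     # shared cluster effect -> intracluster correlation
        clusters.append(stratum_mean[h] + effect + rng.normal(0, 1, size=M))
        cluster_stratum.append(h)
cluster_stratum = np.array(cluster_stratum)
idx_by_stratum = [np.flatnonzero(cluster_stratum == h) for h in range(H)]

f = 0.10                              # same cluster sampling fraction in every stratum
n = np.round(f * K).astype(int)       # clusters drawn per stratum
W = K / K.sum()                       # stratum weights N_h / N (equal cluster sizes)

est, v_naive, v_design = [], [], []
for _ in range(2000):                 # Monte Carlo replications of the whole survey
    ybar_h, s2_h, elems = np.empty(H), np.empty(H), []
    for h in range(H):
        chosen = rng.choice(idx_by_stratum[h], size=n[h], replace=False)
        cl_means = np.array([clusters[i].mean() for i in chosen])
        ybar_h[h], s2_h[h] = cl_means.mean(), cl_means.var(ddof=1)
        elems.append(np.concatenate([clusters[i] for i in chosen]))
    y = np.concatenate(elems)
    est.append(np.dot(W, ybar_h))                            # stratified mean estimate
    v_naive.append(y.var(ddof=1) / y.size)                   # pretends the elements are an SRS
    v_design.append(np.sum(W**2 * (1 - n / K) * s2_h / n))   # between-PSU, within-stratum

print(f"empirical sampling variance: {np.var(est, ddof=1):.5f}")
print(f"mean naive SRS estimate    : {np.mean(v_naive):.5f}")
print(f"mean design-based estimate : {np.mean(v_design):.5f}")
```

With strong intracluster correlation, the naive estimate typically sits far below the empirical sampling variance, while the design-based estimate tracks it closely.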


2020 ◽  
Vol 165 ◽  
pp. 03005
Author(s):  
Li Jianzhang

Using precision trigonometric elevation instead of precision levelling to build a CPIII elevation control network greatly increases the speed of CPIII control network construction. However, the accuracy of a CPIII precision trigonometric elevation control network still struggles to reach that of a CPIII precision levelling network. Building on the existing parameter method, this paper introduces a portion of precision levelling observations into a joint adjustment and uses Helmert's variance component estimation method for rigorous weight determination. Our experiments show that when the number of precision levelling observations participating in the joint adjustment exceeds one third of the total number of observations in the CPIII precision levelling network, the accuracy of the CPIII precision trigonometric elevation control network is effectively improved.
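
As a rough illustration of the weighting step, the sketch below runs a simplified iterative form of Helmert variance component estimation for two observation groups in a joint least-squares height adjustment. The network geometry, noise levels, and observation lists are invented; they are not the CPIII data or the exact adjustment model of the paper.

```python
# Hedged sketch: simplified iterative Helmert variance component estimation
# for two groups of height-difference observations in a tiny simulated network.
import numpy as np

rng = np.random.default_rng(7)

true_h = np.array([0.0, 1.250, 2.470, 0.830])    # heights; point 0 is the datum
n_unknown = 3                                     # heights of points 1..3

def design_row(i, j):
    """Row of A for an observed height difference h_j - h_i (datum excluded)."""
    row = np.zeros(n_unknown)
    if j > 0: row[j - 1] += 1.0
    if i > 0: row[i - 1] -= 1.0
    return row

def simulate(pairs, sigma):
    A = np.array([design_row(i, j) for i, j in pairs])
    l = np.array([true_h[j] - true_h[i] + rng.normal(0, sigma) for i, j in pairs])
    return A, l

# group 1: precise levelling lines; group 2: trigonometric height differences
A1, l1 = simulate([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], sigma=0.001)
A2, l2 = simulate([(0, 1), (1, 3), (2, 3), (0, 3), (1, 2), (0, 2)], sigma=0.005)

s2 = np.array([1.0, 1.0])                         # start with equal variance components
for _ in range(20):
    P1, P2 = np.eye(len(l1)) / s2[0], np.eye(len(l2)) / s2[1]
    N1, N2 = A1.T @ P1 @ A1, A2.T @ P2 @ A2
    N = N1 + N2
    x = np.linalg.solve(N, A1.T @ P1 @ l1 + A2.T @ P2 @ l2)
    v1, v2 = A1 @ x - l1, A2 @ x - l2
    r1 = len(l1) - np.trace(np.linalg.solve(N, N1))   # group redundancies
    r2 = len(l2) - np.trace(np.linalg.solve(N, N2))
    upd = np.array([v1 @ P1 @ v1 / r1, v2 @ P2 @ v2 / r2])
    s2 *= upd                                         # rescale components, re-weight groups
    if np.allclose(upd, 1.0, atol=1e-3):
        break

print("estimated heights        :", np.round(x, 4))
print("variance components s^2  :", s2)   # ratio roughly reflects the two precisions
```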


Author(s):  
Michael Osei Mireku ◽  
Alina Rodriguez

The objective was to investigate the association between time spent on waking activities and nonalignment of sleep duration with recommendations in a representative sample of the US population. We analysed time use data from the American Time Use Survey (ATUS), 2015–2017 (N = 31,621). National Sleep Foundation (NSF) age-specific sleep recommendations were used to define recommended (aligned) sleep duration. The balanced repeated replication variance estimation method was applied to the ATUS data to calculate weighted estimates. Less than half of the US population had a sleep duration that mapped onto the NSF recommendations, and alignment was higher on weekdays (45%) than at weekends (33%). The proportion sleeping longer than the recommended duration was higher than the proportion sleeping shorter on both weekdays and weekends (p < 0.001). Time spent on work, personal care, socialising, travel, TV watching, education, and total screen time was associated with nonalignment with the sleep recommendations. In comparison to the group with the recommended sleep duration, those with too-short sleep spent more time on work, travel, socialising, relaxing, and leisure. By contrast, those who slept too long spent relatively less time on each of these activities. The findings indicate that sleep duration among the US population does not map onto the NSF sleep recommendations, mostly because of a higher proportion of long sleepers compared to short sleepers. More time spent on work, travel, socialising, and relaxing activities is strongly associated with an increased risk of nonalignment with the NSF sleep duration recommendations.
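
The replicate-weight logic behind this kind of variance estimation can be sketched as follows. The data frame, column names, number of replicates, and the way the replicate weights are built are all hypothetical stand-ins; the actual ATUS files document their own replication scheme and scaling constants.

```python
# Hedged sketch: variance of a weighted estimate from replicate weights, in the
# spirit of balanced repeated replication (BRR). All data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# hypothetical person-level file: alignment flag, full weight, R replicate weights
n, R = 5000, 80
df = pd.DataFrame({
    "aligned": rng.integers(0, 2, size=n),        # 1 = sleep within recommendation
    "weight":  rng.uniform(500, 5000, size=n),    # full sample weight
})
rep_cols = [f"repwt{r}" for r in range(R)]
for c in rep_cols:                                # crude stand-in replicate weights
    df[c] = df["weight"] * rng.choice([0.5, 1.5], size=n)

def weighted_prop(d, w):
    """Weighted proportion of respondents whose sleep meets the recommendation."""
    return np.average(d["aligned"], weights=d[w])

theta = weighted_prop(df, "weight")               # point estimate with full weights
theta_r = np.array([weighted_prop(df, c) for c in rep_cols])

# basic BRR variance: mean squared deviation of the replicate estimates around
# the full-sample estimate (Fay-type or other scalings would rescale this term)
var_brr = np.mean((theta_r - theta) ** 2)
print(f"aligned proportion = {theta:.3f}  (SE = {np.sqrt(var_brr):.3f})")
```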


2021 ◽  
Author(s):  
Aja Louise Murray ◽  
Anastasia Ushakova ◽  
Helen Wright ◽  
Tom Booth ◽  
Peter Lynn

Complex sampling designs involving features such as stratification, cluster sampling, and unequal selection probabilities are often used in large-scale longitudinal surveys to improve cost-effectiveness and ensure adequate sampling of small or under-represented groups. However, complex sampling designs create challenges when there is a need to account for non-random attrition, a near inevitability in social science longitudinal studies. In this article we discuss these challenges and demonstrate the application of weighting approaches to simultaneously account for non-random attrition and complex design in a large UK population-representative survey. Using an auto-regressive latent trajectory model with structured residuals (ALT-SR) to model the relations between relationship satisfaction and mental health in the Understanding Society study as an example, we provide guidance on implementing this approach in both R and Mplus. Two standard error estimation approaches are illustrated: pseudo-maximum likelihood robust estimation and bootstrap resampling. A comparison of unadjusted and design-adjusted results also highlights that ignoring a complex survey design when fitting structural equation models can result in misleading conclusions.
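
The bootstrap route to design-adjusted standard errors can be illustrated with a stripped-down sketch that resamples primary sampling units (PSUs) within strata and re-estimates a simple weighted statistic in place of the full ALT-SR model. The column names and data below are hypothetical and do not correspond to Understanding Society variables or its released replicate weights.

```python
# Hedged sketch: simplified survey bootstrap that resamples PSUs within strata
# and uses a weighted correlation as a stand-in for the full SEM of interest.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# hypothetical survey extract
n = 4000
df = pd.DataFrame({
    "stratum":      rng.integers(0, 10, size=n),
    "psu":          rng.integers(0, 200, size=n),
    "weight":       rng.uniform(0.2, 3.0, size=n),
    "satisfaction": rng.normal(5, 1.5, size=n),
    "ghq":          rng.normal(11, 5, size=n),
})

def weighted_corr(d):
    """Weighted Pearson correlation between the two analysis variables."""
    w = d["weight"].to_numpy()
    x, y = d["satisfaction"].to_numpy(), d["ghq"].to_numpy()
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    return cov / np.sqrt(np.average((x - mx) ** 2, weights=w) *
                         np.average((y - my) ** 2, weights=w))

est = weighted_corr(df)

# bootstrap: within each stratum, draw its PSUs with replacement and keep every
# observation belonging to a drawn PSU (rescaling corrections are omitted here)
groups = {(s, p): g for (s, p), g in df.groupby(["stratum", "psu"])}
psus_by_stratum = df.groupby("stratum")["psu"].unique()

boot = []
for _ in range(500):
    parts = []
    for s, psus in psus_by_stratum.items():
        drawn = rng.choice(psus, size=len(psus), replace=True)
        parts.extend(groups[(s, p)] for p in drawn)
    boot.append(weighted_corr(pd.concat(parts)))

print(f"weighted correlation = {est:.3f}, bootstrap SE = {np.std(boot, ddof=1):.3f}")
```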


1997 ◽  
Vol 54 (3) ◽  
pp. 616-630 ◽  
Author(s):  
S J Smith

Trawl surveys using stratified random designs are widely used on the east coast of North America to monitor groundfish populations. Statistical quantities estimated from these surveys are derived on a randomization basis and do not require that a probability model be postulated for the data. However, the large-sample properties of these estimates may not be appropriate for the small sample sizes and skewed data characteristic of bottom trawl surveys. In this paper, three bootstrap resampling strategies that incorporate complex sampling designs are used to explore the properties of estimates in small-sample situations. A new form of the bias-corrected and accelerated confidence interval is introduced for stratified random surveys. Simulation results indicate that the bias-corrected and accelerated confidence limits may overcorrect for the trawl survey data and that percentile limits were closer to the expected values. Nonparametric density estimates were used to investigate the effects of unusually large catches of fish on the bootstrap estimates and confidence intervals. Bootstrap variance estimates decreased as increasingly smoother distributions were assumed for the observations in the stratum with the large catch. Lower confidence limits generally increased with increasing smoothness, but the upper bound depended upon assumptions about the shape of the distribution.
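
A bare-bones version of a design-respecting bootstrap for such surveys is sketched below with invented catch data: tows are resampled with replacement within each stratum and percentile limits are formed for the stratified mean. It deliberately omits the bias-corrected and accelerated adjustment and the rescaling corrections a production implementation would consider.

```python
# Hedged sketch: stratified bootstrap with percentile confidence limits for a
# survey abundance index. Strata, catches, and areas are invented.
import numpy as np

rng = np.random.default_rng(3)

# hypothetical survey: catch per tow by stratum, with stratum areas as weights
catches = {
    "A": np.array([0.0, 2.1, 0.4, 9.8, 1.3, 0.0, 3.2]),
    "B": np.array([5.6, 0.0, 1.1, 250.0, 4.4, 2.0]),     # one unusually large catch
    "C": np.array([0.7, 0.0, 0.0, 1.9, 0.3, 0.8, 0.0, 2.5]),
}
area = {"A": 1200.0, "B": 800.0, "C": 2000.0}
W = {h: a / sum(area.values()) for h, a in area.items()}  # stratum weights

def stratified_mean(data):
    return sum(W[h] * data[h].mean() for h in data)

est = stratified_mean(catches)

# resample tows with replacement within each stratum, recompute the index
boot = np.array([
    stratified_mean({h: rng.choice(y, size=y.size, replace=True)
                     for h, y in catches.items()})
    for _ in range(5000)
])

lo, hi = np.percentile(boot, [2.5, 97.5])                 # percentile interval
print(f"stratified mean = {est:.2f}, 95% percentile CI = ({lo:.2f}, {hi:.2f})")
```

The unusually large catch in stratum B dominates the upper limit, mirroring the sensitivity to large catches examined in the paper.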


2020 ◽  
Vol 2020 (1) ◽  
pp. 1-20
Author(s):  
Lili Yao ◽  
Shelby Haberman ◽  
Daniel F. McCaffrey ◽  
J. R. Lockwood
