complex sampling designs
Recently Published Documents


TOTAL DOCUMENTS

33
(FIVE YEARS 13)

H-INDEX

8
(FIVE YEARS 1)

2021 ◽  
Vol 99 (Supplement_3) ◽  
pp. 63-63
Author(s):  
Sandra L Rodriguez-Zas

Abstract Companion animal researchers have been at the forefront of using survey methodologies to study dogs' and cats' dietary and health patterns in the general population. The reporting of survey results has increased in recent years, facilitated by the rise in internet access, the modest cost of conducting web surveys, and the capability to target surveys to pet owners through address lists collected by services and social media. Data from population surveys have the potential to garner unique and comprehensive information that complements the understanding offered by designed experiments. Recent developments in survey methodologies and the availability of user-friendly survey tools enable the collection of large-scale or even Big Data sets, not only in the number of survey responses but also in the number and type of variables measured. Irrespective of the sample size, the study of survey data necessitates the consideration of complex sampling designs and analysis approaches that reflect the nature of these data. An overview of the characteristics of complex sampling designs typical of survey data, with applications to companion animal nutrition, is presented. The fundamentals of the analytical approaches that are suitable for survey data are demonstrated, and procedures available to accommodate clustering, stratification, underrepresentation, and nonresponse are reviewed. Examples of survey data visualization and analysis strategies are presented.
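The core idea behind accommodating unequal selection probabilities can be illustrated with a minimal sketch. The scenario and all numbers below are hypothetical (a simulated pet-diet survey, not the abstract's data): owners in one stratum are under-sampled, so an unweighted mean is biased, while an inverse-probability-weighted estimate recovers the population value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: daily caloric intake of pet dogs, where
# owners of large dogs (higher intake) are harder to reach.
n = 2_000
large = rng.random(n) < 0.5                     # stratum indicator
intake = np.where(large,
                  rng.normal(900, 80, n),       # large dogs
                  rng.normal(500, 60, n))       # small dogs

# Unequal selection probabilities: large-dog owners respond less often.
p_select = np.where(large, 0.1, 0.4)
sampled = rng.random(n) < p_select

y = intake[sampled]
w = 1.0 / p_select[sampled]                     # inverse-probability weights

naive_mean = y.mean()                           # ignores the design
weighted_mean = np.sum(w * y) / np.sum(w)       # design-weighted estimate

true_mean = intake.mean()
print(f"true {true_mean:.0f}  naive {naive_mean:.0f}  weighted {weighted_mean:.0f}")
```

The naive mean underestimates intake because the high-intake stratum is under-represented in the sample; reweighting by the inverse of each unit's selection probability restores the population composition. Production analyses would additionally account for clustering and stratification in variance estimation, as the abstract notes.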


2021 ◽  
pp. 001316442110075
Author(s):  
James Soland

Considerable thought is often put into designing randomized control trials (RCTs). From power analyses and complex sampling designs implemented preintervention to nuanced quasi-experimental models used to estimate treatment effects postintervention, RCT design can be quite complicated. Yet when psychological constructs measured using survey scales are the outcome of interest, measurement is often an afterthought, even in RCTs. The purpose of this study is to examine how choices about scoring and calibration of survey item responses affect recovery of true treatment effects. Specifically, simulation and empirical studies are used to compare the performance of sum scores, which are frequently used in RCTs in psychology and education, to that of approaches rooted in item response theory (IRT) that better account for the longitudinal, multigroup nature of the data. The results from this study indicate that selecting an IRT model that matches the nature of the data can significantly reduce bias in treatment effect estimates and reduce standard errors.
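The attenuation the study points to can be seen in a small simulation. This sketch is illustrative only (hypothetical item parameters and sample sizes, not the study's design): a true 0.5 SD treatment effect on a latent trait is measured with a handful of binary items following a two-parameter logistic (2PL) model, and the standardized effect computed from sum scores comes out smaller than the latent effect because measurement error inflates the observed-score variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: true 0.5 SD treatment effect on latent trait theta,
# measured by 5 binary items with assumed 2PL parameters.
n, n_items, true_effect = 5_000, 5, 0.5
theta = np.concatenate([rng.normal(0, 1, n),              # control group
                        rng.normal(true_effect, 1, n)])   # treatment group
group = np.repeat([0, 1], n)

a = rng.uniform(0.5, 1.5, n_items)       # item discriminations (assumed)
b = rng.normal(0.0, 1.0, n_items)        # item difficulties (assumed)

# 2PL response probabilities and simulated binary responses
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.random((2 * n, n_items)) < p).astype(int)
sum_scores = responses.sum(axis=1)

# Standardized effect on the sum score is attenuated relative to the
# true latent effect because item-level noise inflates the score SD.
diff = sum_scores[group == 1].mean() - sum_scores[group == 0].mean()
pooled_sd = np.sqrt(0.5 * (sum_scores[group == 1].var(ddof=1)
                           + sum_scores[group == 0].var(ddof=1)))
d_sum = diff / pooled_sd
print(f"latent effect {true_effect}, sum-score effect {d_sum:.2f}")
```

An IRT-based scoring approach, as the study argues, models the item responses directly and so can recover the effect on the latent metric rather than on the error-contaminated sum-score metric.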


2021 ◽  
Author(s):  
Aja Louise Murray ◽  
Anastasia Ushakova ◽  
Helen Wright ◽  
Tom Booth ◽  
Peter Lynn

Complex sampling designs involving features such as stratification, cluster sampling, and unequal selection probabilities are often used in large-scale longitudinal surveys to improve cost-effectiveness and ensure adequate sampling of small or under-represented groups. However, complex sampling designs create challenges when there is a need to account for non-random attrition, a near inevitability in social science longitudinal studies. In this article we discuss these challenges and demonstrate the application of weighting approaches to simultaneously account for non-random attrition and complex design in a large UK population-representative survey. Using an auto-regressive latent trajectory model with structured residuals (ALT-SR) to model the relations between relationship satisfaction and mental health in the Understanding Society study as an example, we provide guidance on implementing this approach in both R and Mplus. Two standard error estimation approaches are illustrated: pseudo-maximum likelihood robust estimation and bootstrap resampling. A comparison of unadjusted and design-adjusted results also highlights that ignoring the complex survey design when fitting structural equation models can result in misleading conclusions.
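The weighting logic described above, combining a baseline design weight with an attrition adjustment, can be sketched as follows. Everything here is a hypothetical illustration (simulated data, a known retention probability) rather than the article's R/Mplus workflow: dropout depends on a baseline covariate, so the retained cases are unrepresentative until the design weight is multiplied by an inverse-retention-probability weight.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical longitudinal sample: design weights from the baseline wave,
# plus non-random attrition at follow-up driven by a baseline covariate x
# (e.g. baseline distress).
n = 3_000
design_w = rng.uniform(0.5, 2.0, n)             # baseline design weights
x = rng.normal(0, 1, n)                         # baseline covariate
retain_p = 1 / (1 + np.exp(-(0.5 - 0.8 * x)))   # higher x -> more dropout
retained = rng.random(n) < retain_p

# Attrition weight = inverse of the retention probability. Here it is
# known; in practice it would be estimated, e.g. by logistic regression
# of retention on baseline covariates.
attrition_w = 1.0 / retain_p[retained]

# Final analysis weight combines the two multiplicatively.
final_w = design_w[retained] * attrition_w

naive = x[retained].mean()                      # biased by selective dropout
adjusted = np.average(x[retained], weights=final_w)
print(f"naive {naive:.2f}  adjusted {adjusted:.2f}  (full-sample mean ~0)")
```

Because cases with high x drop out more often, the unweighted follow-up mean is pulled downward; the combined weight restores the baseline composition. Variance estimation under such weights would then use robust (pseudo-maximum likelihood) or bootstrap methods, as the article illustrates.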


2020 ◽  
Vol 2020 (1) ◽  
pp. 1-20
Author(s):  
Lili Yao ◽  
Shelby Haberman ◽  
Daniel F. McCaffrey ◽  
J. R. Lockwood
