120 Analytical Strategies for Survey Data in Pet Nutrition and Management

2021 ◽  
Vol 99 (Supplement_3) ◽  
pp. 63-63
Author(s):  
Sandra L Rodriguez-Zas

Abstract: Companion animal researchers have been at the forefront of using survey methodologies to study dogs’ and cats’ dietary and health patterns in the general population. The reporting of survey results has increased in recent years, facilitated by the rise in internet access, the modest cost of conducting web surveys, and the capability to target surveys to pet owners through address lists collected by services and social media. Data from population surveys have the potential to garner unique and comprehensive information that complements the understanding offered by designed experiments. Recent developments in survey methodologies and the availability of user-friendly survey tools enable the collection of large-scale or even Big Data sets, not only in the number of survey responses but also in the number and type of variables measured. Irrespective of the sample size, the study of survey data necessitates consideration of the complex sampling designs and analysis approaches that reflect the nature of these data. An overview of the characteristics of complex sampling designs typical of survey data with applications to companion animal nutrition is presented. The fundamentals of the analytical approaches that are suitable for survey data are demonstrated, and procedures available to accommodate clustering, stratification, underrepresentation, and nonresponse are reviewed. Examples of survey data visualization and analysis strategies are presented.
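The core correction for underrepresentation that the abstract mentions can be illustrated with a design-weighted mean. The sketch below uses entirely made-up pet-owner data and weights; it shows only the general Horvitz-Thompson-style idea of weighting each response by the inverse of its selection probability, not the procedures of the study itself.

```python
import numpy as np

# Hypothetical pet-owner survey: two strata (urban/rural) sampled at
# different rates, so each response carries a design weight equal to
# the inverse of its selection probability.
strata = np.array(["urban"] * 6 + ["rural"] * 4)
daily_kcal = np.array([310, 295, 330, 305, 320, 315, 280, 270, 290, 275])
weights = np.array([1.5] * 6 + [4.0] * 4)  # rural owners undersampled

# Unweighted mean ignores the design and is pulled toward urban owners.
unweighted = daily_kcal.mean()

# Design-weighted mean corrects for unequal selection probabilities.
weighted = np.sum(weights * daily_kcal) / np.sum(weights)
```

With these numbers the unweighted mean (299.0) overstates the weighted estimate (290.9) because the undersampled rural stratum reports lower intakes; the weights restore its share of the population.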

2021 ◽  
Author(s):  
Aja Louise Murray ◽  
Anastasia Ushakova ◽  
Helen Wright ◽  
Tom Booth ◽  
Peter Lynn

Complex sampling designs involving features such as stratification, cluster sampling, and unequal selection probabilities are often used in large-scale longitudinal surveys to improve cost-effectiveness and ensure adequate sampling of small or under-represented groups. However, complex sampling designs create challenges when there is a need to account for non-random attrition, a near inevitability in social science longitudinal studies. In this article we discuss these challenges and demonstrate the application of weighting approaches to simultaneously account for non-random attrition and complex design in a large UK-population representative survey. Using an auto-regressive latent trajectory model with structured residuals (ALT-SR) to model the relations between relationship satisfaction and mental health in the Understanding Society study as an example, we provide guidance on implementing this approach in both R and Mplus. Two standard error estimation approaches are illustrated: pseudo-maximum likelihood robust estimation and bootstrap resampling. A comparison of unadjusted and design-adjusted results also highlights that ignoring the complex survey design when fitting structural equation models can result in misleading conclusions.
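The weighting idea described above, combining a design weight with an inverse probability of remaining in the study, can be sketched in a few lines. All values here are hypothetical, and the response propensities are taken as given (in practice they would come from a model such as a logistic regression on wave-1 covariates); this is an illustration of the general approach, not the authors' implementation.

```python
import numpy as np

# Hypothetical two-wave panel: wave-1 design weights, an outcome, and
# an indicator of who was still in the study at wave 2.
design_w = np.array([1.2, 1.2, 2.5, 2.5, 0.8, 0.8])
satisfaction = np.array([6.0, 5.5, 4.0, 4.5, 7.0, 6.5])
responded_w2 = np.array([1, 1, 0, 1, 1, 0])  # 1 = retained at wave 2

# Assumed pre-estimated probabilities of remaining in the study.
p_response = np.array([0.9, 0.8, 0.5, 0.5, 0.95, 0.7])

# Combined weight: design weight x inverse response propensity,
# applied only to the cases observed at wave 2.
mask = responded_w2 == 1
combined_w = design_w[mask] / p_response[mask]

weighted_mean = np.sum(combined_w * satisfaction[mask]) / np.sum(combined_w)
```

Cases that resembled the dropouts (low retention probability) get their weight inflated, so the retained sample stands in for the full wave-1 sample under the design.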


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Plamen V. Mirazchiyski

Abstract: This paper presents the R Analyzer for Large-Scale Assessments (RALSA), a newly developed package for analyzing data from studies using complex sampling and assessment designs. Such studies are, for example, the IEA’s Trends in International Mathematics and Science Study and the OECD’s Programme for International Student Assessment. The package covers all cycles from a broad range of studies. The paper presents the architecture of the package and the overall workflow, and illustrates some basic analyses. The package is open-source and free of charge. Other software packages for analyzing large-scale assessment data exist, some of them proprietary, others open-source. However, RALSA is the first comprehensive package designed for the user experience, and it has some distinctive features. One innovation is that the package can convert SPSS data from large-scale assessments into native R data sets. It can also do so for PISA data from cycles prior to 2015, where the data are provided in tab-delimited text files along with SPSS control syntax files. Another feature is the availability of a graphical user interface, which is also written in R and operates in any operating system where a full copy of R can be installed. The output from any analysis function is written into an MS Excel workbook with multiple sheets for the estimates, model statistics, analysis information, and the calling syntax itself for reproducing the analysis in the future. The flexible design of RALSA allows for the quick addition of new studies, analysis types, and features.
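A defining feature of the assessment data such packages handle is replicate-weight variance estimation. The sketch below illustrates the general jackknife replicate-weight idea with invented numbers and only three replicates (operational studies ship many more, supplied with the data files); it is a simplified JK1-style calculation, not RALSA's code or the exact TIMSS/PISA formula.

```python
import numpy as np

# Hypothetical student records: one full-sample weight and a score.
full_w = np.array([1.0, 1.5, 2.0, 1.2, 0.8])
score = np.array([480.0, 510.0, 495.0, 530.0, 470.0])

# Each column is one replicate weight vector; real assessments provide
# dozens of these alongside the data.
rep_w = np.array([
    [0.0, 1.0, 1.0],
    [3.0, 1.5, 1.5],
    [2.0, 2.0, 2.0],
    [1.2, 1.2, 0.0],
    [0.8, 0.8, 1.6],
])

def wmean(w, y):
    return np.sum(w * y) / np.sum(w)

theta = wmean(full_w, score)  # full-sample point estimate
reps = np.array([wmean(rep_w[:, r], score) for r in range(rep_w.shape[1])])

# JK1-style variance: scaled squared deviations of the replicate
# estimates from the full-sample estimate.
G = rep_w.shape[1]
var = (G - 1) / G * np.sum((reps - theta) ** 2)
se = np.sqrt(var)
```

Re-estimating the statistic once per replicate weight captures the clustered, stratified design without needing the (confidential) sampling frame, which is why the replicate weights are distributed with the data.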


Metabolites ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 247 ◽  
Author(s):  
Juan Rodríguez-Coira ◽  
María Delgado-Dolset ◽  
David Obeso ◽  
Mariana Dolores-Hernández ◽  
Guillermo Quintás ◽  
...  

Metabolomics, understood as the science of studying the compounds of metabolism, is an essential tool for deciphering metabolic changes in disease. The experiments rely on high-throughput analytical techniques such as liquid chromatography coupled to time-of-flight mass spectrometry (LC-ToF MS). This hyphenation has brought positive aspects such as higher sensitivity, higher specificity, and the extension of the metabolome coverage in a single run. However, analyzing a high number of samples in a single batch is not always feasible due to technical and practical issues (e.g., a drop in the MS signal) that can force the MS acquisition to stop mid-experiment, splitting the samples across more than one batch. In this situation, careful data treatment is required to enable an accurate joint analysis of multi-batch data sets. This paper summarizes the analytical strategies in large-scale metabolomic experiments; special attention has been given to QC preparation, troubleshooting, and data treatment. Moreover, the analysis of labeled internal standards and their role in data treatment, as well as data normalization procedures (intra- and inter-batch), are described. These concepts are exemplified using a cohort of 165 patients from a study in asthma.
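One common inter-batch normalization strategy of the kind the abstract surveys is to scale each feature by the pooled-QC signal of its own batch. The sketch below uses synthetic intensities for a single feature and a simple QC-median correction; it is a generic illustration, not the specific procedure applied to the asthma cohort.

```python
import numpy as np

# Synthetic data: the same feature acquired in two batches, where the
# second batch (after an MS stop) shows a systematic intensity drop.
rng = np.random.default_rng(1)
batch1 = rng.normal(1000.0, 50.0, size=20)
batch2 = rng.normal(700.0, 40.0, size=20)

# Pooled-QC injections interspersed within each batch.
qc1 = rng.normal(1000.0, 30.0, size=5)
qc2 = rng.normal(700.0, 25.0, size=5)

# Divide each batch by the median of its own QC injections, putting
# both batches on a common (QC-relative) scale.
norm1 = batch1 / np.median(qc1)
norm2 = batch2 / np.median(qc2)

# The between-batch gap shrinks dramatically after normalization.
gap_before = abs(np.mean(batch1) - np.mean(batch2))
gap_after = abs(np.mean(norm1) - np.mean(norm2))
```

Because the QCs are aliquots of one pooled sample, any difference in their signal between batches reflects instrumental drift rather than biology, which is what justifies using them as the scaling reference.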


1979 ◽  
Vol 16 (1) ◽  
pp. 39-47 ◽  
Author(s):  
Yoram Wind ◽  
David Lerner

The first large-scale marketing study in which individual consumers’ survey responses could be linked to their panel diary recordings is reported. The results, for the margarine category only, indicate correspondence between the two data sets at the aggregate brand share level but great discrepancies at the individual consumer level. Analysis of this discrepancy calls into question the use of survey reports as an indicator of individual purchase in product positioning, segmentation, advertising media and copy research, and concept/product testing.


2019 ◽  
pp. 65-78
Author(s):  
Jeff Evans

Organisations like the OECD, the IEA (International Association for the Evaluation of Educational Achievement) and the EU are increasingly involved in the production of transnational data. They function as key agencies for changing education and lifelong learning policy, promoting human capital approaches, and ‘governing by data’. I consider their growing role in assessing the efficiency of education and training systems. Particularly important in their organisational strategies are large-scale international performance surveys for school-age pupils, such as the OECD’s PISA and the IEA’s TIMSS. PISA for Development is addressing the problem that, in low-income countries, not all 15-year-olds can be surveyed at school. PIAAC, the Project for the International Assessment of Adult Competencies, has so far reported results in 33 countries, in 2013 and 2016. It focuses on three domains considered basic for adults in industrial and ‘knowledge’ economies, namely literacy, numeracy, and problem solving in technology-rich environments, and on attitudes to and reported use of such skills. It uses electronic administration as a default, complex sampling designs, and statistical modelling to estimate adults’ skill levels. I raise methodological issues relevant to the valid interpretation of such surveys and locate them in general policy developments, including globalisation.
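The "statistical modelling to estimate skill levels" mentioned above means that assessments of this kind report several plausible values per respondent rather than one score, and analysts pool results across them. The sketch below uses made-up numbers to show the standard Rubin's-rules pooling of per-plausible-value estimates; it is a generic illustration of the method, not PIAAC's published figures.

```python
import numpy as np

# Hypothetical: one estimate (e.g. a mean numeracy score) computed
# separately with each of five plausible values, plus each estimate's
# sampling variance (e.g. from replicate weights).
pv_estimates = np.array([271.4, 272.1, 270.8, 271.9, 271.2])
pv_sampling_var = np.array([0.81, 0.79, 0.83, 0.80, 0.82])

M = len(pv_estimates)
point = pv_estimates.mean()            # pooled point estimate
within = pv_sampling_var.mean()        # average sampling variance
between = pv_estimates.var(ddof=1)     # variance across plausible values

# Rubin's rules: total variance adds the between-PV (measurement)
# component, inflated by (1 + 1/M), to the within component.
total_var = within + (1 + 1 / M) * between
se = np.sqrt(total_var)
```

Ignoring the between-PV component (i.e., using any single plausible value as if it were a score) understates the uncertainty of the skill estimates, which is one of the interpretive pitfalls such surveys raise.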


Author(s):  
Lior Shamir

Abstract: Several recent observations using large data sets of galaxies showed non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to have gravitational interaction. Here, a data set of ~8.7×10³ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digitalal Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each data set was annotated using a different annotation method. Both data sets show a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to cosine dependence shows a dipole axis with probabilities of ~2.8σ and ~7.38σ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at (α = 78°, δ = 47°) and is well within the 1σ error range of the location of the most likely dipole axis in the SDSS galaxies with z > 0.15, identified at (α = 71°, δ = 61°).
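The "cosine dependence" fit described above amounts to regressing each galaxy's spin sign on the cosine of its angular distance from a candidate dipole axis. The sketch below does this on a synthetic sky with an injected dipole; the axis location, dipole amplitude, and sample size are all invented for illustration, and this is a simplified version of the idea rather than the paper's pipeline.

```python
import numpy as np

# Synthetic sky: random galaxy positions (radians) on the sphere.
rng = np.random.default_rng(2)
n = 5000
ra = rng.uniform(0.0, 2.0 * np.pi, n)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))

def cos_angle(ra1, dec1, ra2, dec2):
    # Cosine of the angular distance between two sky positions.
    return (np.sin(dec1) * np.sin(dec2)
            + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))

# Inject a weak spin-direction dipole along a chosen (made-up) axis.
axis_ra, axis_dec = np.radians(75.0), np.radians(50.0)
c = cos_angle(ra, dec, axis_ra, axis_dec)
p_cw = 0.5 + 0.1 * c                       # clockwise probability
spin = np.where(rng.uniform(0, 1, n) < p_cw, 1.0, -1.0)

# Least-squares fit spin ~ A*cos(theta) + const at the candidate axis;
# scanning candidate axes and maximizing |A| locates the dipole.
X = np.column_stack([c, np.ones(n)])
(A, const), *_ = np.linalg.lstsq(X, spin, rcond=None)
```

Since E[spin] = 2·p_cw − 1 = 0.2·cos(θ) here, the fitted amplitude A recovers roughly 0.2 while the constant stays near zero; repeating the fit over a grid of axes and comparing |A| to its uncertainty gives the σ-level probabilities quoted in the abstract.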

