A Guide to Visualizing Trajectories of Change With Confidence Bands and Raw Data

2021 ◽  
Vol 4 (4) ◽  
pp. 251524592110472
Author(s):  
Andrea L. Howard

This tutorial is aimed at researchers working with repeated measures or longitudinal data who are interested in enhancing their visualizations of model-implied mean-level trajectories plotted over time with confidence bands and raw data. The intended audience is researchers who are already modeling their experimental, observational, or other repeated measures data over time using random-effects regression or latent curve modeling but who lack a comprehensive guide to visualize trajectories over time. This tutorial uses an example plotting trajectories from two groups, as seen in random-effects models that include Time × Group interactions and latent curve models that regress the latent time slope factor onto a grouping variable. This tutorial is also geared toward researchers who are satisfied with their current software environment for modeling repeated measures data but who want to make graphics using R software. Prior knowledge of R is not assumed, and readers can follow along using data and other supporting materials available via OSF at https://osf.io/78bk5/. Readers should come away from this tutorial with the tools needed to begin visualizing mean trajectories over time from their own models and enhancing those plots with graphical estimates of uncertainty and raw data that adhere to transparent practices in research reporting.
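The tutorial's own materials use R; as a rough analogue of the idea it teaches, here is a minimal Python sketch that plots model-implied trajectories for a hypothetical Time × Group model with 95% confidence bands and raw individual trajectories overlaid. All coefficients, standard errors, and the raw-data noise below are invented for illustration and are not taken from the tutorial.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
time = np.arange(5)

# Hypothetical fixed effects from a Time x Group model:
# intercept, time slope, group offset, time-by-group interaction.
b0, b1, b2, b3 = 10.0, 1.5, 2.0, -0.8

fig, ax = plt.subplots()
for g, color in [(0, "C0"), (1, "C1")]:
    mean = b0 + b1 * time + b2 * g + b3 * g * time  # model-implied trajectory
    se = 0.4 + 0.1 * time                           # illustrative standard errors
    lo, hi = mean - 1.96 * se, mean + 1.96 * se     # 95% confidence band
    ax.fill_between(time, lo, hi, color=color, alpha=0.2)
    ax.plot(time, mean, color=color, label=f"Group {g}")
    # Overlay simulated raw individual trajectories ("spaghetti")
    for _ in range(15):
        ax.plot(time, mean + rng.normal(0, 1.5, size=time.size),
                color=color, alpha=0.15, linewidth=0.7)
ax.set(xlabel="Time", ylabel="Outcome")
ax.legend()
fig.savefig("trajectories.png")
```

The same three layers (band, mean line, faint raw trajectories) are what the tutorial builds in ggplot2.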



2007 ◽  
Vol 16 (5) ◽  
pp. 387-397 ◽  
Author(s):  
S. Fieuws ◽  
Geert Verbeke ◽  
G. Molenberghs

2020 ◽  
Vol 54 (1) ◽  
pp. 1-25
Author(s):  
Elmabrok Masaoud ◽  
Henrik Stryhn

The objective of the study was to compare statistical methods for the analysis of binary repeated measures data with an additional hierarchical level. The true models were random effects models with autocorrelated (ρ = 1, 0.9, or 0.5) subject random effects. The settings of the simulation were chosen to reflect a real veterinary somatic cell count dataset, except that the within-subject time series were balanced, complete, and of fixed length (4 or 8 time points). Four fixed effects parameters were studied: binary predictors at the subject and cluster levels, respectively, a linear time effect, and the intercept. The following marginal and random effects statistical procedures were considered: ordinary logistic regression (OLR), alternating logistic regression (ALR), generalized estimating equations (GEE), marginal quasi-likelihood (MQL), penalized quasi-likelihood (PQL), pseudo-likelihood (REPL), maximum likelihood (ML) estimation, and Bayesian Markov chain Monte Carlo (MCMC). The performance of these estimation procedures was compared specifically for the four fixed parameters as well as the variance and correlation parameters. The findings of this study indicate that in data generated by random intercept models (ρ = 1), the ML and MCMC procedures performed well and had fairly similar estimation errors. The PQL regression estimates were attenuated and the variance estimates were less accurate than those of ML and MCMC, with the direction of the bias depending on whether binomial or extra-binomial dispersion was assumed. In datasets with autocorrelation (ρ < 1), random effects estimation procedures gave downward-biased estimates, while marginal estimates were little affected by the presence of autocorrelation. The results also indicate that, in addition to ALR, a GEE procedure that accounts for clustering at the highest hierarchical level is sufficient.
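The data-generating model described above can be sketched in Python: subject-level random effects follow an AR(1) process over time (ρ = 1 reduces to a random intercept), and binary outcomes are drawn through a logistic link. The sample sizes and coefficients below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_binary_rm(n_clusters=50, subj_per_cluster=4, n_times=4,
                       rho=0.9, sd_u=1.0, beta=(-0.5, 0.4, 0.6, 0.2)):
    """Simulate binary repeated measures with cluster and subject levels.

    Subject random effects follow an AR(1) process over time with
    autocorrelation rho (rho = 1 gives a pure random intercept).
    """
    b0, b_subj, b_clus, b_time = beta
    rows = []
    for c in range(n_clusters):
        x_clus = rng.integers(0, 2)          # cluster-level binary predictor
        for s in range(subj_per_cluster):
            x_subj = rng.integers(0, 2)      # subject-level binary predictor
            # AR(1) random effects with stationary variance sd_u**2
            u = np.empty(n_times)
            u[0] = rng.normal(0, sd_u)
            for t in range(1, n_times):
                u[t] = rho * u[t - 1] + rng.normal(0, sd_u * np.sqrt(1 - rho**2))
            for t in range(n_times):
                eta = b0 + b_subj * x_subj + b_clus * x_clus + b_time * t + u[t]
                p = 1.0 / (1.0 + np.exp(-eta))   # logistic link
                rows.append((c, s, t, x_subj, x_clus, rng.random() < p))
    return np.array(rows, dtype=float)

data = simulate_binary_rm()  # columns: cluster, subject, time, x_subj, x_clus, y
print(data.shape, data[:, 5].mean())
```

Datasets generated this way could then be passed to any of the compared procedures (e.g., a GEE fit with an exchangeable working correlation) to reproduce the style of comparison the study reports.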


2014 ◽  
Vol 30 (4) ◽  
pp. 521-528 ◽  
Author(s):  
Trampas M. TenBroek ◽  
Pedro A. Rodrigues ◽  
Edward C. Frederick ◽  
Joseph Hamill

The purpose of this study was to (1) investigate how kinematic patterns are adjusted while running in footwear with THIN, MEDIUM, and THICK midsole thicknesses and (2) determine whether these patterns are adjusted over time during a sustained run in footwear of different thicknesses. Ten male heel-toe runners performed treadmill runs in specially constructed footwear (THIN, MEDIUM, and THICK midsoles) on separate days. Standard lower extremity kinematics and acceleration at the tibia and head were captured. Time epochs were created using data from every 5 minutes of the run. Repeated-measures ANOVA (P < .05) was used to determine differences across footwear and time. At touchdown, kinematics were similar for the THIN and MEDIUM conditions distal to the knee, whereas only the THIN condition was isolated above the knee. No runners displayed midfoot or forefoot strike patterns in any condition. Peak accelerations were slightly increased with THIN and MEDIUM footwear, as were eversion and tibial and thigh internal rotation. It appears that participants may have been anticipating, very early in their run, a suitable kinematic pattern based on both the length of the run and the footwear condition.


Methodology ◽  
2011 ◽  
Vol 7 (4) ◽  
pp. 157-164
Author(s):  
Karl Schweizer

Probability-based and measurement-related hypotheses for confirmatory factor analysis of repeated-measures data are investigated. Such hypotheses comprise precise assumptions concerning the relationships among the true components associated with the levels of the design or the items of the measure. Measurement-related hypotheses concentrate on the assumed processes, as, for example, transformation and memory processes, and represent treatment-dependent differences in processing. In contrast, probability-based hypotheses provide the opportunity to consider probabilities as outcome predictions that summarize the effects of various influences. The prediction of performance guided by inexact cues serves as an example. In the empirical part of this paper probability-based and measurement-related hypotheses are applied to working-memory data. Latent variables according to both hypotheses contribute to a good model fit. The best model fit is achieved for the model including latent variables that represented serial cognitive processing and performance according to inexact cues in combination with a latent variable for subsidiary processes.


Methodology ◽  
2012 ◽  
Vol 8 (1) ◽  
pp. 23-38 ◽  
Author(s):  
Manuel C. Voelkle ◽  
Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Oftentimes, researchers regard the LCM as a “new” method to analyze change, with little attention paid to the fact that the technique was originally introduced as an “alternative to standard repeated measures ANOVA and first-order auto-regressive methods” (Meredith & Tisak, 1990, p. 107). In the first part of the paper, this close relationship is reviewed, and it is demonstrated how “traditional” methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique, compared to “traditional” finite-sample approaches, the second part of the paper addresses the question of to what degree the more flexible LCMs can actually replace some of the older tests, by means of a Monte Carlo simulation. In addition, a structural equation modeling alternative to Mauchly’s (1940) test of sphericity is explored. Although “traditional” methods may be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, however, no approach always outperformed the other alternatives in terms of power and Type I error, so the best method to be used depends on the situation. We provide detailed recommendations of when to use which method.
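As a concrete reminder of the “traditional” side of this comparison, here is a minimal numpy sketch of the one-way repeated measures ANOVA F statistic that LCMs generalize, computed from sums of squares. The data are simulated with random subject intercepts purely for illustration.

```python
import numpy as np

def rm_anova_F(Y):
    """One-way repeated measures ANOVA F for an n-subjects x k-conditions matrix."""
    n, k = Y.shape
    grand = Y.mean()
    ss_cond = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between conditions
    ss_subj = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_total = ((Y - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj                 # condition-by-subject residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

rng = np.random.default_rng(0)
n, k = 20, 4
subj = rng.normal(0, 1, size=(n, 1))          # random subject intercepts
effect = np.array([0.0, 0.3, 0.6, 0.9])       # condition (time) means
Y = subj + effect + rng.normal(0, 1, size=(n, k))
F, df1, df2 = rm_anova_F(Y)
print(F, df1, df2)
```

An LCM fit to the same n × k data matrix recovers this test only asymptotically, which is the equivalence result the simulation study examines.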


Methodology ◽  
2006 ◽  
Vol 2 (1) ◽  
pp. 24-33 ◽  
Author(s):  
Susan Shortreed ◽  
Mark S. Handcock ◽  
Peter Hoff

Recent advances in latent space and related random effects models hold much promise for representing network data. The inherent dependency between ties in a network makes modeling data of this type difficult. In this article we consider a recently developed latent space model that is particularly appropriate for the visualization of networks. We suggest a new estimator of the latent positions and perform two network analyses, comparing four alternative estimators. We demonstrate a method of checking the validity of the positional estimates. These estimators are implemented via a package in the freeware statistical language R. The package allows researchers to efficiently fit the latent space model to data and to visualize the results.
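The estimators the article compares are implemented in the authors' R package; as a loose, self-contained illustration of the general idea (recovering node positions from network structure for visualization), here is a numpy sketch that embeds nodes via classical multidimensional scaling of shortest-path distances. This is a crude stand-in, not one of the latent space estimators the article evaluates.

```python
import numpy as np

def shortest_paths(A):
    """All-pairs shortest path lengths via Floyd-Warshall on adjacency matrix A."""
    n = A.shape[0]
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def classical_mds(D, dim=2):
    """Embed a distance matrix into dim-dimensional positions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]     # keep the top eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy undirected network: two triangles joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1
pos = classical_mds(shortest_paths(A))
print(pos.shape)
```

In the embedding, nodes within the same triangle land closer together than nodes across the bridge, which is the visual separation a latent space plot of this network would show.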

