Methodology
Published by Hogrefe Publishing Group (ISSN 1614-2241, 1614-1881)
Total documents: 320 (past five years: 59); h-index: 28 (past five years: 3)

Latest Publications

Methodology, 2021, Vol. 17(4), pp. 296-306
Authors: Urbano Lorenzo-Seva, Pere J. Ferrando

Kaiser’s single-variable measure of sampling adequacy (MSA) is a very useful index for debugging inappropriate items before a factor analysis (FA) solution is fitted to an item-pool dataset for item selection purposes. For reasons discussed in the article, however, MSA is hardly used in this context nowadays. In our view, this is unfortunate. In this article, we first discuss the foundation and rationale of MSA from a ‘modern’ FA view, as well as its usefulness in the item selection process. Second, we embed the index within a robust approach and propose improvements to the preliminary item selection process. Third, we implement the proposal in different statistical programs. Finally, we illustrate its use and advantages with an empirical example in personality measurement.
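Kaiser's MSA has a simple closed form in terms of the correlations and the anti-image (partial) correlations of each variable. The sketch below is a minimal NumPy illustration of that formula, not the robust implementation proposed in the article; the function name `msa` is an invented convenience.

```python
import numpy as np

def msa(R):
    """Kaiser's measure of sampling adequacy for each variable.

    R : (p, p) correlation matrix.
    Returns a length-p vector; values near 1 suggest the variable shares
    enough common variance with the others to enter a factor analysis.
    """
    R = np.asarray(R, dtype=float)
    S = np.linalg.inv(R)
    # anti-image (partial) correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    Q = -S / d
    np.fill_diagonal(Q, 0.0)
    R2 = R.copy()
    np.fill_diagonal(R2, 0.0)
    num = (R2 ** 2).sum(axis=0)          # sum of squared correlations
    return num / (num + (Q ** 2).sum(axis=0))
```

A known sanity check: with only two variables, the partial correlation equals the correlation, so MSA is exactly 0.5 regardless of the correlation's size.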


Methodology, 2021, Vol. 17(4), pp. 271-295
Authors: Fabio Mason, Eva Cantoni, Paolo Ghisletta

The linear mixed model (LMM) is a popular statistical model for the analysis of longitudinal data. However, robust estimation of, and inference for, the LMM in the presence of outliers (i.e., observations with very low probability of occurrence under Normality) is not part of mainstream longitudinal data analysis. In this work, we compared the coverage rates of confidence intervals (CIs) based on two bootstrap methods, applied to three robust estimation methods. We carried out a simulation experiment to compare CIs under three conditions: data 1) without contamination, 2) contaminated by within-participant outliers, or 3) contaminated by between-participant outliers. Results showed that the semi-parametric bootstrap associated with the composite tau-estimator leads to valid inferential decisions with both uncontaminated and contaminated data. As this is the most comprehensive study of CIs applied to robust estimators of the LMM, we provide fully commented R code for all methods, applied to a popular example.
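The semi-parametric bootstrap resamples estimated random effects and residuals from their empirical distributions instead of drawing them from a Normal. The sketch below shows that idea for a balanced random-intercept model with plain, non-robust estimators; it is not the composite tau-estimator or the authors' R code, and the function name is invented.

```python
import numpy as np

def semiparametric_boot_ci(y, B=1000, alpha=0.05, seed=11):
    """Percentile CI for the fixed intercept of y_ij = mu + b_i + e_ij.

    y : (J, n) balanced matrix, J clusters with n observations each.
    Cluster effects and residuals are resampled from their empirical
    estimates (semi-parametric bootstrap), then the model is re-estimated.
    """
    rng = np.random.default_rng(seed)
    J, n = y.shape
    mu = y.mean()
    b_hat = y.mean(axis=1) - mu                        # raw cluster effects
    e_hat = y - y.mean(axis=1, keepdims=True)          # within-cluster residuals
    boot = []
    for _ in range(B):
        b_star = rng.choice(b_hat, size=J, replace=True)
        e_star = rng.choice(e_hat.ravel(), size=(J, n), replace=True)
        y_star = mu + b_star[:, None] + e_star
        boot.append(y_star.mean())                     # re-estimate intercept
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return mu, lo, hi
```

Replacing the empirical resampling with Normal draws would give the parametric bootstrap that the paper compares against.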


Methodology, 2021, Vol. 17(4), pp. 250-270
Authors: Peter Boedeker

Modeling growth across repeated measures of individuals and evaluating predictors of growth can reveal developmental patterns and factors that affect those patterns. When growth follows a sigmoidal shape, the Logistic, Gompertz, and Richards nonlinear growth curves are plausible. These functions have parameters that specifically control the starting point, total growth, overall rate of change, and point of greatest growth. Variability in growth parameters across individuals can be explained by covariates in a mixed model framework. The purpose of this tutorial is to provide analysts with a brief introduction to these growth curves and demonstrate their application. The 'saemix' package in R is used to fit models to simulated data to answer specific research questions. Enough code is provided in-text to describe how to execute the analyses; the complete code and data are provided in the Supplementary Materials.
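The three curves have simple closed forms. As a hedged illustration, the sketch below defines them in Python and fits a single noisy series with `scipy.optimize.curve_fit`; this is a single-subject toy fit, not the mixed-model analysis with 'saemix' in the tutorial, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asym, xmid, scal):
    """Logistic: asym = upper asymptote, xmid = inflection, scal = rate scale."""
    return asym / (1 + np.exp((xmid - t) / scal))

def gompertz(t, asym, b, c):
    """Gompertz: asymmetric sigmoid; inflection occurs at y = asym / e."""
    return asym * np.exp(-b * np.exp(-c * t))

def richards(t, asym, xmid, scal, nu):
    """Richards (one common parameterization): nu frees the inflection point."""
    return asym / (1 + nu * np.exp((xmid - t) / scal)) ** (1 / nu)

# recover known parameters from noisy simulated data
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y = logistic(t, 100, 5, 1.2) + rng.normal(0, 2, t.size)
est, _ = curve_fit(logistic, t, y, p0=[90, 4, 1])
```

In the mixed-model setting of the tutorial, each of `asym`, `xmid`, and `scal` would get person-level random effects and covariates rather than a single shared value.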


Methodology, 2021, Vol. 17(4), pp. 307-325
Authors: Caroline Keck, Axel Mayer, Yves Rosseel

Using the EffectLiteR framework, researchers can test classical null hypotheses about effects of interest via Wald and F-tests, while taking into account the stochastic nature of group sizes. This paper aims to extend EffectLiteR to test informative hypotheses, assuming, for example, that the average effect of a new treatment is greater than the average effect of an old treatment, which in turn is greater than zero. We present a simulated data example to show two methodological novelties. First, we illustrate how to use the Fbar test and the generalized linear Wald test to assess informative hypotheses. While the classical test did not reach significance, the informative test correctly rejected the null hypothesis, indicating the need to take the order of the treatment groups into account. Second, we demonstrate how to account for stochastic group sizes in informative hypotheses using the generalized non-linear Wald statistic. The paper concludes with a short data example.
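The power gain from informative hypotheses is easiest to see in the simplest special case: a directional constraint on a single effect halves the p-value of the corresponding two-sided test. The toy computation below illustrates only this one-parameter case, not the Fbar or generalized Wald machinery of the paper; the z value is invented.

```python
from statistics import NormalDist

z = 1.80  # hypothetical standardized effect estimate

# classical two-sided test of H0: effect = 0
p_two = 2 * (1 - NormalDist().cdf(abs(z)))
# informative (directional) test of H0 against H1: effect > 0
p_one = 1 - NormalDist().cdf(z)
```

Here `p_two` is about 0.072 (not significant at 5%) while `p_one` is about 0.036, mirroring the pattern in the simulated example: the classical test misses an effect that the order-constrained test detects.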


Methodology, 2021, Vol. 17(3), pp. 231-249
Authors: Anaïd Lindemann, Jörg Stolz

The Titanic quantitative dataset has long been used to teach statistics. However, combining the quantitative dataset with a qualitative dataset of survivor testimonies shows that the Titanic case is an even better example for teaching mixed methods. This article offers practical tools for teaching mixed methods to undergraduate or postgraduate students in the social sciences, using the Titanic datasets. Based on an empirical analysis of the survival probabilities on the Titanic, we show how mixed methods lead to better explanations than mono-method strategies. This paper has two goals: 1) to introduce the freely available linked Titanic datasets; and 2) to present a three-hour step-by-step exercise with the Titanic datasets that can be used to learn and teach mixed methods.
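The quantitative half of such an exercise boils down to conditional survival probabilities. The sketch below computes them with the standard library over a handful of invented records; the six rows are purely illustrative stand-ins, not the real linked datasets, which must be obtained separately.

```python
from collections import defaultdict

# hypothetical toy records: (passenger class, sex, survived)
records = [
    ("1st", "female", 1), ("1st", "male", 1), ("1st", "male", 0),
    ("3rd", "female", 1), ("3rd", "male", 0), ("3rd", "male", 0),
]

counts = defaultdict(lambda: [0, 0])        # class -> [survived, total]
for pclass, sex, survived in records:
    counts[pclass][0] += survived
    counts[pclass][1] += 1

rates = {k: s / n for k, (s, n) in counts.items()}
```

Grouping additionally by `sex` (or crossing both factors) gives the kind of survival-probability table whose patterns the qualitative testimonies then help explain.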


Methodology, 2021, Vol. 17(3), pp. 189-204
Authors: Cailey E. Fitzgerald, Ryne Estabrook, Daniel P. Martin, Andreas M. Brandmaier, Timo von Oertzen

Missing data are ubiquitous in psychological research. They may come about as an unwanted result of coding or computer error, participants' non-response or absence, or missing values may be intentional, as in planned missing designs. We discuss the effects of missing data on χ²-based goodness-of-fit indices in Structural Equation Modeling (SEM), specifically on the Root Mean Squared Error of Approximation (RMSEA). We use simulations to show that naive implementations of the RMSEA have a downward bias in the presence of missing data and, thus, overestimate model goodness-of-fit. Unfortunately, many state-of-the-art software packages report the biased form of the RMSEA. As a consequence, the scientific community may have been accepting a much larger fraction of models with non-acceptable model fit. We propose a bias-correction for the RMSEA based on information-theoretic considerations that take into account the expected misfit of a person with fully observed data. The corrected RMSEA is asymptotically independent of the proportion of missing data for misspecified models. Importantly, results of the corrected RMSEA computation are identical to the naive RMSEA if there are no missing data.
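For complete data, the naive point estimate in question is computed directly from the χ² statistic, its degrees of freedom, and the sample size. The sketch below shows that standard complete-data formula; it is precisely the quantity the article shows to be downward-biased once data are missing, and the proposed correction is in the paper itself, not here.

```python
import math

def rmsea(chi2, df, n):
    """Naive (complete-data) RMSEA point estimate.

    Truncated at zero: a chi-square below its degrees of freedom
    indicates no detectable misfit.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

For example, `rmsea(123.4, 50, 300)` is about 0.070, just below the conventional 0.08 cutoff; the paper's point is that with missing data this same computation can understate misfit and let poor models pass.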


Methodology, 2021, Vol. 17(3), pp. 205-230
Authors: Kristian Kleinke, Markus Fritsch, Mark Stemmler, Jost Reinecke, Friedrich Lösel

Quantile regression (QR) is a valuable tool for data analysis and multiple imputation (MI) of missing values – especially when standard parametric modelling assumptions are violated. Yet, Monte Carlo simulations that systematically evaluate QR-based MI in a variety of different practically relevant settings are still scarce. In this paper, we evaluate the method regarding the imputation of ordinal data and compare the results with other standard and robust imputation methods. We then apply QR-based MI to an empirical dataset, where we seek to identify risk factors for corporal punishment of children by their fathers. We compare the modelling results with previously published findings based on complete cases. Our Monte Carlo results highlight the advantages of QR-based MI over fully parametric imputation models: QR-based MI yields unbiased statistical inferences across large parts of the conditional distribution, when parametric modelling assumptions, such as normal and homoscedastic error terms, are violated. Regarding risk factors for corporal punishment, our MI results support previously published findings based on complete cases. Our empirical results indicate that the identified “missing at random” processes in the investigated dataset are negligible.
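At its core, QR estimates the coefficients that minimize the pinball (check) loss at a chosen quantile q; QR-based MI then draws each imputation from the fitted conditional quantile at a randomly drawn q. The sketch below is a minimal median-regression fit via direct loss minimization on invented toy data, not the imputation routine evaluated in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(beta, X, y, q):
    """Mean check (pinball) loss at quantile q."""
    r = y - X @ beta
    return np.mean(np.maximum(q * r, (q - 1) * r))

# invented toy data with true intercept 1.0 and slope 2.0
rng = np.random.default_rng(7)
n = 400
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])

# median regression (q = 0.5); other q values trace out other
# conditional quantiles, which is what makes QR robust to
# non-normal and heteroscedastic errors
fit = minimize(pinball, x0=np.zeros(2), args=(X, y, 0.5), method="Nelder-Mead")
```

For imputation, one would fit this model on the observed cases and, for each missing value, predict at a q drawn uniformly from (0, 1), so imputations reflect the whole conditional distribution rather than only its mean.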


Methodology, 2021, Vol. 17(3), pp. 168-188
Authors: Karl Schweizer, Dorothea Krampen, Brian F. French

Rapid guessing is a test-taking strategy recommended for increasing the probability of achieving a high score when a time limit prevents an examinee from responding to all items of a scale. The strategy requires responding quickly and without cognitively processing item details. Although there may be no omitted responses after participants' rapid guessing, an open question remains: do the data show the unidimensionality expected for data collected by a scale, or the bi-dimensionality that characterizes data collected under a time limit in testing (speeded data)? To answer this question, we simulated speeded and rapid-guessing data and performed confirmatory factor analysis using one-factor and two-factor models. The results revealed that speededness was detectable despite the presence of rapid guessing. However, detection may depend on the number of response options for a given set of items.
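The bi-dimensionality question can be probed with a quick simulation: if slow examinees fail to reach later items and rapid-guess at the chance level 1/k, a second dimension (working speed) surfaces in the inter-item correlation matrix. The sketch below is an invented toy design (20 items, half speeded, Normal ability and speed traits), not the paper's simulation, and it inspects eigenvalues rather than fitting a CFA model.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_items, k = 4000, 20, 4               # examinees, items, response options
theta = rng.normal(size=n)                # ability
speed = rng.normal(size=n)                # working speed (second trait)
p_ability = 1 / (1 + np.exp(-theta))      # 1PL-type success probability

def simulate(speeded):
    data = np.empty((n, n_items))
    for j in range(n_items):
        if speeded and j >= n_items // 2:
            # later items are reached only by increasingly fast examinees;
            # everyone else rapid-guesses at chance level 1/k
            reached = speed > (j - 15) / 5
            p = np.where(reached, p_ability, 1 / k)
        else:
            p = p_ability
        data[:, j] = rng.uniform(size=n) < p
    return data

def second_eigenvalue(data):
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return np.sort(eig)[-2]

e2_clean = second_eigenvalue(simulate(False))
e2_speeded = second_eigenvalue(simulate(True))
```

A clearly larger second eigenvalue under speeding is the exploratory counterpart of the two-factor CFA model fitting better than the one-factor model.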


Methodology, 2021, Vol. 17(2), pp. 92-110
Authors: Nianbo Dong, Jessaca Spybrook, Benjamin Kelcey, Metin Bulus

Researchers often apply moderation analyses to examine whether the effects of an intervention differ conditional on individual- or cluster-level moderator variables such as gender, pretest, or school size. This study develops formulas for power analyses to detect moderator effects in two-level cluster randomized trials (CRTs) using hierarchical linear models. We derive the formulas for estimating statistical power, the minimum detectable effect size difference, and 95% confidence intervals for cluster- and individual-level moderators. Our framework accommodates binary or continuous moderators, designs with or without covariates, and effects of individual-level moderators that vary randomly or nonrandomly across clusters. A small Monte Carlo simulation confirms the accuracy of our formulas. We also compare power between main effect analysis and moderation analysis, discuss the effects of mis-specification of the moderator slope (randomly vs. non-randomly varying), and conclude with directions for future research. We provide software for conducting a power analysis of moderator effects in CRTs.
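The logic behind such power formulas can be checked with a small Monte Carlo of the same kind the authors use for validation. The sketch below handles one case only, a binary cluster-level moderator in a balanced two-level CRT, analyzed on cluster means (equivalent to the HLM here because all predictors are cluster-level); the function name and all parameter values are invented, and this is not the authors' software.

```python
import numpy as np
from scipy import stats

def power_cluster_moderator(J=40, n=20, gamma3=0.4, tau2=0.1, sigma2=0.9,
                            alpha=0.05, reps=2000, seed=3):
    """Monte Carlo power for the treatment-by-moderator interaction gamma3
    in a balanced two-level CRT with J clusters of size n, intercept
    variance tau2, and residual variance sigma2."""
    rng = np.random.default_rng(seed)
    T = np.repeat([0, 1], J // 2)          # cluster-level treatment
    W = np.tile([0, 1], J // 2)            # binary cluster-level moderator
    X = np.column_stack([np.ones(J), T, W, T * W])
    XtXi = np.linalg.inv(X.T @ X)
    crit = stats.t.ppf(1 - alpha / 2, J - 4)
    hits = 0
    for _ in range(reps):
        u = rng.normal(0, np.sqrt(tau2), J)              # cluster effects
        ebar = rng.normal(0, np.sqrt(sigma2 / n), J)     # mean residuals
        ybar = gamma3 * T * W + u + ebar   # other fixed effects set to zero
        beta = XtXi @ X.T @ ybar
        resid = ybar - X @ beta
        s2 = resid @ resid / (J - 4)
        se = np.sqrt(s2 * XtXi[3, 3])
        hits += abs(beta[3] / se) > crit
    return hits / reps
```

With `gamma3 = 0` the rejection rate should sit near the nominal alpha, which is the same check the paper's simulation performs on its closed-form results.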


Methodology, 2021, Vol. 17(2), pp. 149-167
Authors: Mark Stemmler, Jörg-Henrik Heine, Susanne Wallner

Configural Frequency Analysis (CFA) is a useful statistical method for the analysis of multiway contingency tables and an appropriate tool for person-oriented or person-centered methods. In complex contingency tables, patterns or configurations are analyzed by comparing observed cell frequencies with expected frequencies. Significant differences between observed and expected frequencies lead to the emergence of Types and Antitypes: Types are configurations that are observed significantly more often than expected; Antitypes are configurations that are observed less often than expected. The R package confreq is easy-to-use software for conducting CFAs; another useful standalone program for running CFAs was developed by Alexander von Eye. Here, CFA is presented based on the log-linear modeling approach. CFA may be used together with interval-level variables, which can be added as covariates to the design matrix. In this article, a real data example and the use of confreq are presented. In sum, the use of a covariate may bring the estimated cell frequencies closer to the observed cell frequencies. In those cases, the number of Types or Antitypes may decrease. However, in rare cases, the Type-Antitype pattern can change, with new Types or Antitypes emerging.
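The core computation behind a first-order CFA is compact: expected cell frequencies from the product of the margins (the independence base model), a z-test per cell, and alpha protection across cells. The function below is a hypothetical minimal version for illustration; confreq and von Eye's program implement far more, including covariates and other base models.

```python
import numpy as np
from scipy import stats

def cfa_first_order(table, alpha=0.05):
    """First-order CFA: expected frequencies under total independence of
    the margins, z-test per cell, Bonferroni-protected.

    table : k-dimensional array of observed cell frequencies.
    Returns (expected, z, types, antitypes)."""
    obs = np.asarray(table, dtype=float)
    N = obs.sum()
    expected = np.ones_like(obs) * N
    for axis, size in enumerate(obs.shape):
        # marginal proportions for this variable, broadcast into the table
        marg = obs.sum(axis=tuple(i for i in range(obs.ndim) if i != axis)) / N
        shape = [1] * obs.ndim
        shape[axis] = size
        expected = expected * marg.reshape(shape)
    z = (obs - expected) / np.sqrt(expected)
    crit = stats.norm.ppf(1 - alpha / (2 * obs.size))   # Bonferroni cutoff
    return expected, z, z > crit, z < -crit
```

For a 2x2 table with a strong diagonal, such as observed counts [[50, 5], [5, 40]], the diagonal cells emerge as Types and the off-diagonal cells as Antitypes, matching the Type/Antitype logic described above.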

