Behaviormetrika
Latest Publications


TOTAL DOCUMENTS

680
(FIVE YEARS 79)

H-INDEX

20
(FIVE YEARS 4)

Published By Springer-Verlag

1349-6964, 0385-7417

2021
Author(s):
Enrico Ciavolino, Lucrezia Ferrante, Giovanna Alessia Sternativo, Jun-Hwa Cheah, Simone Rollo, ...

Abstract: This study examined the factor structure and model specifications of the Interaction Anxiousness Scale (IAS) with confirmatory composite analysis (CCA) using partial least squares structural equation modeling (PLS-SEM) with a sample of Italian adolescents ($$n = 764$$). The CCA and PLS-SEM results identified the reflective nature of the IAS sub-scale scores, supporting an alternative measurement model of the IAS scores as a second-order reflective–reflective model.


2021
Author(s):
David Goretzko, Markus Bühner

Abstract: Replicability has become a highly discussed topic in psychological research. The debates focus mainly on significance testing and confirmatory analyses, whereas exploratory analyses such as exploratory factor analysis are more or less ignored, although hardly any analysis has a comparable impact on entire research areas. Determining the correct number of factors is probably the most crucial, yet ambiguous, decision in this analysis—especially since factor structures have often not been replicable. Hence, an approach based on bootstrapping the factor retention process is proposed to evaluate the robustness of factor retention criteria against sampling error and to predict whether a particular factor solution may be replicable. We used three samples of the "Big Five Structure Inventory" and four samples of the "10 Item Big Five Inventory" to illustrate the relationship between stable factor solutions across bootstrap samples and their replicability. In addition, we compared four factor retention criteria and an information criterion in terms of their stability on the one hand and their replicability on the other. Based on this study, we want to encourage researchers to use bootstrapping to assess the stability of the factor retention criteria they apply, and to compare these criteria with regard to this stability as a proxy for possible replicability.
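The bootstrapped factor-retention idea described above can be sketched in a few lines. The snippet below is an illustrative implementation, not the authors' code: it pairs Horn's parallel analysis (one common retention criterion) with nonparametric bootstrap resampling and reports how often each number of factors is retained. Function names and defaults are assumptions for the example.

```python
import numpy as np

def parallel_analysis_n_factors(data, n_sim=100, seed=0):
    """Horn's parallel analysis: retain factors whose sample correlation-matrix
    eigenvalues exceed the mean eigenvalues of random normal data of the
    same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eig = np.zeros(p)
    for _ in range(n_sim):
        sim = rng.standard_normal((n, p))
        sim_eig += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    sim_eig /= n_sim
    return int(np.sum(obs_eig > sim_eig))

def bootstrap_retention(data, n_boot=200, seed=0):
    """Re-run the retention criterion on bootstrap resamples of the rows and
    return the distribution of retained factor counts. A distribution
    concentrated on one value suggests a stable factor solution."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    counts = {}
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        k = parallel_analysis_n_factors(data[idx])
        counts[k] = counts.get(k, 0) + 1
    return counts
```

Any other retention criterion (empirical Kaiser criterion, an information criterion, etc.) can be dropped in place of `parallel_analysis_n_factors` to compare criteria on their bootstrap stability.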


2021
Author(s):
Wim J. van der Linden

Abstract: Constrained adaptive testing is reviewed as an instance of discrete maximization, with the shadow-test approach delivering its solution. The approach may look counterintuitive in that it assumes sequential assembly of full test forms as its basic operation. But it always produces real-time solutions that are optimal and satisfy the set of specifications in effect for the test. Equally importantly, it can be used to run testing programs with different degrees of adaptation for the same set of specifications and/or as a tool to manage programs with simultaneous processes such as adaptive item calibration, time management, and item-security monitoring.
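The "sequential assembly of full test forms" described in the abstract can be illustrated with a toy item-selection loop. In practice the shadow test is assembled by a mixed-integer programming solver; the greedy heuristic, 2PL information function, pool layout, and content constraints below are all simplifying assumptions made for this sketch.

```python
import math

def item_info(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble_shadow_test(pool, administered, theta, length, content_min):
    """Greedy stand-in for the MIP solve: build a full-length form that
    (i) contains all items already administered, (ii) meets per-content
    minimums, and (iii) is as informative as possible at the current
    theta estimate."""
    shadow = list(administered)
    need = dict(content_min)
    for i in shadow:  # credit already-administered items against constraints
        c = pool[i]["content"]
        if c in need:
            need[c] = max(0, need[c] - 1)
    free = sorted((i for i in pool if i not in shadow),
                  key=lambda i: item_info(pool[i]["a"], pool[i]["b"], theta),
                  reverse=True)
    for c, k in need.items():  # satisfy outstanding content minimums first
        picked = [i for i in free if pool[i]["content"] == c][:k]
        shadow += picked
        free = [i for i in free if i not in picked]
    shadow += free[: max(0, length - len(shadow))]  # fill by information
    return shadow

def next_item(pool, administered, theta, length, content_min):
    """Administer the most informative free item from the shadow test; the
    unused remainder is discarded and the form is reassembled next step."""
    shadow = assemble_shadow_test(pool, administered, theta, length, content_min)
    free = [i for i in shadow if i not in administered]
    return max(free, key=lambda i: item_info(pool[i]["a"], pool[i]["b"], theta))
```

Because every administered item is drawn from a form that satisfies all specifications, the completed adaptive test is guaranteed to satisfy them as well.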


2021
Author(s):
Masaki Uto

Abstract: Performance assessment, in which human raters assess examinee performance on a practical task, often involves the use of a scoring rubric consisting of multiple evaluation items to increase the objectivity of evaluation. However, even when using a rubric, assigned scores are known to depend on characteristics of the rubric's evaluation items and the raters, thus decreasing the accuracy of ability measurement. To resolve this problem, item response theory (IRT) models have been proposed that can estimate examinee ability while accounting for the effects of these characteristics. These IRT models assume unidimensionality, meaning that a rubric measures one latent ability. In practice, however, this assumption might not be satisfied because a rubric's evaluation items are often designed to measure multiple sub-abilities that constitute a targeted ability. To address this issue, this study proposes a multidimensional IRT model for rubric-based performance assessment. Specifically, the proposed model is formulated as a multidimensional extension of a generalized many-facet Rasch model. Moreover, the No-U-Turn variant of the Hamiltonian Monte Carlo algorithm is adopted as the parameter estimation method for the proposed model. The proposed model is useful not only for improving the accuracy of ability measurement, but also for detailed analysis of rubric quality and rubric construct validity. The study demonstrates the effectiveness of the proposed model through simulation experiments and application to real data.
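To make the "multidimensional extension of a generalized many-facet Rasch model" concrete, one plausible form of such a model is sketched below. This is an illustrative formulation based on the standard generalized partial credit / many-facet Rasch families, not the paper's exact specification. The probability that rater $$r$$ assigns category $$k$$ to examinee $$j$$ on evaluation item $$i$$ could be written as

$$
P(X_{ijr} = k \mid \boldsymbol{\theta}_j) =
\frac{\exp \sum_{l=1}^{k} \left( \boldsymbol{\alpha}_i^{\top}\boldsymbol{\theta}_j - \beta_i - \rho_r - d_{il} \right)}
{\sum_{m=0}^{K} \exp \sum_{l=1}^{m} \left( \boldsymbol{\alpha}_i^{\top}\boldsymbol{\theta}_j - \beta_i - \rho_r - d_{il} \right)},
$$

where $$\boldsymbol{\theta}_j$$ is examinee $$j$$'s vector of sub-abilities, $$\boldsymbol{\alpha}_i$$ the loading of evaluation item $$i$$ on those dimensions, $$\beta_i$$ the item's difficulty, $$\rho_r$$ rater $$r$$'s severity, and $$d_{il}$$ the item's category step parameters. Replacing the vector product $$\boldsymbol{\alpha}_i^{\top}\boldsymbol{\theta}_j$$ with a scalar $$\theta_j$$ recovers the unidimensional case described in the abstract.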

