CREATION OF A TEST BATTERY FOR THE EVALUATION OF RHYTHMIC FEELINGS IN UNIVERSITY STUDENTS IN THE FIELD OF PHYSICAL EDUCATION AND SPORT

2021, Vol 13 (2)
Author(s): Alena Kašparová, Kateřina Doležalová, Viléma Novotná

Optimal movement rhythmisation is considered one of the basic prerequisites for improving the quality of movement performance with a particular technique. Well-developed rhythm-movement patterns play a role in the successful learning of various physical activities as well as in athletic performance. University students – future PE and sports teachers – should improve their rhythmic feel skills during their studies so that they can use them later in their work and develop them in their future students. This requires the creation of a test battery for evaluating rhythmic feel skills through a series of music tests. This paper presents the results of tests taken by 121 university students at UK FTVS in Prague, the Czech Republic, and AWFIS in Gdańsk, Poland. The test battery focused on three types of music-motor skills: perception skills and activities (items 1-18), reproduction skills and activities (items 19-27), and production skills and activities (item 28). The data were statistically processed using classical test theory (factor analysis) and item response theory (a two-parameter model); reliability and validity of the test were also calculated. The expected rejection of the proposed hypothesis was confirmed both under classical test theory and under item response theory. The only exception was model 4, where the fit indices (especially TLI = 0.537) pointed more to a lack of evidence for rejecting the hypothesis than to a good fit between model and data. The intention was to create and test models with the best fit to the data; the best fit was found in models 1 and 5. Model 1 [CFI = 0.927, TLI = 0.916, SRMR = 0.090, RMSEA (5%) = 0.030, RMSEA (95%) = 0.059] had a structure corresponding to the proposed test battery and showed relatively good fit to the data, although IRT identified several problematic items. Model 5 [CFI = 0.956, TLI = 0.942, SRMR = 0.073, RMSEA (5%) = 0.030, RMSEA (95%) = 0.111] was unidimensional (a single reproduction factor loading on items 19-27), and its fit indices showed better model-data fit. An optimised test battery should be developed on the basis of these models, followed by a further statistical validation of the test battery.
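As a rough illustration of the two-parameter IRT model named in the abstract above, the following Python sketch implements the 2PL item response function; the discrimination and difficulty values are hypothetical and are not parameters estimated in the study.

    # Minimal sketch of the two-parameter logistic (2PL) IRT model.
    # Item parameters are illustrative, not values from the paper.
    import numpy as np

    def p_correct(theta, a, b):
        # 2PL probability that an examinee at ability theta answers correctly:
        # a = item discrimination, b = item difficulty.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # One hypothetical perception item: fairly discriminating, slightly easy.
    a, b = 1.2, -0.5
    for theta in (-2.0, 0.0, 2.0):
        print(f"theta = {theta:+.1f} -> P(correct) = {p_correct(theta, a, b):.3f}")

An item whose curve stays nearly flat as theta grows (low discrimination a) barely separates weaker from stronger students; that is the kind of problematic item an IRT analysis flags.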

Author(s): David L. Streiner, Geoffrey R. Norman, John Cairney

Over the past few decades, there has been a revolution in the approach to scale development. Called item response theory (IRT), this approach challenges the notions that scales must be long in order to be reliable and that the psychometric properties of a scale derived from one group of people cannot be applied to different groups. This chapter provides an introduction to IRT and discusses how it can be used to develop new scales and to shorten existing scales that were developed using the more traditional approach of classical test theory. IRT can also yield scales with interval-level properties, unlike those derived from classical test theory. Further, it allows people to be compared to one another even though they may have completed different items, which makes computer-adaptive testing possible. The chapter concludes by discussing the advantages and disadvantages of IRT.
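To make the computer-adaptive testing idea concrete, here is a minimal sketch of the core CAT loop under a 2PL model: administer the unanswered item that is most informative at the current ability estimate, then re-estimate ability from the responses so far. Everything below (the item bank, the simulated examinee, the three-item stopping rule) is an invented illustration, not code from the chapter.

    # Minimal sketch of a computer-adaptive testing (CAT) loop under a 2PL model.
    import numpy as np

    rng = np.random.default_rng(0)

    def p_correct(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        p = p_correct(theta, a, b)
        return a**2 * p * (1.0 - p)  # Fisher information of a 2PL item

    # Hypothetical item bank: rows of (discrimination a, difficulty b).
    bank = np.array([[1.5, -1.0], [1.0, 0.0], [2.0, 0.5], [0.8, 1.5], [1.2, -0.3]])
    true_theta = 0.8                # simulated examinee
    grid = np.linspace(-4, 4, 161)  # candidate theta values
    log_lik = np.zeros_like(grid)   # running log-likelihood on the grid
    unused = set(range(len(bank)))
    theta_hat = 0.0

    for step in range(3):
        # Pick the unused item with maximum information at the current estimate.
        i = max(unused, key=lambda j: item_information(theta_hat, *bank[j]))
        unused.remove(i)
        a, b = bank[i]
        answered_correctly = rng.random() < p_correct(true_theta, a, b)
        p = p_correct(grid, a, b)
        log_lik += np.log(p) if answered_correctly else np.log(1.0 - p)
        theta_hat = grid[np.argmax(log_lik)]  # grid-search ML re-estimate
        print(f"step {step}: item {i}, correct={answered_correctly}, "
              f"theta_hat = {theta_hat:+.2f}")

Because each examinee only sees the items that are informative at their own level, a short adaptive test can match the precision of a much longer fixed-form test, which is the chapter's case against long scales.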


2020, Vol 64 (3), pp. 219-237
Author(s): Brandon LeBeau, Susan G. Assouline, Duhita Mahatmya, Ann Lupkowski-Shoplik

This study investigated the application of item response theory (IRT) to expand the range of ability estimates for gifted (hereinafter referred to as high-achieving) students' performance on an above-level test. Using a sample of fourth- to sixth-grade high-achieving students (N = 1,893), we conducted a study to compare estimates from two measurement theories, classical test theory (CTT) and IRT. CTT and IRT make different assumptions about the analysis, which affect the reliability and validity of the scores obtained from the test. IRT can also differentiate students across grades or within a grade by using the unique string of correct and incorrect answers each student produces while taking the test. This differentiation may have implications for identifying or classifying students who are ready for advanced coursework. The differentiation for the Math, Reading, and Science tests, and the impact the two measurement frameworks can have on the classification of students, are explored. Implications for academic talent identification with the talent search model and for the development of academic talent are discussed.
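The scoring idea above, that IRT uses the student's full pattern of right and wrong answers rather than only the raw number correct, can be sketched as a maximum-likelihood ability estimate under an assumed 2PL model; all item parameters and the response pattern below are made up for illustration.

    # Minimal sketch: maximum-likelihood ability estimate from a response pattern.
    import numpy as np
    from scipy.optimize import minimize_scalar

    a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])    # hypothetical discriminations
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # hypothetical difficulties
    responses = np.array([1, 1, 0, 1, 0])      # one student's right/wrong pattern

    def neg_log_likelihood(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    # Maximise the likelihood of this exact answer string over theta.
    result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded")
    print(f"theta_hat = {result.x:.3f}")

Under CTT, two students with the same raw score receive the same scale score; here, swapping which items were answered correctly (for example, missing easy items but passing hard ones) generally changes theta_hat, which is what lets IRT spread out ability estimates for high-achieving students on an above-level test.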

