An introduction to the many-facet Rasch model as a method to improve observational quality measures with an application to measuring the teaching of emotion skills

2021, Vol. 55, pp. 149–164
Author(s): Rachel A. Gordon, Fang Peng, Timothy W. Curby, Katherine M. Zinsser

2010, Vol. 42(4), pp. 944–956
Author(s): Michelangelo Vianello, Egidio Robusto

2008, Vol. 17(3), pp. 47–68
Author(s): Jason E. Chapman, Ashli J. Sheidow, Scott W. Henggeler, Colleen A. Halliday-Boykins, Phillippe B. Cunningham

2018, Vol. 122(2), pp. 748–772
Author(s): Wen-Ta Tseng, Tzi-Ying Su, John-Michael L. Nix

This study applied the many-facet Rasch model to assess learners’ translation ability in an English-as-a-foreign-language context. Few attempts have been made in extant research to detect and calibrate rater severity in the domain of translation testing. To fill this gap, the study documented the process of validating a test of Chinese-to-English sentence translation and modeled raters’ scoring propensities, defined by harshness or leniency, expert/novice effects on severity, and concomitant effects on item difficulty. Two hundred twenty-five third-year Taiwanese senior high school students and six educators from tertiary and secondary institutions served as participants. The students’ mean age was 17.80 years (SD = 1.20, range 17–19). The exam consisted of 10 translation items adapted from two entrance exams. The results showed that this subjectively scored performance assessment exhibited robust unidimensionality, thus reliably measuring translation ability free from unmodeled disturbances. Furthermore, discrepancies in ratings between novice and expert raters were identified and modeled by the many-facet Rasch model. Implications for applying the many-facet Rasch model in translation tests at the tertiary level are discussed.
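For reference (not reproduced from the article itself), the standard many-facet Rasch model with a rater facet, in Linacre’s rating-scale formulation, expresses the log-odds of an examinee receiving adjacent rating categories as:

```latex
\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]
% B_n : ability of examinee n
% D_i : difficulty of item i
% C_j : severity of rater j (harshness > 0, leniency < 0)
% F_k : threshold of category k relative to category k-1
```

It is the rater-severity term \(C_j\) that lets the model separate a judge’s harshness or leniency from examinee ability and item difficulty, which is what the expert/novice comparison above relies on.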


2017, Vol. 22(3), pp. 377–393
Author(s): D. Gregory Springer, Kelly D. Bradley

Prior research indicates mixed findings regarding the consistency of adjudicators’ ratings at large-ensemble festivals, yet the results of these festivals strongly affect the perceived success of instrumental music programs and the perceived effectiveness of their directors. In this study, Rasch modeling was used to investigate the potential influence of adjudicators on performance ratings at a live large-ensemble festival. Evaluation forms from a junior high school concert band festival adjudicated by a panel of three expert judges were analyzed using the many-facet Rasch model. Analyses revealed several trends. First, the practice of assigning “half points” between adjacent response options on the 5-point rating scale resulted in redundancy and measurement noise. Second, adjudicators provided relatively similar ratings for conceptually distinct criteria, which could be evidence of a halo effect. Third, although all judges demonstrated relatively lenient ratings overall, one judge provided more severe ratings than their peers. Finally, an exploratory interaction analysis between the facets of judges and bands indicated the presence of rater-mediated bias. Implications for music researchers and ensemble adjudicators are discussed in the context of ensemble performance evaluations, and a measurement framework that can be applied to other aspects of music performance evaluation is introduced.
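To illustrate how a judge’s severity shifts expected scores in such a model, the following is a minimal sketch under an assumed rating-scale parameterization and assumed threshold values; it is not the study’s analysis code.

```python
# Minimal sketch (assumed parameterization, not the study's code):
# under a rating-scale many-facet Rasch model, each judge carries a
# severity parameter subtracted from the performance measure, so a
# more severe judge yields lower expected ratings for the same band.
import math

def category_probs(theta, severity, thresholds):
    """Probabilities of each rating category 0..len(thresholds).

    The log-odds of category k over k-1 is theta - severity - thresholds[k-1].
    """
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + theta - severity - tau)
    exp_logits = [math.exp(v) for v in logits]
    total = sum(exp_logits)
    return [e / total for e in exp_logits]

def expected_rating(theta, severity, thresholds):
    """Model-expected rating for a performance of measure theta."""
    probs = category_probs(theta, severity, thresholds)
    return sum(k * p for k, p in enumerate(probs))

# Assumed category thresholds for a 4-category scale (logits)
thresholds = [-1.0, 0.0, 1.0]
lenient = expected_rating(0.5, -0.5, thresholds)  # judge 0.5 logits lenient
severe = expected_rating(0.5, 0.5, thresholds)    # judge 0.5 logits severe
print(round(lenient, 2), round(severe, 2))  # → 2.22 1.5
```

The gap between the two expected ratings for the same performance is exactly the kind of judge effect the festival analysis above calibrates and adjusts for.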


2012, Vol. 73(3), pp. 386–411
Author(s): Pamela K. Kaliski, Stefanie A. Wind, George Engelhard, Deanna L. Morgan, Barbara S. Plake, ...

2015, Vol. 43(2), pp. 299–316
Author(s): Sonia Ferreira Lopes Toffoli, Dalton Francisco de Andrade, Antonio Cezar Bornia
