Understanding and Developing Rubrics for Music Performance Assessment

2012 ◽  
Vol 98 (3) ◽  
pp. 36-42 ◽  
Author(s):  
Brian C. Wesolowski

A primary difficulty with music performance assessment is managing its subjective nature. To help improve objectivity, rubrics can be used to develop a set of guidelines for clearly assessing student performance. Moreover, rubrics serve as documentation for student achievement that provides music teachers with a written form of accountability. This article examines the complexities of music performance assessment and provides an argument for the benefit of rubrics in the assessment process. In addition, discussion includes an overview of the various types of rubrics as well as suggestions for choosing and writing rubrics to assess musical performances.

Author(s):  
Daniel Massoth

When technology is used for assessment in music, certain considerations can affect the validity, reliability, and depth of analysis. This chapter explores factors that are present in the three phases of the assessment process: recognition, analysis, and display of assessment of a musical performance. Each phase has inherent challenges embedded within internal and external factors. The goal here is not to provide an exhaustive analysis of any or all aspects of assessment but, rather, to present the rationale for and history of using technology in music assessment and to examine the philosophical and practical considerations. A discussion of possible future directions of product research and development concludes the chapter.


2016 ◽  
Vol 45 (3) ◽  
pp. 375-399 ◽  
Author(s):  
Brian C. Wesolowski

This study investigated rater cognition by exploring rater types based on differential severity and leniency associated with rating scale items, rating scale category functioning, and dimensions of music performance assessment. The purpose was to empirically identify typologies of operational raters based on systematic differential severity indices in the context of large-ensemble music performance assessment. A rater cognition information-processing model was explored based on two frameworks: a framework for scoring and a framework for audition. Rater scoring behavior was examined using the framework for scoring, in which raters’ mental processes compare auditory images against the scoring criteria used to generate a scoring decision. The scoring decisions were evaluated using the Multifaceted Rasch Partial Credit Measurement Model. A rater typology was then examined under the framework for audition, where similar schemata were defined through raters’ clustering of differential severity indices related to items and compared across performance dimensions. The results revealed three distinct rater types: (a) the syntactical rater, (b) the expressive rater, and (c) the mental representation rater. Implications for fairness and precision in the assessment process are discussed, as are considerations for the validity of scoring processes.


2004 ◽  
Vol 21 (1) ◽  
pp. 111-125 ◽  
Author(s):  
Diana Blom ◽  
Kim Poole

This paper discusses a project in which third-year undergraduate Performance majors were asked to assess their second-year peers. The impetus for launching the project came from some stirrings of discontent amongst a few students. Instead of finding the assessment of their peers a manageable task, most students found the breadth of musical focus, across a diverse range of musical styles on a wide range of instruments, daunting and difficult. Despite this, students and staff believed the task had proved valuable for learning about the assessment process itself and for understanding the performance process.


2016 ◽  
Vol 33 (5) ◽  
pp. 662-678 ◽  
Author(s):  
Brian C. Wesolowski ◽  
Stefanie A. Wind ◽  
George Engelhard

The use of raters as a methodological tool to detect significant differences in performances, and as a means to evaluate music performance achievement, is a solidly defended practice in music psychology, education, and performance science research. However, psychometric concerns exist regarding raters’ precision in the use of task-specific scoring criteria, and a methodology for managing rater quality in rater-mediated assessment practices has not been systematically developed in the field of music. The purpose of this study was to examine rater precision through the analysis of rating scale category structure across a set of raters and items within the context of large-group music performance assessment, using a Multifaceted Rasch Partial Credit (MFR-PC) Measurement Model. Allowing a separate parameterization of the rating scale for each rater can more clearly detect variability in rater judgment and improve model-data fit, thereby enhancing objectivity, fairness, and precision of rating quality in the music assessment process. Expert judges (N = 23) rated a set of four recordings by middle school, high school, collegiate, and professional jazz big bands. A single common expert rater evaluated all 24 jazz ensemble performances. The data suggest that raters significantly vary in severity, items significantly vary in difficulty, and rating scale category structure significantly varies across raters. Implications for the improvement and management of rater quality in music performance assessment are provided.
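In the Multifaceted Rasch Partial Credit model described above, the log-odds of a performance receiving rating category k rather than k − 1 is modeled as ability minus item difficulty, minus rater severity, minus a rater-specific category threshold; letting each rater carry their own thresholds is what allows the model to detect differences in how raters use the scale categories. A minimal illustrative sketch follows; all parameter values are hypothetical, not estimates from the study.

```python
import math

def mfr_pc_probabilities(theta, item_difficulty, rater_severity, thresholds):
    """Category probabilities under a Multifaceted Rasch Partial Credit model.

    The log-odds of scoring in category k versus k - 1 is:
        theta - item_difficulty - rater_severity - thresholds[k - 1]
    Thresholds are rater-specific, which is what lets the model capture
    rater-to-rater differences in rating scale category structure.
    """
    # Cumulative log-odds numerators for categories 0..K
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + (theta - item_difficulty - rater_severity - tau))
    total = sum(math.exp(l) for l in logits)
    return [math.exp(l) / total for l in logits]

# A lenient and a severe rater scoring the same performance on the same item
# (theta = ensemble achievement, on the same logit scale as the other facets)
lenient = mfr_pc_probabilities(theta=1.0, item_difficulty=0.0,
                               rater_severity=-0.5, thresholds=[-1.0, 0.0, 1.0])
severe = mfr_pc_probabilities(theta=1.0, item_difficulty=0.0,
                              rater_severity=1.5, thresholds=[-1.0, 0.0, 1.0])
```

For the same performance, the lenient rater assigns a much higher probability to the top rating category than the severe rater does; systematic severity differences of exactly this kind are what the MFR-PC analysis quantifies.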


2020 ◽  
Vol 12 (2-2) ◽  
Author(s):  
Nurul Akmar Said

Assessment is part of the teaching and learning process, aimed at improving student performance. Assessment also improves the quality of education, which enhances lifelong learning skills and performance in the educational context. Evaluation is a systematic process of measuring the outcomes of assessment: it involves the collection, analysis, and translation of student achievement levels against the teaching objectives. Today, alternative assessments implemented in the learning system are a form of assessment that complements conventional assessment. Alternative assessments focus on student growth over time as well as on the final product. Finally, there are four categories of performance assessment: response building, product evaluation, performance assessment, and process-focused assessment. Each category comes with examples of activities to support teaching and learning in the classroom.


Author(s):  
M. Christina Schneider ◽  
Jennifer S. McDonel ◽  
Charles A. DePascale

All music educators need training in how to create high-quality performance-based assessments and corresponding rubrics to (1) measure student learning in the classroom, (2) compare and rank students in an audition context, and (3) respond to and support student learning. The purpose of this chapter is to show that content standards and assessments, together, define the intended outcomes of student learning. Teachers and assessment developers must determine the purpose of the assessment, and the desired inferences regarding a student’s understanding or skills, as the first step in the assessment process. Having determined the desired inference, teachers and assessment developers must center the creation of assessments on the answers to three key questions: What knowledge, skills, or other attributes of student performance should be assessed? What evidence will demonstrate that knowledge and those skills? What tasks will elicit those pieces of evidence from students?


Author(s):  
Nugroho Budhiwaluyo ◽  
Rayandra Asyhar ◽  
Bambang Hariyadi

This research aims to produce a performance-assessment instrument for a Cell Structure and Function experiment. The development followed the ADDIE model. Based on experts’ judgment, the instrument was valid and could be tested in the field. Field-test results showed that the instrument has high validity and reliability for measuring student performance in the Cell Structure and Function experiment. It is therefore concluded that this performance-assessment instrument is, theoretically and practically, of good quality for measuring both process and product performance in the Cell Structure and Function experiment. Keywords: Development, Performance-Assessment Instrument, Cell Structure and Function Experiment


2021 ◽  
Vol 12 (1) ◽  
pp. 172-190
Author(s):  
Rháleff N. R. Oliveira ◽  
Rafaela V. Rocha ◽  
Denise H. Goya

Serious Games (SGs) are used to support knowledge acquisition and skill development. To ensure a game’s effectiveness, the results achieved must be measured, both during and after play. In this context, the aim is to develop and evaluate AvaliaJS, a conceptual model to structure, guide, and support the planning, design, and execution of student performance assessment in SGs. AvaliaJS has two artifacts: a canvas model, for high-level planning, and an assessment project document, for more detailed specification of the canvas. To analyze and exemplify the use of the model, the artifacts were applied to three ready-made games as a proof of concept. In addition, the quality of AvaliaJS was evaluated by experts in SG development and assessment using a questionnaire. The experts’ answers confirm good internal consistency (Cronbach’s alpha α = 0.87), which indicates that AvaliaJS is correct, authentic, consistent, clear, unambiguous, and flexible. However, the model will need to be validated during the process of creating a new game to ensure its usability and efficiency. In general, AvaliaJS can support a team in the planning, documentation, and development of artifacts and data collection in SGs, as well as in executing the assessment, measuring learning, and providing constant, personalized feedback to students.
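The abstract above reports internal consistency of the expert questionnaire as Cronbach’s alpha α = 0.87. As a minimal illustrative sketch (not code from the paper), alpha can be computed from item-level scores as the ratio of item variances to the variance of respondents’ total scores:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for k items, each a list of the respondents' scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Three questionnaire items answered by five respondents (hypothetical data);
# perfectly consistent items yield alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3, 4, 5],
                        [1, 2, 3, 4, 5],
                        [1, 2, 3, 4, 5]])
```

When the items co-vary strongly, total-score variance dominates the summed item variances and alpha approaches 1; a value of 0.87, as reported, indicates the experts answered the questionnaire items consistently.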

