Assessment of Learning: Test Design and Administration Factors That Affect Student Performance

2004 ◽  
Author(s):  
Rita J. Czaja ◽  
Scottie Barty


2015 ◽  
Vol 117 (1) ◽  
pp. 1-36
Author(s):  
Maria Araceli Ruiz-Primo ◽  
Min Li

Background A long-standing premise in test design is that contextualizing test items makes them concrete, less demanding, and more conducive to determining whether students can apply or transfer their knowledge. Purpose We assert that despite decades of study and experience, much remains to be learned about how to construct effective and fair test items with contexts. Too little is known about how item contexts can be appropriately constructed and used, and even less about the relationship between context characteristics and student performance. The exploratory study presented in this paper seeks to contribute to knowledge about test design and construction by focusing on this gap. Research Design We address two key questions: (a) What are the characteristics of contexts used in the PISA science items? and (b) What are the relationships between different context characteristics and student performance? We propose a profiling approach to capture information about six context dimensions: type of context, context role, complexity, resources, level of abstraction, and connectivity. To test the approach empirically, we sampled a total of 52 science items from PISA 2006 and 2009. We describe the context characteristics of the items at two levels (named layers): general (testlet context) and specific (item context). Conclusion We provide empirical evidence about the relationships of these characteristics with student performance as measured by the international percentage of correct responses. We found that the dimension of context resources (e.g., pictures, drawings, photographs) for general contexts and level of abstraction for specific contexts are associated with student performance.
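The kind of association the study reports can be illustrated with a minimal sketch. The item profiles and percent-correct values below are entirely hypothetical (not the PISA data), and the Spearman rank correlation is one plausible way, among several, of relating an ordinal context dimension such as level of abstraction to the percentage of correct responses:

```python
# Hypothetical item profiles: abstraction level (1 = concrete, 3 = abstract)
# paired with percent-correct values. Illustrative only, not PISA data.
abstraction = [1, 1, 2, 2, 3, 3]
pct_correct = [78, 74, 65, 61, 52, 49]

def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rho = spearman(abstraction, pct_correct)
# Strongly negative here: in this fabricated sample, more abstract
# item contexts pair with lower percent correct.
```

In practice one would also report a significance test and, with only a few dozen items, interpret any such correlation cautiously.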


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sandra Seno-Alday ◽  
Amanda Budde-Sung

Purpose This paper aims to explore the impact of differences in educational traditions on conventions of teaching and learning, and on the measurement of learning outcomes. These are critical issues within the context of business schools that are steeped in one dominant tradition but have a large population of international students previously educated in other traditions. The paper argues that international students face the challenge of satisfactorily demonstrating learning according to foreign conventions that differ from those they were accustomed to within their home educational tradition. Design/methodology/approach This study draws on a bilingual literature review to capture differences in educational traditions between Australia and China. It then uses logistic regression to analyze the performance of 800 domestic and international Chinese students across a range of different assessment formats at a large Australian business school. Findings The study finds statistically significant differences in the performance of these two student groups on different assessment types. It concludes that assessment conventions shaped by a specific educational tradition can hamper the effective demonstration of learning among students from other educational traditions. Originality/value The paper focuses on issues related to the assessment of learning in multicultural higher education contexts, which has received less attention in the literature than teaching approaches in multicultural contexts. The paper also highlights important implications for the validity of the measurement of learning outcomes and for the subsequent impact on graduate recruitment.
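As a sketch of the kind of analysis described above, the following fits a logistic regression by plain gradient ascent. The pass/fail counts are fabricated for illustration (the study's data and model specification are not reproduced here); the point is only how a group indicator can enter a logistic model of performance on a given assessment format:

```python
import math

# Fabricated outcomes for one assessment format (illustration only):
# 10 "domestic" students (group = 0): 8 pass, 2 fail
# 10 "international" students (group = 1): 6 pass, 4 fail
X = [[1.0, 0.0]] * 10 + [[1.0, 1.0]] * 10   # [intercept, group indicator]
y = [1] * 8 + [0] * 2 + [1] * 6 + [0] * 4   # 1 = passed the assessment

def fit_logistic(X, y, lr=1.0, epochs=3000):
    """Maximum-likelihood logistic regression via batch gradient ascent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                grad[j] += (yi - p) * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    return w

w = fit_logistic(X, y)
# w[0] -> log-odds of passing for group 0: log(8/2) ≈ 1.39
# w[1] -> log-odds ratio, group 1 vs group 0: log(6/4) - log(8/2) ≈ -0.98
```

A negative group coefficient here corresponds to lower odds of passing for the second group on this format; in real work one would use an established statistics package and report standard errors rather than point estimates alone.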


2014 ◽  
Vol 9 (1) ◽  
pp. 1-13
Author(s):  
Paul A. Hong ◽  
David R. Fordham ◽  
David C. Hayes

ABSTRACT Developed by a graduate student and leveraging his experience with a family-owned business, this case provides an interesting scenario addressing and integrating a number of factors relating to fraud, including incompatible duties, the fraud triangle, psychological factors, and “red flags.” Results from an undergraduate AIS course support the case's efficacy as a student assignment. Student performance on the case closely corresponded with other assessment-of-learning measures, supporting the use of the case for evaluation purposes. The case also increased student interest and motivation. Most importantly, use of the case resulted in measured enhancement of students' abilities to recognize problems in an accounting system. A spreadsheet was developed to increase the efficiency of grading the case.


2020 ◽  
Author(s):  
Anne Habedank ◽  
Pia Kahnau ◽  
Lars Lewejohann

Abstract In rodents, the T-maze test is commonly used to investigate spontaneous alternating behaviour, but it can also be used to investigate memory, stimulus discrimination, or preference between goods. However, especially for T-maze preference tests, there is no recommended protocol, and researchers frequently report difficulties reproducing this test with mice. Here, we aimed to develop an efficient protocol with female C57BL/6J mice, conducting two preference tests with different designs. In the first test, on two consecutive days with five trials, thirteen mice had to choose between two fluids. In the second preference test, on five consecutive days with two (week 1) or three (week 2) trials, twelve mice had to choose between one arm containing bedding mixed with millet and one containing only bedding. This test design resembled a simple learning test (learn where to find the rewarded and the unrewarded arm on the basis of spatial, olfactory, and visual cues). In both experiments, mice took only a few seconds per trial to run the maze and make their choice. However, in both experiments mice failed to show any preference for one of the arms. Instead, they alternated choices. We therefore consider the T-maze test rather unsuitable for testing preference or learning behaviour in C57BL/6J mice.
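The alternation behaviour reported above lends itself to a simple check: count how often consecutive arm choices differ and compare that proportion with the 50% expected by chance using an exact binomial test. The choice sequence below is invented for illustration and is not the authors' data:

```python
from math import comb

# Hypothetical arm choices of one mouse over 15 T-maze trials (L/R).
choices = "LRLRLRLLRLRLRRL"

transitions = len(choices) - 1
alternations = sum(a != b for a, b in zip(choices, choices[1:]))
rate = alternations / transitions   # 12 of 14 transitions alternate here

def binom_two_sided(k, n):
    """Two-sided p-value for k successes in n fair (p = 0.5) trials,
    computed as twice the smaller exact tail, capped at 1."""
    total = 2 ** n
    lower = sum(comb(n, i) for i in range(0, k + 1)) / total   # P(X <= k)
    upper = sum(comb(n, i) for i in range(k, n + 1)) / total   # P(X >= k)
    return min(1.0, 2 * min(lower, upper))

p_value = binom_two_sided(alternations, transitions)
# An alternation rate well above 0.5 yields a small p-value,
# i.e. alternation beyond chance for this fabricated sequence.
```

With real data one would aggregate over animals rather than trials within one animal, since consecutive choices by the same mouse are not independent.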


2017 ◽  
Vol 14 (3) ◽  
pp. 5-18
Author(s):  
Don Houston ◽  
James N. Thompson
Discussions about the relationships between formative and summative assessment have come full circle after decades of debate. For some time, formative assessment, with its emphasis on feedback to students, was promoted as better practice than traditional summative assessment. Summative assessment practices were broadly criticised as distanced from the learning process. More recently, discussions have refocused on the potentially complementary characteristics of formative and summative purposes of assessment. However, studies on practical designs that link formative and summative assessment in constructive ways are rare. In paramedic education, as in many other professional disciplines, strong traditions of summative assessment - assessment ‘of’ learning - have long dominated. Communities require that a graduate has been judged fit to practice. The assessment redesign described and evaluated in this paper sought to rebalance assessment relationships in a capstone paramedic subject, integrating formative assessment for learning with summative assessment of learning. Assessment was repositioned as a communication process about learning. Through a variety of frequent assessment events, judgement of student performance was accompanied by rich feedback. Each assessment event provided information about learning unique to each student’s needs and shaped subsequent assessment events. Student participants in the formal evaluation of the subject indicated high levels of perceived value and effectiveness on learning across the assessment events, with broad agreement also on perceptions of preparedness: ‘readiness to practice’. Our approach, focused on linking assessment events, resulted in assessments that simultaneously provided formative communication to students and summative outcome information to others. The formative-summative dichotomy disappeared: all assessment became part of communication about learning.


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 64-70 ◽  
Author(s):  
Barbara E.C. Knollmann-Ritschel ◽  
Steven J. Durning

ABSTRACT Medical school education has traditionally been driven by single-discipline teaching and assessment. Newer medical school curricula often implement an organ-based approach that fosters integration of basic science and clinical disciplines. Concept maps are widely used in education. Through diagrammatic depiction of a variety of concepts and their specific connections with other ideas, concept maps provide a unique perspective into learning and performance that can complement other assessment methods commonly used in medical schools. In this innovation, we describe using concept maps as the vehicle for a modified classic Team-Based Learning (TBL) exercise. Modifications to traditional TBL in our innovation included replacing an individual assessment using multiple-choice questions with concept maps, as well as combining the group assessment and application exercise, whereby teams created concept maps. These modifications were made to further assess understanding of content across the Fundamentals module (the introductory module of the preclerkship curriculum). While preliminary, student performance and feedback from faculty and students support the use of concept maps in TBL. Our findings suggest concept maps can provide a unique means of assessing learning and generating feedback to students. Concept maps can also demonstrate knowledge acquisition, organization of prior and new knowledge, and synthesis of that knowledge across disciplines, complementing traditional multiple-choice questions as an additional means of assessment.

