High-Stakes Assessment
Recently Published Documents


TOTAL DOCUMENTS: 84 (FIVE YEARS: 21)

H-INDEX: 11 (FIVE YEARS: 1)

2022 ◽  
Vol 184 ◽  
pp. 111190
Author(s):  
Rosa Novo ◽  
Bárbara Gonzalez ◽  
Magda Roberto

2021 ◽  
Author(s):  
Nazdar E. Alkhateeb ◽  
Ali Al-Dabbagh ◽  
Yaseen Mohammed ◽  
Mohammed Ibrahim

Any high-stakes assessment that leads to an important decision requires careful consideration in determining whether a student passes or fails. Despite the implementation of many standard-setting methods in clinical examinations, concerns remain about the reliability of pass/fail decisions in high-stakes assessment, especially clinical assessment. This observational study proposes a defensible pass/fail decision based on the number of failed competencies. The study, conducted in Erbil, Iraq, in June 2018, used the results of 150 medical students on their final objective structured clinical examination. Cutoff scores and pass/fail decisions were calculated using the modified Angoff, borderline, borderline-regression, and holistic methods. The results were compared with each other and with a new competency method using Cohen’s kappa, and Rasch analysis was used to assess the consistency of the competency data with Rasch model estimates. The competency method resulted in 40 (26.7%) students failing, compared with 76 (50.6%), 37 (24.6%), 35 (23.3%), and 13 (8%) for the modified Angoff, borderline, borderline-regression, and holistic methods, respectively. The competency method demonstrated a sufficient degree of fit to the Rasch model (mean outfit and infit statistics of 0.961 and 0.960, respectively). In conclusion, the competency method was more stringent in determining pass/fail than the other standard-setting methods, except for the modified Angoff method. The fit of the competency data to the Rasch model provides evidence for the validity and reliability of the pass/fail decisions.
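
As an illustration of the comparison step described in this abstract, the minimal sketch below (not the authors’ code; the decision vectors and method names are hypothetical) computes Cohen’s kappa for the agreement between the pass/fail decisions produced by two standard-setting methods.

# Minimal sketch (Python): Cohen's kappa between two binary pass/fail decision vectors.
# 1 = fail, 0 = pass. The vectors here are hypothetical examples, not study data.

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length binary decision vectors."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    p_fail_a, p_fail_b = sum(a) / n, sum(b) / n                   # marginal fail rates
    expected = p_fail_a * p_fail_b + (1 - p_fail_a) * (1 - p_fail_b)  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical decisions: competency method vs. borderline-regression method
competency = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
borderline_regression = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
print(round(cohens_kappa(competency, borderline_regression), 2))  # -> 0.74

Applied to all pairs of methods, this yields the kind of pairwise agreement comparison the study reports; the Rasch fit statistics (infit/outfit) would come from a dedicated Rasch modelling package rather than from a few lines like these.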


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Mary Richardson ◽  
Rose Clesham

Our world has been transformed by technologies incorporating artificial intelligence (AI) in mass communication, employment, entertainment and many other aspects of our daily lives. Within the domain of education, however, our ways of working, and particularly of assessing, have hardly changed at all. We continue to prize examinations and summative testing as the most reliable way to assess educational achievement, and we continue to rely on paper-based test delivery as our modus operandi. Inertia, tradition and aversion to perceived risk have resulted in a lack of innovation (James, 2006), particularly in high-stakes assessment. The summer of 2020 brought this deficit into very sharp focus with the A-level debacle in England, where grades were awarded, challenged, rescinded and reset. Such events are potentially catastrophic for trust in national examinations, and the problems arise from using just one way to define academic success and one way to operationalize that approach to assessment. While sophisticated digital learning platforms, multimedia technologies and wireless communication are transforming what, when and how learning can take place, national and international assessment thinking and practice trail behind. In this article, we present some of the current research and advances in AI and how they can be applied to the context of high-stakes assessment. Our discussion focuses not on whether we should be using these technologies, but on how we can use them effectively to better support practice. An example from one testing agency in England, using a globally popular test of English that assesses oral, aural, reading and writing skills, illustrates how new technologies can augment assessment theory and practice.


2021 ◽  
Vol 102 (6) ◽  
pp. 38-43
Author(s):  
Drew H. Gitomer ◽  
José Felipe Martínez ◽  
Dan Battey

In 2019, Drew H. Gitomer, Dan Battey, and José Felipe Martínez published a paper detailing apparent violations of fundamental principles and norms in the reporting of technical information about the edTPA, a widely used high-stakes assessment for teacher licensure. In this article, they describe and criticize the lack of an appropriate response to these concerns by state and professional institutions. Without such institutions providing guardrails to ensure professionally ethical practices, protections are compromised for those most directly affected by assessments. This is a story about edTPA, but it has implications for assessment more broadly.


Author(s):  
Cecil R. Reynolds ◽  
Robert A. Altmann ◽  
Daniel N. Allen
