Data-Driven Student Knowledge Assessment through Ill-Defined Procedural Tasks

Author(s):  
Jaime Gálvez ◽  
Eduardo Guzmán ◽  
Ricardo Conejo
2020 ◽  
Vol 12 (02) ◽  
pp. e234-e238
Author(s):  
Isdin Oke ◽  
Steven D. Ness ◽  
Jean E. Ramsey ◽  
Nicole H. Siegel ◽  
Crandall E. Peeler

Abstract
Introduction: Residency programs receive an institutional keyword report following the annual Ophthalmic Knowledge Assessment Program (OKAP) examination containing the raw number of incorrectly answered questions. Programs would benefit from a method to compare relative performance between subspecialty sections. We propose a technique of normalizing the keyword report to determine relative subspecialty strengths and weaknesses in trainee performance.
Methods: We retrospectively reviewed our institutional keyword reports from 2017 to 2019. We normalized the percentage of correctly answered questions for each postgraduate year (PGY) level by dividing the percent of correctly answered questions for each subspecialty by the percent correct across all subsections for that PGY level. We repeated this calculation for each PGY level in each subsection for each calendar year of analysis.
Results: There was a statistically significant difference in mean performance between the subspecialty sections (p = 0.038). We found above-average performance in the Uveitis and Ocular Inflammation section (95% confidence interval [CI]: 1.02–1.18) and high variability of performance in the Clinical Optics section (95% CI: 0.76–1.34).
Discussion: The OKAP institutional keyword reports are extremely valuable for residency program self-evaluation. Performance normalized for PGY level and test year can reveal insightful trends in the relative strengths and weaknesses of trainee knowledge and guide data-driven curriculum improvement.
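The normalization described in the Methods section can be sketched in a few lines: each subspecialty's percent correct is divided by the percent correct across all subsections for that PGY level, so a ratio above 1.0 indicates relative strength and below 1.0 relative weakness. The subspecialty names and percentages below are hypothetical illustrations, not values from the institutional report.

```python
def normalize_scores(percent_correct_by_subspecialty, overall_percent_correct):
    """Divide each subspecialty's percent correct by the overall percent
    correct for the same PGY level, yielding a relative-performance ratio."""
    return {subspecialty: pct / overall_percent_correct
            for subspecialty, pct in percent_correct_by_subspecialty.items()}

# Hypothetical PGY-2 keyword-report percentages (illustrative only)
pgy2 = {"Uveitis": 72.0, "Clinical Optics": 55.0, "Retina": 66.0}
overall = sum(pgy2.values()) / len(pgy2)
ratios = normalize_scores(pgy2, overall)
# In this made-up example, "Uveitis" lands above 1.0 (relative strength)
# and "Clinical Optics" below 1.0 (relative weakness).
```

Because each year's ratios are scaled to that year's and that PGY level's overall performance, ratios from different test years and cohorts can be compared directly, which is what allows the multi-year trend analysis described above.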


2021 ◽  
Vol 4 (2) ◽  
pp. 79
Author(s):  
Ulker Shafiyeva

Standardized tests have been used for student knowledge assessment in many countries, including Azerbaijan. However, studies have shown that standardized tests are not an effective way of measuring students' knowledge because they limit students' creativity and prevent instructors from applying individual teaching methods due to the pressure of passing the tests. The tests do not account for students with different learning abilities and treat them all alike, which may disadvantage some students. Teachers are also pressured to ensure their students pass the tests, leading to an excessive focus on the topics likely to be set rather than on the whole curriculum. The study recommends implementing different assessment methods with no ranking, so that students do not memorize merely to pass tests, competition is eliminated, and equality in the education sector is promoted. The assessment methods should allow students to debate, compare, and analyze ideas through critical thinking, inquiry, and understanding, so that they can apply the learned knowledge in real life. Thus, the importance of an inquiry-based curriculum and assessment is stressed.


1993 ◽  
Author(s):  
Steven A. Harp ◽  
Tariq Samad ◽  
Michael Villano
