Construct Validity and Generalizability of Simulation-Based Objective Structured Clinical Examination Scenarios

2014, Vol 6 (3), pp. 489-494
Author(s): Avner Sidi, Nikolaus Gravenstein, Samsun Lampotang

Abstract
Background: It is not known whether the construct-related validity (progression of scores with level of training) and generalizability of Objective Structured Clinical Examination (OSCE) scenarios previously used with non-US graduating anesthesiology residents translate to a US training program.
Objective: We assessed progression of scores with training level for a validated high-stakes simulation-based anesthesiology examination.
Methods: Fifty US anesthesiology residents in postgraduate years (PGYs) 2 to 4 were evaluated in operating room, trauma, and resuscitation scenarios developed for and used in a high-stakes Israeli Anesthesiology Board examination; passing required a checklist score of 70%, including completion of all critical items.
Results: The OSCE error rate was lower for PGY-4 than for PGY-2 residents in each field, and for most scenarios within each field. The critical item error rate was significantly lower for PGY-4 than PGY-3 residents in operating room scenarios, and for PGY-4 than PGY-2 residents in resuscitation scenarios. The final pass rate was significantly higher for PGY-3 and PGY-4 than PGY-2 residents in operating room scenarios, and was also significantly higher for PGY-4 than PGY-2 residents overall. PGY-4 residents had a better error rate, total scenario score, general evaluation score, critical item error rate, and final pass rate than PGY-2 residents.
Conclusions: The comparable error rates, performance grades, and pass rates of US PGY-4 and non-US (Israeli) graduating (PGY-4 equivalent) residents, and the progression of scores among US residents with training level, demonstrate the construct-related validity and generalizability of these high-stakes OSCE scenarios.
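The pass criterion in the Methods is a conjunctive rule: at least 70% of checklist items performed and no critical item missed. Below is a minimal Python sketch of that rule only; the data structure and item names are hypothetical illustrations, not taken from the examination checklists.

```python
# Minimal sketch (hypothetical items, not the examination's scoring code):
# pass requires checklist score >= 70% AND every critical item performed.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    name: str
    performed: bool
    critical: bool = False

def scenario_passes(items: list[ChecklistItem], threshold: float = 0.70) -> bool:
    score = sum(i.performed for i in items) / len(items)        # fraction of items done
    critical_done = all(i.performed for i in items if i.critical)
    return score >= threshold and critical_done

# Example: 80% of items performed, but one missed critical item still fails.
demo = [ChecklistItem("confirm airway", True, critical=True),
        ChecklistItem("administer epinephrine", False, critical=True),
        ChecklistItem("call for help", True),
        ChecklistItem("check rhythm", True),
        ChecklistItem("document timing", True)]
print(scenario_passes(demo))  # False
```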

2020, Vol 17 (2), pp. 55-59
Author(s): Daniel Ojuka, Nyaim Elly, Kiptoon Dan, Ndaguatha Peter

Background: Examination methods change over time, and audits are useful for quality assurance and improvement.
Objective: To compare the traditional clinical test and the objective structured clinical examination (OSCE) in a department of surgery.
Methods: Examination records from the fifth-year MBChB examinations for 2012–2013 (traditional) and 2014–2015 (OSCE) were analyzed. Using 50% as the pre-agreed pass mark, the pass rate for the clinical examinations in each year was calculated, and these figures were subjected to a t-test to detect significant differences between years and between types of clinical test. A P value of <0.05 was considered statistically significant.
Results: We analyzed 1178 results; most (55.6%) were OSCE results. The average clinical examination score was 59.7% for the traditional examination vs 60.1% for the OSCE; basic surgical skills scores were positively skewed.
Conclusion: With the same teaching setting and examiners, the OSCE may give higher marks than the traditional clinical examination, but it is better at detecting areas of inadequacy for emphasis in teaching.
Keywords: Clinical examination, Traditional, OSCE, Comparison
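The Methods describe comparing the two examination formats with a t-test at a 0.05 significance threshold. The sketch below shows one way to run that comparison in Python; the score distributions and group sizes are synthetic (loosely matching the reported means and the 1178 analyzed results), not the study's actual records.

```python
# Minimal sketch with synthetic data, not the study's examination records:
# independent-samples t-test on traditional vs OSCE scores, flagged at P < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
traditional = rng.normal(loc=59.7, scale=8, size=523)  # illustrative 2012-2013 scores
osce = rng.normal(loc=60.1, scale=8, size=655)         # illustrative 2014-2015 scores

t_stat, p_value = stats.ttest_ind(traditional, osce, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}, "
      f"{'significant' if p_value < 0.05 else 'not significant'} at 0.05")
```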


2009, Vol 123 (10), pp. 1155-1159
Author(s): A B Drake-Lee, D Skinner, M Hawthorne, R Clarke

Abstract
Context: 'High stakes' postgraduate medical examinations should conform to current educational standards. In the UK and Ireland, national assessments in surgery are devised and managed through the examination structure of the Royal Colleges of Surgeons. Their efforts are not reported in the medical education literature. In the current paper, we aim to clarify this process.
Objectives: To replace the clinical section of the Diploma of Otorhinolaryngology with an Objective Structured Clinical Examination, and to set the level of the assessment at one year of postgraduate training in the specialty.
Methods: After 'blueprinting' against the whole curriculum, an Objective Structured Clinical Examination comprising 25 stations was divided into six clinical stations and 19 other stations exploring written case histories, instruments, test results, written communication skills and interpretation skills. The pass mark was set using a modified borderline method and other methods, and statistical analysis of the results was performed.
Results: The results of nine examinations between May 2004 and May 2008 are presented. The pass mark varied between 68 and 82 per cent. Internal consistency was good, with a Cronbach's α value of 0.99 for all examinations and split-half statistics varying from 0.96 to 0.99. Different standard-setting methods gave similar pass marks.
Conclusions: We have developed a summative Objective Structured Clinical Examination for doctors training in otorhinolaryngology, reported herein. The objectives and standards of setting a high-quality assessment were met.
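The Results report internal consistency as Cronbach's α (0.99) alongside split-half statistics. As a reading aid, here is a minimal Python sketch of how Cronbach's α is computed from a candidates-by-stations score matrix; the matrix below is synthetic, correlated data for illustration only, not the examination data.

```python
# Minimal sketch (synthetic data): Cronbach's alpha for a candidates x stations
# score matrix, alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = candidates, columns = stations/items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of stations
    item_vars = scores.var(axis=0, ddof=1)       # per-station variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Correlated synthetic scores (shared candidate ability), so alpha comes out high.
rng = np.random.default_rng(0)
ability = rng.normal(0, 10, size=(40, 1))        # 40 candidates
noise = rng.normal(0, 3, size=(40, 25))          # 25 stations
print(round(cronbach_alpha(70 + ability + noise), 2))
```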


2021, Vol 268, pp. 507-513
Author(s): Catalina Ortiz, Francisca Belmar, Rolando Rebolledo, Javier Vela, Caterina Contreras, ...

2015, Vol 37 (4), pp. 305-312
Author(s): Ayumi ANAN, Yuki NAGAMATSU, Satoko CHOU, Aki SATO, Chieko MATSUOKA, ...
