A Psychometric Approach to Setting a Passing Score on the Korean National Medical Licensing Examination

Author(s):  
Guemin Lee

The National Health Personnel Licensing Examination Board (hereafter NHPLEB) has used 60% correct responses on the overall test and 40% correct responses on each subject area test as the criterion for granting physician licenses to successful candidates. The 60%-40% criterion seems reasonable to laypersons without psychometric or measurement knowledge, but it may cause several severe problems from a psychometrician's perspective. This paper points out several problematic cases that can arise from using the 60%-40% criterion and provides several psychometric alternatives that could overcome these problems. A fairly new approach, the Bookmark standard setting method, is introduced and explained in detail as an example. The paper concludes with five considerations for the NHPLEB in adopting a psychometric standard setting approach to set a cut score for a licensure test such as the medical licensing examination.
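The 60%-40% rule described above is a conjunctive decision rule: a candidate must clear both the overall threshold and every subject-area threshold. A minimal sketch (the function name, score layout, and example numbers are illustrative, not from the paper) shows one of the problematic cases the abstract alludes to, in which a candidate with a high overall score still fails on a single weak subject:

```python
def passes_kmle(subject_scores, subject_maxima, overall_cut=0.60, subject_cut=0.40):
    """Apply the 60%-40% conjunctive rule: pass only if the overall
    percent-correct is >= overall_cut AND every subject-area
    percent-correct is >= subject_cut."""
    overall = sum(subject_scores) / sum(subject_maxima)
    per_subject_ok = all(s / m >= subject_cut
                         for s, m in zip(subject_scores, subject_maxima))
    return overall >= overall_cut and per_subject_ok

# ~78% overall, but one subject at 33% (< 40%) -> fail:
print(passes_kmle([90, 80, 10], [100, 100, 30]))   # False
# ~63% overall, all subjects at or above 40% -> pass:
print(passes_kmle([70, 60, 15], [100, 100, 30]))   # True
```

Because the rule is purely norm-free percent-correct, a candidate one raw point below a subject cutoff fails regardless of measurement error near the boundary, which is one motivation for the psychometric alternatives the paper proposes.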

Author(s):  
Duck Sun Ahn ◽  
Sowon Ahn

After briefly reviewing theories of standard setting, we analyzed the problems with the current cut scores. We then reported the results of a needs assessment on standard setting among medical educators and psychometricians. Analyses of the standard-setting methods of developed countries were reported as well. Based on these findings, we suggested the Bookmark and modified Angoff methods as alternative standard-setting methods. Possible problems and challenges in applying these methods to the National Medical Licensing Examination were discussed.


2012 ◽  
Vol 35 (2) ◽  
pp. 173-173 ◽  
Author(s):  
Keh-Min Liu ◽  
Tsuen-Chiuan Tsai ◽  
Shih-Li Tsai

2020 ◽  
Vol 34 (05) ◽  
pp. 8822-8829
Author(s):  
Sheng Shen ◽  
Yaliang Li ◽  
Nan Du ◽  
Xian Wu ◽  
Yusheng Xie ◽  
...  

Question answering (QA) has achieved promising progress recently. However, answering a question in real-world scenarios like the medical domain is still challenging, due to the requirement of external knowledge and the insufficient quantity of high-quality training data. In light of these challenges, we study the task of generating medical QA pairs in this paper. With the insight that each medical question can be considered as a sample from the latent distribution of questions given answers, we propose an automated medical QA pair generation framework, consisting of an unsupervised key phrase detector that explores unstructured material for validity, and a generator that involves a multi-pass decoder to integrate structural knowledge for diversity. A series of experiments have been conducted on a real-world dataset collected from the National Medical Licensing Examination of China. Both automatic evaluation and human annotation demonstrate the effectiveness of the proposed method. Further investigation shows that, by incorporating the generated QA pairs for training, significant improvement in accuracy can be achieved for the examination QA system.


Author(s):  
Janghee Park ◽  
Mi Kyoung Yim ◽  
Na Jin Kim ◽  
Duck Sun Ahn ◽  
Young-Min Kim

Purpose: The Korea Medical Licensing Exam (KMLE) typically contains a large number of items. The purpose of this study was to investigate whether there is a difference in the cut score between evaluating all items of the exam and evaluating only some items when conducting standard setting.

Methods: We divided the item sets that appeared on the 3 most recent KMLEs into 4 subsets per year of 25% each, based on their item content categories, discrimination indices, and difficulty indices. The entire panel of 15 members assessed all items (360 items, 100%) of the year 2017. In split-half set 1, each item set contained 184 items (51%) of the year 2018, and each set in split-half set 2 contained 182 items (51%) of the year 2019, constructed using the same method. We used the modified Angoff, modified Ebel, and Hofstee methods in the standard-setting process.

Results: Less than a 1% cut score difference was observed when the same method was used on item subsets containing 25%, 51%, or 100% of the entire set. When rating fewer items, higher rater reliability was observed.

Conclusion: When the entire item set was divided into equivalent subsets, assessing the exam using a portion of the item set (90 out of 360 items) yielded cut scores similar to those derived using the entire item set. There was also a higher correlation between panelists' individual assessments and the overall assessments.
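The modified Angoff procedure used above rests on a simple computation: each panelist judges, for each item, the probability that a minimally competent candidate would answer it correctly, and the cut score is the mean over panelists of each panelist's summed ratings. A minimal sketch (the function name, rating layout, and numbers are illustrative, not taken from the study):

```python
def angoff_cut_score(ratings):
    """Compute a modified Angoff cut score.

    ratings[p][i] is panelist p's judged probability that a minimally
    competent candidate answers item i correctly.  Each panelist's
    ratings are summed to give that panelist's recommended raw cut
    score; the panel cut score is the mean of those sums."""
    per_panelist = [sum(panelist) for panelist in ratings]
    return sum(per_panelist) / len(per_panelist)

# Two panelists rating a 3-item subset:
ratings = [
    [0.6, 0.7, 0.5],   # panelist 1: summed cut score 1.8
    [0.5, 0.8, 0.6],   # panelist 2: summed cut score 1.9
]
print(angoff_cut_score(ratings))   # 1.85
```

Because the cut score is a mean of per-item judgments, a representative subset of items can be expected to yield a similar per-item average, which is consistent with the study's finding that 25% and 51% subsets produced cut scores within 1% of the full-set result.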


Author(s):  
Kun Hwang

The purpose of this study was to examine the opinions of medical students and physician writers regarding the medical humanities as a subject and its inclusion in the medical school curriculum. Furthermore, we addressed whether an assessment test should be added to the National Medical Licensing Examination of Korea (KMLE). A total of 192 medical students at Inha University and 39 physician writers registered with the Korean Association of Physician Essayists and the Korean Association of Physician Poets participated in this study. They were asked to answer a series of questionnaires. Most medical students (59%) and all physician writers (100%) answered that the medical humanities should be included in the medical school curriculum to train good physicians. They thought that the KMLE did not currently include an assessment of the medical humanities (medical students 69%, physician writers 69%). Most physician writers (87%; Likert scale, 4.38 ± 0.78) felt that an assessment of the medical humanities should be included in the KMLE. Half of the medical students (51%; Likert scale, 2.51 ± 1.17) were against including it in the KMLE, which they would have to pass after several years of study. For the preferred field of assessment, medical ethics was the most commonly endorsed subject (medical students 59%, physician writers 39%). The most frequently preferred evaluation method was via an interview (medical students 45%, physician writers 33%). In terms of the assessment of the medical humanities and the addition of this subject to the KMLE, an interview-based evaluation should be developed.


2008 ◽  
Vol 8 (1) ◽  
Author(s):  
Sohail Bajammal ◽  
Rania Zaini ◽  
Wesam Abuznadah ◽  
Mohammad Al-Rukban ◽  
Syed Moyn Aly ◽  
...  
