Analysis of Korean National Medical Licensing Examination Question Items of 1992~1993 on their Levels of Cognitive Domain, Types of Multiple Choice Questions and the Contents of Medical Knowledge Tested

1970 ◽  
Vol 5 (2) ◽  
pp. 11-17
Author(s):  
Kwang Ho Meng ◽  
Bong Kyu Kang ◽  
Se Hoon Lee
2021 ◽  
pp. 016327872110469
Author(s):  
Peter Baldwin ◽  
Janet Mee ◽  
Victoria Yaneva ◽  
Miguel Paniagua ◽  
Jean D’Angelo ◽  
...  

One of the most challenging aspects of writing multiple-choice test questions is identifying plausible incorrect response options—i.e., distractors. To help with this task, a procedure is introduced that can mine existing item banks for potential distractors by considering the similarities between a new item’s stem and answer and the stems and response options for items in the bank. This approach uses natural language processing to measure similarity and requires a substantial pool of items for constructing the generating model. The procedure is demonstrated with data from the United States Medical Licensing Examination (USMLE®). For about half the items in the study, at least one of the top three system-produced candidates matched a human-produced distractor exactly; and for about one quarter of the items, two of the top three candidates matched human-produced distractors. A study was conducted in which a sample of system-produced candidates was shown to 10 experienced item writers. Overall, participants thought about 81% of the candidates were on topic and 56% would help human item writers with the task of writing distractors.
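The similarity-based mining step the abstract describes can be sketched with a plain TF-IDF cosine similarity over item stems. This is a hedged illustration of the general retrieval idea, not the USMLE system's actual model; the toy bank, the `candidate_distractors` name, and the whitespace tokenization are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build simple TF-IDF vectors over a small corpus of texts."""
    tokenized = [t.lower().split() for t in texts]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(texts)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        # Smoothed IDF keeps every weight positive.
        vecs.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def candidate_distractors(new_stem, bank, top_k=3):
    """Rank response options drawn from bank items whose stems resemble
    the new item's stem.

    bank: list of (stem, options) pairs; options are that item's choices.
    """
    texts = [new_stem] + [stem for stem, _ in bank]
    vecs = tfidf_vectors(texts)
    scored = []
    for vec, (stem, options) in zip(vecs[1:], bank):
        sim = cosine(vecs[0], vec)
        for opt in options:
            scored.append((sim, opt))
    scored.sort(key=lambda p: -p[0])
    # Keep the first occurrence of each option, in score order.
    seen, out = set(), []
    for sim, opt in scored:
        if opt not in seen:
            seen.add(opt)
            out.append(opt)
        if len(out) == top_k:
            break
    return out
```

In practice a production system would use stronger similarity measures and also compare the new item's answer against the bank's options, as the abstract notes; the sketch only shows the stem-to-stem retrieval skeleton.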


2012 ◽  
Vol 35 (2) ◽  
pp. 173-173 ◽  
Author(s):  
Keh-Min Liu ◽  
Tsuen-Chiuan Tsai ◽  
Shih-Li Tsai

2020 ◽  
Vol 34 (05) ◽  
pp. 8822-8829
Author(s):  
Sheng Shen ◽  
Yaliang Li ◽  
Nan Du ◽  
Xian Wu ◽  
Yusheng Xie ◽  
...  

Question answering (QA) has achieved promising progress recently. However, answering a question in real-world scenarios like the medical domain is still challenging, due to the requirement of external knowledge and the insufficient quantity of high-quality training data. In light of these challenges, we study the task of generating medical QA pairs in this paper. With the insight that each medical question can be considered as a sample from the latent distribution of questions given answers, we propose an automated medical QA pair generation framework, consisting of an unsupervised key phrase detector that explores unstructured material for validity, and a generator that involves a multi-pass decoder to integrate structural knowledge for diversity. A series of experiments were conducted on a real-world dataset collected from the National Medical Licensing Examination of China. Both automatic evaluation and human annotation demonstrate the effectiveness of the proposed method. Further investigation shows that, by incorporating the generated QA pairs for training, significant improvement in terms of accuracy can be achieved for the examination QA system.
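The unsupervised key phrase detection component can be illustrated with a RAKE-style degree/frequency scorer over unstructured text. This is a generic sketch of unsupervised key phrase extraction, not the authors' detector; the stopword list and scoring scheme are assumptions for illustration.

```python
from collections import defaultdict

# Tiny illustrative stopword list; a real system would use a full one.
STOPWORDS = {"the", "a", "an", "of", "in", "is", "and", "to", "with", "for", "on"}

def rake_keyphrases(text, top_k=3):
    """RAKE-style scoring: split text at stopwords into candidate phrases,
    score each word by co-occurrence degree / frequency, and score each
    phrase by the sum of its word scores."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS or not w:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    freq = defaultdict(int)
    degree = defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # degree counts co-occurring words
    scores = {w: degree[w] / freq[w] for w in freq}
    ranked = sorted(phrases, key=lambda p: -sum(scores[w] for w in p))
    return [" ".join(p) for p in ranked[:top_k]]
```

Longer multi-word phrases naturally score higher under this scheme, which is why it tends to surface clinical terms ("type 2 diabetes") over isolated words.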


2020 ◽  
Vol 19 (1) ◽  
Author(s):  
Surajit Kundu ◽  
Jaideo M Ughade ◽  
Anil R Sherke ◽  
Yogita Kanwar ◽  
Samta Tiwari ◽  
...  

Background: Multiple-choice questions (MCQs) are the most widely accepted tool for evaluating comprehension, knowledge, and application among medical students. Single-best-response MCQs (items) can assess higher orders of cognition. It is essential to develop valid and reliable MCQs, as flawed items interfere with unbiased assessment. The present paper discusses the art of framing well-structured items, drawing on the provided references, and proposes a practice for committed medical educators to improve the skill of writing quality MCQs through enhanced Faculty Development Programs (FDPs). Objectives: The objective of the study is also to test the quality of MCQs by item analysis. Methods: In this study, 100 MCQs from set I or set II were distributed to 200 MBBS students of Late Shri Lakhiram Agrawal Memorial Govt. Medical College Raigarh (CG) for item analysis. Sets I and II consisted of MCQs written by 60 medical faculty before and after the FDP, respectively. Each MCQ had a single stem with three incorrect options and one correct answer. The data were entered in Microsoft Excel 2016 for analysis. The difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE) were the item-analysis parameters used to evaluate the impact of adhering to the guidelines for framing MCQs. Results: The mean difficulty index, discrimination index, and distractor efficiency were 56.54%, 0.26, and 89.93%, respectively. Among the 100 items, 14 were difficult (DIF I < 30%), 70 were of moderate difficulty, and 16 were easy (DIF I > 60%). Ten items had very good DI (0.40), 32 had recommended values (0.30 - 0.39), and 25 were acceptable with changes (0.20 - 0.29). Of the 100 MCQs, 27 had a DE of 66.66% and 11 had a DE of 33.33%.
Conclusions: In this study, training increased the number of higher-cognitive-domain MCQs, decreased recurrent-type MCQs, and reduced MCQs with item-writing flaws, yielding statistically significant improvements. Nine MCQs satisfied all the criteria of item analysis.
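The three item-analysis parameters used in the study follow standard formulas: the difficulty index is the percentage of examinees answering correctly, the discrimination index contrasts high- and low-scoring groups, and distractor efficiency is the share of distractors that function. The sketch below assumes the conventional 27% upper/lower grouping and the 5% functional-distractor threshold, which are standard choices, not details taken from the paper.

```python
def item_analysis(responses, correct, n_options=4, group_frac=0.27):
    """Compute difficulty index, discrimination index, and distractor
    efficiency for one item.

    responses: list of (total_test_score, chosen_option) per examinee.
    correct: the correct option label.
    """
    n = len(responses)
    # Difficulty index: percentage of examinees answering correctly.
    dif_i = 100.0 * sum(1 for _, c in responses if c == correct) / n

    # Discrimination index: upper-group minus lower-group correct rate,
    # using the top and bottom 27% of examinees by total score.
    ranked = sorted(responses, key=lambda r: -r[0])
    g = max(1, round(group_frac * n))
    upper = sum(1 for _, c in ranked[:g] if c == correct)
    lower = sum(1 for _, c in ranked[-g:] if c == correct)
    di = (upper - lower) / g

    # Distractor efficiency: share of distractors chosen by >= 5% of examinees.
    n_distractors = n_options - 1
    counts = {}
    for _, c in responses:
        if c != correct:
            counts[c] = counts.get(c, 0) + 1
    functional = sum(1 for v in counts.values() if v / n >= 0.05)
    de = 100.0 * functional / n_distractors
    return dif_i, di, de
```

With one non-functional distractor out of three, DE is 66.66%, and with two non-functional, 33.33% — matching the DE values reported in the Results.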


Author(s):  
Guemin Lee

The National Health Personnel Licensing Examination Board (hereafter NHPLEB) has used 60% correct responses on the overall test and 40% correct responses on each subject-area test as the criterion for awarding physician licenses to successful candidates. The 60%-40% criterion seems reasonable to laypersons without psychometric or measurement training, but it can cause several severe problems from a psychometrician's perspective. This paper points out several problematic cases that can arise from using the 60%-40% criterion and provides several psychometric alternatives that could overcome these problems. A fairly new approach, the Bookmark standard setting method, is introduced and explained in detail as an example. The paper concludes with five considerations for when the NHPLEB decides to adopt a psychometric standard-setting approach to set a cut score for a licensure test such as the medical licensing examination.
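The core of the Bookmark method can be sketched as follows, assuming items are ordered by a difficulty location (e.g., an IRT-based location) and each panelist places a bookmark at the first item a minimally competent candidate would not be expected to master. Averaging the cut scores implied by each panelist's bookmark is a simplification adopted here for illustration; the sketch omits the response-probability calibration and panel-discussion rounds of the full procedure.

```python
def bookmark_cut_score(item_locations, bookmark_index):
    """Cut score implied by one panelist's bookmark.

    item_locations: item difficulty locations, easiest to hardest.
    bookmark_index: index of the first item the panelist judges a minimally
    competent candidate would NOT master; the implied cut score is the
    location of the last mastered item.
    """
    locations = sorted(item_locations)
    if bookmark_index == 0:
        return locations[0]
    return locations[bookmark_index - 1]

def panel_cut_score(item_locations, bookmarks):
    """Average the cut scores implied by each panelist's bookmark."""
    cuts = [bookmark_cut_score(item_locations, b) for b in bookmarks]
    return sum(cuts) / len(cuts)
```

Unlike the fixed 60%-40% rule, the resulting cut score moves with the difficulty of the assembled test form, which is the property the paper's alternatives aim for.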


2001 ◽  
Vol 56 (3) ◽  
pp. 69-74 ◽  
Author(s):  
Maurício de Maio ◽  
Marcus Castro Ferreira

PURPOSE: The Internet expands the range and flexibility of teaching options and enhances the ability to process the ever-increasing volume of medical knowledge. The aim of this study is to describe and discuss our experience with transforming a traditional medical training course into an Internet-based course. METHOD: Sixty-nine students were enrolled for a one-month course. They answered pre- and post-course questionnaires and took a multiple-choice test to evaluate the acquired knowledge. RESULTS: Students reported that the primary value for them of this Internet-based course was that they could choose the time of their class attendance (67%). The vast majority (94%) had a private computer and were used to visiting the Internet (75%) before the course. During the course, visits were mainly during the weekends (35%) and on the last week before the test (29%). Thirty-one percent reported that they could learn by reading only from the computer screen, without the necessity of printed material. Students were satisfied with this teaching method as evidenced by the 89% who reported enjoying the experience and the 88% who said they would enroll for another course via the Internet. The most positive aspect was freedom of scheduling, and the most negative was the lack of personal contact with the teacher. From the 80 multiple-choice questions, the mean of correct answers was 45.5, and of incorrect, 34.5. CONCLUSIONS: This study demonstrates that students can successfully learn with distance learning. It provides useful information for developing other Internet-based courses. The importance of this new tool for education in a large country like Brazil seems clear.


Author(s):  
Kun Hwang

The purpose of this study was to examine the opinions of medical students and physician writers regarding the medical humanities as a subject and its inclusion in the medical school curriculum. Furthermore, we addressed whether an assessment test should be added to the National Medical Licensing Examination of Korea (KMLE). A total of 192 medical students at Inha University and 39 physician writers registered with the Korean Association of Physician Essayists and the Korean Association of Physician Poets participated in this study. They were asked to answer a series of questionnaires. Most medical students (59%) and all physician writers (100%) answered that the medical humanities should be included in the medical school curriculum to train good physicians. They thought that the KMLE did not currently include an assessment of the medical humanities (medical students 69%, physician writers 69%). Most physician writers (87%; Likert scale, 4.38 ± 0.78) felt that an assessment of the medical humanities should be included in the KMLE. Half of the medical students (51%; Likert scale, 2.51 ± 1.17) were against including it in the KMLE, which they would have to pass after several years of study. For the preferred field of assessment, medical ethics was the most commonly endorsed subject (medical students 59%, physician writers 39%). The most frequently preferred evaluation method was via an interview (medical students 45%, physician writers 33%). In terms of the assessment of the medical humanities and the addition of this subject to the KMLE, an interview-based evaluation should be developed.


Author(s):  
Myoung Soo Kim ◽  
Chun-Bae Kim ◽  
Byung Ho Cha ◽  
Ki Chang Park ◽  
Sang Ok Kwon ◽  
...  

The Korean Medical Licensing Examination (KMLE) 2002 focused on evaluating integrative medical knowledge such as primary clinical care and problem-solving competence. We analyzed the correlations among year-wise student academic scores (grade scores), trial examination scores, and the KMLE score using correlation analysis and multiple regression. Four trial examinations, composed according to the principles of the KMLE, were administered in 2001. Trial examination scores were significantly correlated with student grade scores (p < 0.05). The KMLE score also correlated with the student grade score and the trial examination score. In the correlation analysis, the senior-year grade score had a higher correlation coefficient than the junior-year grade score. In multiple regression, the senior-year grade score and the mean trial examination score were significant variables affecting the KMLE score. Based on this result, the regression formula [KMLE score] = 110.596 + 21.449 × [6th-year grade score] + 0.577 × [mean trial examination score] was established (R² = 0.764, p < 0.001). Our results show that the trial examination is a useful tool for the final assessment of medical achievement, and that trial examination results can serve as reference data for guiding students in preparing for the KMLE.
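The fitted regression can be applied directly. The sketch below simply evaluates the published formula; the input scales for the grade score and trial examination score are whatever scales the study used, which the abstract does not specify.

```python
def predict_kmle_score(grade_6th, trial_mean):
    """Predicted KMLE score from the study's fitted regression
    (R^2 = 0.764): 110.596 + 21.449 * grade + 0.577 * trial mean.

    grade_6th:  6th-year (senior) grade score, on the study's scale.
    trial_mean: mean of the four trial examination scores.
    """
    return 110.596 + 21.449 * grade_6th + 0.577 * trial_mean
```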

