MSK30: a validated tool to assess clinical musculoskeletal knowledge

2019 ◽  
Vol 5 (1) ◽  
pp. e000495
Author(s):  
Danielle L Cummings ◽  
Matthew Smith ◽  
Brian Merrigan ◽  
Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees' MSK knowledge, and other MSK-specific exams, such as Freedman and Bernstein's (1998) exam, have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement. Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In a large-scale pilot test (167 post-clerkship medical students), the average score was 74% (range 53%-90%, SD 7.8%). In addition, detailed explanations and references were created for each question to allow an individual or group to review the material and enhance learning. Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in current curricula and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use exam results to guide curriculum development and individual education.
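
For readers unfamiliar with the modified Delphi step described above, the sketch below shows one plausible way SME importance ratings could be aggregated to select questions. The rating scale, thresholds, and function names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a modified-Delphi aggregation step: each
# subject-matter expert rates each candidate question's clinical
# importance on a 1-5 scale, and questions meeting a consensus
# threshold are retained. Scale and cut-offs are assumptions.
from statistics import mean

def select_questions(ratings: dict, min_mean: float = 4.0,
                     min_agreement: float = 0.8) -> list:
    """Keep questions whose mean rating is high and where a large
    share of experts rated them 4 or above (the consensus rule)."""
    selected = []
    for question, scores in ratings.items():
        agreement = sum(s >= 4 for s in scores) / len(scores)
        if mean(scores) >= min_mean and agreement >= min_agreement:
            selected.append(question)
    return selected

# Example: three SMEs rate two candidate items.
ratings = {"Q1 knee exam": [5, 4, 5], "Q2 rare tumour": [2, 3, 4]}
print(select_questions(ratings))  # -> ['Q1 knee exam']
```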

Author(s):  
Catherine A Ulman ◽  
Stephen Bruce Binder ◽  
Nicole J. Borges

This study assessed whether a current medical school curriculum is adequately preparing medical students to diagnose and treat common dermatologic conditions. A 15-item anonymous multiple choice quiz covering fifteen diseases was developed to test students’ ability to diagnose and treat common dermatologic conditions. The quiz also contained five items that assessed students’ confidence in their ability to diagnose common dermatologic conditions, their perception of whether they were receiving adequate training in dermatology, and their preferences for additional training in dermatology. The survey was performed in 2014, and was completed by 85 students (79.4%). Many students (87.6%) felt that they received inadequate training in dermatology during medical school. On average, students scored 46.6% on the 15-item quiz. Proficiency at the medical school where the study was performed is considered an overall score of greater than or equal to 70.0%. Students received an average score of 49.9% on the diagnostic items and an average score of 43.2% on the treatment items. The findings of this study suggest that United States medical schools should consider testing their students and assessing whether they are being adequately trained in dermatology. Then schools can decide if they need to re-evaluate the timing and delivery of their current dermatology curriculum, or whether additional curriculum hours or clinical rotations should be assigned for dermatologic training.


Author(s):  
Chandrika Rao ◽  
HL Kishan Prasad ◽  
K Sajitha ◽  
Harish Permi ◽  
Jayaprakash Shetty


Author(s):  
Umayya Musharrafieh ◽  
Khalil Ashkar ◽  
Dima Dandashi ◽  
Maya Romani ◽  
Rana Houry ◽  
...  

Introduction: Objective Structured Clinical Examination (OSCE) is considered a useful method of assessing clinical skills besides Multiple Choice Questions (MCQs) and clinical evaluations. Aim: To explore medical students' acceptance of this assessment tool in medical education and to determine whether the assessment results of MCQs and faculty clinical evaluations agree with the respective OSCE scores of 4th year medical students (Med IV). Methods: The performance of a total of 223 Med IV students, distributed across the academic years 2006-2007, 2007-2008, and 2008-2009, in OSCE, MCQs and faculty evaluations was compared. Out of the total, 93 randomly selected students were asked to fill out a questionnaire about their attitudes towards and acceptance of this tool. The OSCE was conducted every two months for two different groups of medical students who had completed their family medicine rotation, while the faculty evaluation, based on observation by assessors, was submitted on a monthly basis upon completion of the rotation. The final exam for the family medicine clerkship was held at the end of the 4th academic year and consisted of MCQs. Results: Students highly commended the OSCE as an evaluation tool, as it provides a truer measure of required clinical and communication skills than MCQs and faculty evaluations. The study showed a significant positive correlation between the OSCE scores and the clinical evaluation scores, while there was no association between the OSCE scores and the final exam scores. Conclusion: Students showed high appreciation and acceptance of this type of clinical skills testing. Although OSCEs make them more stressed than other assessment modalities, the OSCE remained the preferred one.
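
The correlation analysis reported above can be reproduced in outline with a standard Pearson test. The sketch below uses invented scores, since the study's data and exact statistical procedure are not given in the abstract.

```python
# Illustrative sketch (not the authors' analysis code): a Pearson
# correlation such as the one reported between OSCE scores and
# faculty clinical-evaluation scores. All values are made up.
from scipy.stats import pearsonr

osce = [72, 85, 64, 90, 78, 69]           # hypothetical OSCE scores
clinical_eval = [70, 88, 60, 92, 75, 71]  # hypothetical faculty evaluations

r, p = pearsonr(osce, clinical_eval)
# A large positive r with small p would match the reported finding.
print(f"r = {r:.2f}, p = {p:.3f}")
```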


2021 ◽  
pp. 9-10
Author(s):  
Bhoomika R. Chauhan ◽  
Jayesh Vaza ◽  
Girish R. Chauhan ◽  
Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and for certification. Item analysis is the process of collecting, summarizing and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical student assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, and the correlation between the two was examined statistically. The discriminating power of items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a difficulty index in the range of 30%-70% are good discriminators.
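
The two statistics the study relies on are standard: the difficulty index is the proportion of examinees answering an item correctly, and the discrimination index contrasts correct-answer rates between the top and bottom scoring groups. A minimal sketch follows, assuming the common upper/lower 27% split (the abstract does not state which variant was used).

```python
# Minimal item-analysis sketch: per-item difficulty index and
# discrimination index from a 0/1 response matrix. The 27% split
# for the upper/lower groups is a common convention, assumed here.
import numpy as np

def item_analysis(responses: np.ndarray, group_frac: float = 0.27):
    """responses: students x items matrix of 0/1 (incorrect/correct)."""
    totals = responses.sum(axis=1)             # each student's total score
    order = np.argsort(totals)                 # students from lowest to highest
    k = max(1, int(group_frac * len(totals)))  # size of upper/lower groups
    lower, upper = responses[order[:k]], responses[order[-k:]]

    difficulty = responses.mean(axis=0)        # per-item proportion correct
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

# Example: 6 students x 3 items.
r = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1],
              [0, 0, 0], [1, 1, 1], [0, 1, 0]])
p, d = item_analysis(r)
print(p.round(2), d.round(2))
```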


2021 ◽  
Author(s):  
Joachim Neumann ◽  
Stephanie Simmrodt ◽  
Beatrice Bader ◽  
Bertram Opitz ◽  
Ulrich Gergs

BACKGROUND There remain doubts about whether single choice (SC) multiple choice answer formats offer the best option to encourage deep learning or whether SC formats simply lead to superficial learning or cramming. Moreover, cueing is always a drawback of the SC format. Another way to assess knowledge is true multiple-choice questions in which one or more answers can be true and the student does not know how many true answers to anticipate (K′ or Kprime question format). OBJECTIVE Here, we compared single-choice answers (one true answer, SC) with Kprime answers (one to four true answers out of four, Kprime) for the very same learning objectives in a study of pharmacology in medical students. METHODS Two groups of medical students were randomly assigned to a formative online test: group A was first given 15 SC questions (#1-15) followed by 15 different Kprime questions (#16-30); the opposite design was used for group B. RESULTS The mean number of right answers was higher for SC than for Kprime questions in group A (10.02 vs. 8.63, p < 0.05) and group B (9.98 vs. 6.66, p < 0.05). The number of right answers was higher for nine SC questions compared to Kprime in group A and for eight questions in group B (pairwise t-test, p < 0.05). Thus, SC questions are easier to answer than the same learning objectives in pharmacology given as Kprime questions. One year later, four groups were formed from the previous two groups and were again given the same online test but in a different order: the main result was that all students fared better in the second test than in the initial test; however, the gain in points was highest if mode B was given initially. CONCLUSIONS Kprime is less popular with students because it is more demanding, but it could improve retention of the subject matter and thus might be used more often by medical educators.
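
To make the contrast between the two formats concrete, the sketch below scores an SC item against one plausible scoring rule for a Kprime item. The abstract does not specify the study's scoring scheme, so all-or-nothing Kprime credit is assumed here; some implementations instead award partial credit for three of four correct marks.

```python
# Hedged sketch of SC vs. Kprime scoring; the all-or-nothing Kprime
# rule below is an assumption, not the paper's documented scheme.

def score_sc(chosen: int, key: int) -> int:
    """Single choice: 1 point if the one chosen option is the keyed one."""
    return int(chosen == key)

def score_kprime(marks: list, key: list) -> int:
    """Kprime: each of four statements is judged true/false; 1 point
    only if every judgement matches the key. Cueing by answer count
    is impossible because 1-4 options may be true."""
    return int(marks == key)

print(score_sc(2, 2))                                    # -> 1
print(score_kprime([True, False, True, True],
                   [True, False, True, True]))           # -> 1
print(score_kprime([True, False, False, True],
                   [True, False, True, True]))           # -> 0
```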


2012 ◽  
Vol 11 (1) ◽  
pp. 47-57 ◽  
Author(s):  
Joyce M. Parker ◽  
Charles W. Anderson ◽  
Merle Heidemann ◽  
John Merrill ◽  
Brett Merritt ◽  
...  

We present a diagnostic question cluster (DQC) that assesses undergraduates' thinking about photosynthesis. This assessment tool is not designed to identify individual misconceptions. Rather, it is focused on students' abilities to apply basic concepts about photosynthesis by reasoning with a coordinated set of practices based on a few scientific principles: conservation of matter, conservation of energy, and the hierarchical nature of biological systems. Data on students' responses to the cluster items and uses of some of the questions in multiple-choice, multiple-true/false, and essay formats are compared. A cross-over study indicates that the multiple-true/false format shows promise as a machine-gradable format that identifies students who have a mixture of accurate and inaccurate ideas. In addition, interviews with students about their choices on three multiple-choice questions reveal the fragility of students' understanding. Collectively, the data show that many undergraduates lack both a basic understanding of the role of photosynthesis in plant metabolism and the ability to reason with scientific principles when learning new content. Implications for instruction are discussed.


2015 ◽  
Vol 39 (4) ◽  
pp. 327-334 ◽  
Author(s):  
Brandon M. Franklin ◽  
Lin Xiang ◽  
Jason A. Collett ◽  
Megan K. Rhoads ◽  
Jeffrey L. Osborn

Student populations are diverse such that different types of learners struggle with traditional didactic instruction. Problem-based learning has existed for several decades, but there is still controversy regarding the optimal mode of instruction to ensure success at all levels of students' past achievement. The present study addressed this problem by dividing students into the following three instructional groups for an upper-level course in animal physiology: traditional lecture-style instruction (LI), guided problem-based instruction (GPBI), and open problem-based instruction (OPBI). Student performance was measured by three summative assessments consisting of 50% multiple-choice questions and 50% short-answer questions as well as a final overall course assessment. The present study also examined how students of different academic achievement histories performed under each instructional method. When student achievement levels were not considered, the effects of instructional methods on student outcomes were modest; OPBI students performed moderately better on short-answer exam questions than both LI and GPBI groups. High-achieving students showed no difference in performance for any of the instructional methods on any metric examined. In students with low-achieving academic histories, OPBI students largely outperformed LI students on all metrics (short-answer exam: P < 0.05, d = 1.865; multiple-choice question exam: P < 0.05, d = 1.166; and final score: P < 0.05, d = 1.265). They also outperformed GPBI students on short-answer exam questions (P < 0.05, d = 1.109) but not multiple-choice exam questions (P = 0.071, d = 0.716) or final course outcome (P = 0.328, d = 0.513). These findings strongly suggest that typically low-achieving students perform at a higher level under OPBI as long as the proper support systems (formative assessment and scaffolding) are provided to encourage student success.
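
The effect sizes quoted above are Cohen's d values. As a point of reference, the sketch below computes d with the pooled-standard-deviation formula; the variant the authors used is not stated in the abstract, and the group scores are invented.

```python
# Illustrative computation of Cohen's d, the effect size reported
# above. Pooled-SD formula assumed; example data are made up.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) +
                         (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

opbi = np.array([88, 82, 91, 79, 85])  # hypothetical OPBI exam scores
li = np.array([70, 75, 68, 72, 74])    # hypothetical LI exam scores
print(round(cohens_d(opbi, li), 2))
```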

