Serious Games in Surgical Medical Education: A Virtual Emergency Department as a Tool for Teaching Clinical Reasoning to Medical Students (Preprint)

2018
Author(s):
Seung-Hun Chon
Ferdinand Timmermann
Thomas Dratsch
Nikolai Schuelper
Patrick Plum
...  

BACKGROUND: Serious games enable the simulation of daily working practices and constitute a potential tool for teaching both declarative and procedural knowledge. The availability of educational serious games that offer a high-fidelity, three-dimensional environment combined with a sound medical background is limited, and most published studies have assessed student satisfaction rather than learning outcomes as a function of game use.

OBJECTIVE: This study aimed to test the effect of a serious game simulating an emergency department ("EMERGE") on students' declarative and procedural knowledge, as well as their satisfaction with the serious game.

METHODS: This nonrandomized trial was performed at the Department of General, Visceral and Cancer Surgery at University Hospital Cologne, Germany. A total of 140 medical students in the clinical part of their training (5th to 12th semester) self-selected to participate in this experimental study. Declarative knowledge (measured with 20 multiple choice questions) and procedural knowledge (measured with written questions derived from an Objective Structured Clinical Examination station) were assessed before and after working with EMERGE. Students' impression of the effectiveness and applicability of EMERGE was measured on a 6-point Likert scale.

RESULTS: A pretest-posttest comparison yielded a significant increase in declarative knowledge: the percentage of correct answers to multiple choice questions increased from before (mean 60.4, SD 16.6) to after (mean 76.0, SD 11.6) playing EMERGE (P<.001). The effect on declarative knowledge was larger in students in lower semesters than in students in higher semesters (P<.001). Additionally, students' overall impression of EMERGE was positive.

CONCLUSIONS: Students who self-select to use a serious game in addition to formal teaching gain declarative and procedural knowledge.
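The pretest-posttest comparison above rests on a paired design, with each student serving as their own control. The sketch below shows the paired t statistic such a comparison typically uses; the scores are purely illustrative, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores of the same examinees:
    t = mean(d) / (sd(d) / sqrt(n)), where d[i] = post[i] - pre[i]."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Illustrative percent-correct scores for six students (not the study's data):
pre = [55, 60, 48, 70, 62, 58]
post = [70, 78, 65, 80, 76, 74]
t = paired_t(pre, post)  # a large positive t indicates a consistent gain
```

The resulting t is compared against a t distribution with n-1 degrees of freedom to obtain the P value reported in the abstract.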

Author(s):  
Ting Zhou
Christian S. Loh

Studies suggest that serious games are useful tools for disaster preparedness training, but few have examined whether instructional factors differentially affect learning outcomes. This study investigated the effects of players' gaming frequency, prior knowledge, and in-game guidance received on their declarative and procedural knowledge in a disaster preparedness serious game. Findings showed that gaming frequency was not a significant predictor of learning outcomes. By contrast, players' prior knowledge, the type of in-game guidance received, and the interaction between the two were all significant predictors of the acquisition of declarative knowledge and the development of procedural knowledge. The interaction term revealed a moderator effect, indicating that the relationship between a player's prior knowledge and learning outcomes was affected by the type of in-game guidance (full or partial) received.


2021
pp. 9-10
Author(s):
Bhoomika R. Chauhan
Jayesh Vaza
Girish R. Chauhan
Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical students' assessment. A total of 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, and the two were analysed statistically to identify correlation. The discriminating power of items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a difficulty index in the range of 30%-70% are good discriminators.
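The two indices discussed above have standard definitions in classical test theory. The abstract does not spell out the exact computation used, so the sketch below shows the conventional formulas, with the 27% upper/lower split as an assumption:

```python
# Classical item analysis. The function names and the 27% group split are
# conventional choices, not taken from the study itself.

def difficulty_index(responses):
    """Difficulty index p: the fraction of examinees answering the item
    correctly (0..1). responses is a list of 0/1 scores for one item."""
    return sum(responses) / len(responses)

def discrimination_index(item_scores, total_scores, group_frac=0.27):
    """Discrimination index D = (correct in upper group - correct in lower
    group) / group size. Examinees are ranked by total test score; the top
    and bottom group_frac (conventionally 27%) form the comparison groups."""
    n = len(total_scores)
    k = max(1, int(n * group_frac))
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    return (sum(item_scores[i] for i in upper)
            - sum(item_scores[i] for i in lower)) / k
```

An item answered correctly mostly by high scorers yields D near 1 (a good discriminator), which is consistent with the study's finding that mid-range difficulty (30%-70%) discriminates best.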


2021
Author(s):
Joachim Neumann
Stephanie Simmrodt
Beatrice Bader
Bertram Opitz
Ulrich Gergs

BACKGROUND: There remain doubts about whether single-choice (SC) multiple choice formats offer the best option to encourage deep learning, or whether SC formats simply lead to superficial learning or cramming. Moreover, cueing is always a drawback of the SC format. Another way to assess knowledge is true multiple-choice questions in which one or more answers can be true and the student does not know how many true answers to anticipate (the K′ or Kprime question format).

OBJECTIVE: Here, we compared single-choice answers (one true answer, SC) with Kprime answers (one to four true answers out of four, Kprime) for the very same learning objectives in a study of pharmacology in medical students.

METHODS: Two groups of medical students were randomly assigned to a formative online test: group A was first given 15 SC questions (#1-15) followed by 15 different Kprime questions (#16-30); the opposite design was used for group B.

RESULTS: The mean number of right answers was higher for SC than for Kprime questions in group A (10.02 vs. 8.63, p < 0.05) and group B (9.98 vs. 6.66, p < 0.05). The number of right answers was higher for nine SC questions compared to Kprime in group A and for eight questions in group B (pairwise t-test, p < 0.05). Thus, SC questions are easier to answer than the same learning objectives in pharmacology posed as Kprime questions. One year later, four groups were formed from the previous two groups and were again given the same online test but in a different order: the main result was that all students fared better in the second test than in the initial test; however, the gain in points was highest for students initially given order B.

CONCLUSIONS: Kprime is less popular with students because it is more demanding, but it could improve memory of the subject matter and thus might be used more often by medical educators.


2019
Vol 5 (1)
pp. e000495
Author(s):
Danielle L Cummings
Matthew Smith
Brian Merrigan
Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees' MSK knowledge, and other MSK-specific exams, such as Freedman and Bernstein's (1998) exam, have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement.

Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In the large-scale pilot test (167 post-clerkship medical students), the average score was 74% (range 53%-90%, SD 7.8%). In addition, detailed explanations and references were created for each question to allow an individual or group to review and enhance learning.

Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in the current curriculum and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use the results of the exam to guide curriculum development and individual education.
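The modified Delphi step above winnows a question bank down to items the experts agree are important. The abstract does not give the authors' consensus criteria, so the sketch below is a hypothetical selection rule: keep items with a high mean importance rating and low disagreement among raters.

```python
# Hypothetical sketch of a Delphi-style question-selection step. The 1-5
# rating scale, the mean threshold, and the spread limit are illustrative
# assumptions, not the MSK30 authors' actual criteria.

def select_questions(ratings, min_mean=4.0, max_spread=1):
    """ratings: {question_id: [SME importance ratings on a 1-5 scale]}.
    Keep questions with a high mean rating (>= min_mean) and low
    disagreement (max - min <= max_spread), as one Delphi round might."""
    kept = []
    for qid, r in ratings.items():
        if sum(r) / len(r) >= min_mean and max(r) - min(r) <= max_spread:
            kept.append(qid)
    return kept
```

In a real Delphi process, items that fail the consensus rule are typically revised and re-rated in a further round rather than discarded outright.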


2019
Vol 19 (1)
Author(s):
Bela Turk
Sebastian Ertl
Guoruey Wong
Patricia P. Wadowski
Henriette Löffler-Stastka

Abstract

Background: Case-Based Learning (CBL) has seen widespread implementation in undergraduate education since the early 1920s. Ample data have shown CBL to be an enjoyable and motivating didactic tool, effective in assisting the expansion of declarative and procedural knowledge in academia. Although a plethora of studies apply multiple choice questions (MCQs) in their investigations, few studies measure CBL- or case-based blended learning (CBBL)-mediated changes in students' procedural knowledge in practice, or employ comparison or control groups to isolate causal relationships.

Methods: Utilizing the flexibility of an e-learning platform, a CBBL framework consisting of a) anonymized patient cases, b) case-related textbook material and online e-CBL modules, and c) simulated patient (SP) contact seminars was developed and implemented in multiple medical fields for undergraduate medical education. Additionally, e-CBL alone was implemented in the same format in other fields. E-cases were constructed according to the criteria of Bloom's taxonomy. In this study, Objective Structured Clinical Examination (OSCE) results from 1886 medical students in total were analyzed, stratified into the following groups: medical students in 2013 (n = 619), before CBBL implementation, and after CBBL implementation in 2015 (n = 624) and 2016 (n = 643).

Results: A significant improvement (adjusted p = .002) of the mean OSCE score by 1.02 points was seen between 2013 and 2015 (min = 0, max = 25).

Conclusion: E-case-based learning is an effective tool for improving performance outcomes and may provide a sustainable learning platform for many fields of medicine in the future.


10.2196/13028
2019
Vol 7 (1)
pp. e13028
Author(s):
Seung-Hun Chon
Ferdinand Timmermann
Thomas Dratsch
Nikolai Schuelper
Patrick Plum
...  

2021
Vol 27 (1)
Author(s):
Youness Touissi
Ghita Hjiej
Abderrazak Hajjioui
Azeddine Ibrahimi
Maryam Fourtassi
