Using The Creation Of A Simulated Scenario As An Assessment Tool For Medical Students

Author(s):  
Miles Harrison

Author(s):  
Wajiha Shadab ◽  
Amna Ahmed Noor ◽  
Saira Waqqar ◽  
Gul Muhammad Shaikh

Abstract Objective: This study aimed to assess medical students’ opinions and views on undertaking SLICE as a formative assessment. Methods: This was a qualitative, exploratory study. A purposive sampling technique was used to select final-year medical students who had undertaken a formative assessment through SLICE in their clerkship rotation. A total of 32 students participated in this study. Four focus group discussions (FGDs) were conducted with medical students who had recently completed their clinical clerkship modules for Pediatrics, General Medicine, General Surgery, and Gynecology & Obstetrics. Each recorded FGD was transcribed verbatim. Thematic analysis was conducted manually: themes were identified from the transcribed data, coded, and analyzed. To achieve adequate coding and researcher reliability, investigator triangulation was performed. The initial thematic analysis was performed by the primary investigator; thereafter, two more investigators independently analyzed the data. Before the data were finalized, all three investigators reached consensus on the themes that had emerged, ensuring triangulation of the analyzed data. Results: A four-stage thematic analysis was conducted, in which five major themes and five sub-themes emerged. The main themes were: Purpose, Learning, Timing, Relevance, and Fairness of SLICE. Conclusion: The students generally thought that SLICE was effective in enhancing their clinical skills learning and should be conducted more frequently, with minor adjustments. Continuous...


Author(s):  
Umayya Musharrafieh ◽  
Khalil Ashkar ◽  
Dima Dandashi ◽  
Maya Romani ◽  
Rana Houry ◽  
...  

Introduction: The Objective Structured Clinical Examination (OSCE) is considered a useful method of assessing clinical skills alongside Multiple Choice Questions (MCQs) and clinical evaluations. Aim: To explore medical students’ acceptance of this assessment tool in medical education and to determine whether the assessment results of MCQs and faculty clinical evaluations agree with the respective OSCE scores of 4th-year medical students (Med IV). Methods: The performance of a total of 223 Med IV students across the academic years 2006-2007, 2007-2008, and 2008-2009 in OSCEs, MCQs, and faculty evaluations was compared. Of the total, 93 students were randomly asked to fill in a questionnaire about their attitudes toward and acceptance of this tool. The OSCE was conducted every two months for two different groups of medical students who had completed their family medicine rotation, while faculty evaluation, based on observation by assessors, was submitted on a monthly basis upon completion of the rotation. The final exam for the family medicine clerkship was held at the end of the 4th academic year and consisted of MCQs. Results: Students highly commended the OSCE as an evaluation tool, as it provides a truer measure of required clinical and communication skills than MCQs and faculty evaluation. The study showed a significant positive correlation between the OSCE scores and the clinical evaluation scores, while there was no association between the OSCE scores and the final exam scores. Conclusion: Students showed high appreciation and acceptance of this type of clinical skills testing. Despite the fact that OSCEs made them more stressed than other assessment modalities, the OSCE remained the preferred one.
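The score-agreement analysis in this study rests on correlating two sets of marks per student. As a minimal sketch (the study does not publish its raw data; the scores below are invented for illustration), a Pearson correlation between OSCE and clinical-evaluation scores could be computed like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five students (not taken from the study).
osce     = [62, 71, 75, 80, 88]
clinical = [65, 70, 78, 79, 90]
print(round(pearson_r(osce, clinical), 3))
```

A strongly positive value, as here, would correspond to the significant positive correlation the study reports between OSCE and clinical-evaluation scores; a value near zero would correspond to the absent association with the MCQ final exam.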


2021 ◽  
Vol 15 (8) ◽  
pp. 2235-2239
Author(s):  
Farrukh Sarfraz ◽  
Fahad Sarfraz ◽  
Imran Jawad ◽  
Mohammad Zia-Ul-Miraj ◽  
Rizwan Zafar Ahmad ◽  
...  

Background: Different tools are used to assess the competency of a student. Since its introduction in 1975 by Dr. Harden and his team, the OSCE has made tremendous strides in assessing clinical competencies and has been used successfully to assess the clinical competencies of medical students globally. The OSCE is an assessment tool in which the student is observed performing different tasks at specified stations. The current study examined medical students’ perceptions of the OSCE examination, allowing for constructive criticism and further improvement of the system wherever required. Objective: To explore the views of final-year MBBS students of Azra Naheed Medical College about the OSCE. Material and Methods: Study design: Quantitative, cross-sectional study. Settings: Azra Naheed Medical College, Lahore. Duration: Six months, i.e., 1st July 2020 to 31st December 2020. Data collection procedure: After informed consent and appropriate briefing, the questionnaire was distributed among the final-year medical students of Azra Naheed Medical College. The questionnaire developed by Russell et al was used. Results: Of the 148 students who participated in the study, 66 (45%) were female and 82 (55%) were male. The majority of the students were satisfied with the quality of the exam: 29.7% were aware of the nature of the exam, 52.7% were satisfied that the syllabus taught was what was asked in the exam, and 58.1% were satisfied with the time allocated to each station. The majority, i.e., 60%, considered the OSCE a practical exam that is not biased by gender or ethnicity. More than 50% of the students were satisfied with the standard of the exam. At the same time, more than 50% of the students considered the essay exam the easiest format of assessment; however, the OSCE was considered the fairest form of assessment (73%), and 68.9% perceived that learning is enhanced more by MCQs than by other formats of assessment.
Conclusion: The perception of students about the OSCE as an assessment tool was very encouraging, as it not only provided them the opportunity to identify their weaknesses but also helped them to perform well in the exam, manage time during the exam, and overcome the stress that influenced their results. Key words: OSCE, Objective, Examinations, Clinical skills, qualitative analysis


2018 ◽  
Vol 108 (2) ◽  
pp. 145-150
Author(s):  
James M. Mahoney ◽  
Vassilios Vardaxis ◽  
Noreen Anwar ◽  
Jacob Hagenbucher

Background: This study examined the differences between faculty and trained standardized patient (SP) evaluations of student professionalism during a second-year podiatric medicine standardized simulated patient encounter. Methods: Forty-nine second-year podiatric medicine students were evaluated on their professionalism behavior. Eleven SPs performed an assessment in real time, and one faculty member performed a secondary assessment after observing a videotape of the encounter. Five domains were chosen for evaluation from a validated professionalism assessment tool. Results: Significant differences were identified in the professionalism domains of “build a relationship” (P = .008), “gather information” (P = .001), and “share information” (P = .002), where the faculty scored the students higher than the SP in 24.5%, 18.9%, and 26.5% of the cases, respectively. In addition, the faculty scores were higher than the SP scores in all of the “gather information” subdomains; however, the difference in scores was significant only in the “question appropriately” (P = .001) and “listen and clarify” (P = .003) subdomains. Conclusions: This study showed that professionalism scores for second-year podiatric medical students during a simulated patient encounter varied significantly between faculty and SPs. Further consideration needs to be given to determining the source of these differences.


2019 ◽  
Vol 5 (1) ◽  
pp. e000495
Author(s):  
Danielle L Cummings ◽  
Matthew Smith ◽  
Brian Merrigan ◽  
Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees’ MSK knowledge, and other MSK-specific exams such as Freedman and Bernstein’s (1998) exam have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement. Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In a large-scale pilot test (167 post-clerkship medical students), the average score was 74% (range 53%–90%, SD 7.8%). In addition, detailed explanations and references were created for each question, allowing an individual or group to review the material and enhance learning. Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in current curricula and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use the results of the exam to guide curriculum development and individual education.
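The modified Delphi selection described above amounts to keeping only questions on which the SME panel converges. The abstract does not state the consensus rule, so the following is a hedged sketch with an invented rule (1-9 importance ratings, kept when the median is high and the interquartile range is narrow) and invented question names:

```python
from statistics import median, quantiles

def keep_question(ratings, med_cut=7, iqr_cut=2):
    """Hypothetical Delphi consensus rule: high median importance, low spread."""
    q1, _, q3 = quantiles(ratings, n=4)  # default 'exclusive' method
    return median(ratings) >= med_cut and (q3 - q1) <= iqr_cut

# Invented SME importance ratings (1-9) for two candidate questions.
ratings_by_question = {
    "ottawa_ankle_rules": [8, 9, 7, 8, 9],   # high agreement, high importance
    "rare_bone_tumour":   [3, 8, 5, 2, 9],   # low median, wide disagreement
}
kept = [q for q, r in ratings_by_question.items() if keep_question(r)]
print(kept)
```

Iterating this over the full question bank until 30 items survive would mirror the compilation step the authors describe, though their actual thresholds and rounds may differ.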


CJEM ◽  
2017 ◽  
Vol 19 (S1) ◽  
pp. S72
Author(s):  
B. Balasubramanaiam ◽  
J. Chenkin ◽  
T.G. Snider ◽  
D. Melady ◽  
J.S. Lee

Introduction: Multiple studies since the 1990s demonstrate that ED staff fail to identify delirium in up to 75% of older patients; those patients who are discharged have a 3-fold increased mortality. Methods: We iteratively developed a 14-item interprofessional tool with 4 clinical vignettes to assess comfort with, knowledge of, and ability to identify delirium among medical students, EM residents, staff MDs, and RNs. We conducted a prospective observational study using modified Dillman survey methodology. Surveys were sent on paper to residents and nurses and online to medical students and staff MDs. Results: Our response rate was 68% (38/56) for residents and 80% (16/20) for RNs, but only 37% (13/35) for staff MDs and 13% (139/1036) for medical students. Comfort with identifying delirium increased with level of medical training: 38/139 (27%) of 1st-4th year medical students (MS1-MS4), 25/38 (66%) of 1st-5th year residents (R1-R5), and 12/13 (92%) of staff physicians reported being comfortable (χ2=34.7, df=2, p<0.001). MS1-MS2 were the least comfortable, with only 5/82 (6%) reporting comfort, increasing to 33/57 (58%) among MS3-MS4 (χ2=44.9, df=1, p<0.001). A greater proportion of R4-R5 who had completed a geriatric emergency medicine (Geri-EM) curriculum reported comfort, 11/12 (92%), compared with 14/26 (54%) of R1-R3 (χ2=19.2, df=1, p<0.05). Only 5/16 (31%) of nurses reported being comfortable with identifying delirium. Ability to identify all 4 clinical vignettes correctly was higher among MS3-MS4 than MS1-MS2 (32/57 (56%) vs. 30/82 (37%), χ2=5.2, df=1, p<0.05). There was no difference between respondents at different levels of medical training (62/139 (45%) MS1-MS4, 21/38 (55%) R1-R5, and 6/13 (46%) staff MDs, χ2=1.4, df=2, p=0.52). There was no effect of Geri-EM completion on perfect vignette scores (6/12 (50%) R4-R5 vs. 15/26 (58%) R1-R3, χ2=0.20, df=1, p=0.66). There was a trend towards a lower proportion of nurses identifying all 4 clinical vignettes correctly compared with physicians (4/16 (25%) vs. 27/51 (53%), χ2=3.82, df=1, p=0.051). Conclusion: Our tool may be useful for assessing comfort with and knowledge of delirium among ED physicians and nurses. Completion of the Geri-EM curriculum was associated with increased comfort with detecting delirium but not with knowledge. Future studies should assess current ED delirium comfort and knowledge at different levels of training and between professions, and examine differences nationwide.
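The comfort comparison across training levels is a Pearson chi-square test on a 2×3 contingency table. As a sketch (counts taken directly from the abstract: 38/139 students, 25/38 residents, and 12/13 staff MDs reported comfort), the statistic can be reproduced in plain Python:

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = sum(
        (obs - r * c / n) ** 2 / (r * c / n)   # sum of (O - E)^2 / E
        for row, r in zip(table, row_tot)
        for obs, c in zip(row, col_tot)
    )
    df = (len(table) - 1) * (len(col_tot) - 1)
    return chi2, df

# Rows: comfortable / not comfortable; columns: students, residents, staff MDs.
comfort = [[38, 25, 12],
           [101, 13, 1]]
chi2, df = chi_square(comfort)
print(round(chi2, 1), df)  # close to the reported chi-square of 34.7 with df=2
```

With the abstract's counts this yields a statistic near the reported χ2=34.7 (small differences reflect rounding), illustrating that the comparison is an ordinary test of independence between comfort and training level.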


10.2196/15444 ◽  
2020 ◽  
Vol 6 (1) ◽  
pp. e15444
Author(s):  
Pedro Vinícius Staziaki ◽  
Rutuparna Sarangi ◽  
Ujas N Parikh ◽  
Jeffrey G Brooks ◽  
Christina Alexandra LeBedis ◽  
...  

Background: Objective structured clinical examinations (OSCEs) are a useful method to evaluate medical students’ performance in the clerkship years. OSCEs are designed to assess skills and knowledge in a standardized clinical setting through use of a preset standard grading sheet, so that clinical knowledge can be evaluated at a high level and in a reproducible way. Objective: This study aimed to present our OSCE assessment tool designed specifically for radiology clerkship medical students, which we called the objective structured radiology examination (OSRE), with the intent to advance the assessment of clerkship medical students by providing an objective, structured, reproducible, and low-cost method to evaluate their radiology knowledge, and to assess the reproducibility of this tool. Methods: We designed 9 different OSRE cases for radiology clerkship classes with participating third- and fourth-year medical students. Each examination comprises 1 to 3 images, a clinical scenario, and structured questions, along with a standardized scoring sheet that allows for an objective and low-cost assessment. Each medical student completed 3 of the 9 examination cases, selected at random, during their rotation. To evaluate the reproducibility of our scoring-sheet assessment tool, we used 5 examiners to grade the same students. Reproducibility for each case and consistency for each grader were assessed with a two-way mixed-effects intraclass correlation coefficient (ICC). An ICC below 0.4 was deemed poor to fair, 0.41 to 0.60 moderate, 0.61 to 0.80 substantial, and greater than 0.80 almost perfect. We also assessed the correlation of scores with the students’ clinical experience using a linear regression model and compared mean grades between third- and fourth-year students. Results: A total of 181 students (156 third- and 25 fourth-year students) were included in the study over a full academic year. Six of the 9 cases demonstrated average ICCs above 0.6 (substantial correlation), with average ICCs ranging from 0.36 to 0.80 (P<.001 for all cases). The average ICC for each grader was above 0.60 (substantial correlation). The average grade among third-year students was 11.9 (SD 4.9), compared with 12.8 (SD 5) among fourth-year students (P=.005). There was no correlation between clinical experience and OSRE grade (−0.02; P=.48), adjusting for the medical school year. Conclusions: Our OSRE is a reproducible assessment tool, with most of our OSRE cases showing substantial correlation, except for 3 cases. No expertise in radiology is needed to grade these examinations using our scoring sheet. There was no correlation between scores and the clinical experience of the medical students tested.
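The reproducibility analysis above hinges on a two-way intraclass correlation across the 5 graders. The abstract does not specify which ICC form or publish raw scores, so the following is a hedged sketch of one common consistency form, ICC(3,1), computed from mean squares on invented ratings:

```python
def icc_consistency(ratings):
    """ICC(3,1): two-way mixed-effects, single-rater consistency.
    `ratings` is a list of rows, one per student, each holding k grader scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]          # per-student means
    col_means = [sum(c) / n for c in zip(*ratings)]    # per-grader means
    # Between-students mean square.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Residual (interaction) mean square.
    sse = sum(
        (x - row_means[i] - col_means[j] + grand) ** 2
        for i, row in enumerate(ratings)
        for j, x in enumerate(row)
    )
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Two graders scoring three students; grader 2 is uniformly one point more
# lenient, an offset that the consistency form ignores.
print(icc_consistency([[8, 9], [5, 6], [2, 3]]))  # exactly 1.0
```

On real OSRE score sheets, values of this statistic above 0.6 would land in the paper's "substantial" band; the study's actual two-way mixed-effects model may differ in form (e.g., average-rater rather than single-rater).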

