Faculty Assessments
Recently Published Documents

Total documents: 17 (five years: 1)
H-index: 4 (five years: 0)

Author(s): Jinwoo Jeong, Song Yi Park, Kyung Hoon Sun

Purpose: In medical education, peer assessment is considered an effective learning strategy. Although several studies have examined agreement between peer and faculty assessments of basic life support (BLS), few have done so for advanced resuscitation skills (ARS) such as intubation and defibrillation. This study therefore aimed to determine the degree of agreement between medical students' and faculty assessments of ARS examinations.

Methods: This retrospective explorative study was conducted during the emergency medicine (EM) clinical clerkship of fourth-year medical students from April to July 2020. A faculty assessor (FA) and a peer assessor (PA) assessed each examinee's resuscitation skills (including BLS, intubation, and defibrillation) using a checklist of 20 binary items (performed or not performed) and 1 global proficiency rating on a 5-point Likert scale. After receiving feedback and assessor training, each examinee then served as the PA for the next examinee. All 54 students participated in peer assessment. The assessments of 44 FA/PA pairs were analyzed using the intraclass correlation coefficient (ICC) and Gwet's first-order agreement coefficient (AC1).

Results: PA scores were higher than FA scores (mean±standard deviation, 20.2±2.5 [FA] vs. 22.3±2.4 [PA]; P<0.001). Agreement was poor to moderate for the overall checklist (ICC, 0.55; 95% confidence interval [CI], 0.31 to 0.73; P<0.01), BLS (ICC, 0.19; 95% CI, -0.11 to 0.46; P<0.10), intubation (ICC, 0.51; 95% CI, 0.26 to 0.70; P<0.01), and defibrillation (ICC, 0.49; 95% CI, 0.23 to 0.68; P<0.01).

Conclusion: Senior medical students' peer assessments of ARS agreed only poorly to moderately with faculty assessments. If peer assessment is planned in skills education, comprehensive preparation and sufficient assessor training should be provided in advance.
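As a rough illustration of the agreement statistics used here, the sketch below computes a two-way ICC with the pingouin package and Gwet's AC1 for two raters over binary items directly from its standard two-rater definition. The scores, item ratings, and sample values are fabricated placeholders, not the study's data.

```python
# Rater-agreement sketch: ICC on total checklist scores and Gwet's AC1
# on one binary checklist item. All data below are fabricated.
import numpy as np
import pandas as pd
import pingouin as pg  # pip install pingouin

rng = np.random.default_rng(0)
n_pairs = 44  # one FA/PA score pair per examinee

# Hypothetical total checklist scores per examinee (0-25 possible).
fa_scores = rng.normal(20.2, 2.5, n_pairs).round().clip(0, 25)
pa_scores = rng.normal(22.3, 2.4, n_pairs).round().clip(0, 25)

# Long format: one row per (examinee, rater) observation.
long = pd.DataFrame({
    "examinee": np.tile(np.arange(n_pairs), 2),
    "rater": ["FA"] * n_pairs + ["PA"] * n_pairs,
    "score": np.concatenate([fa_scores, pa_scores]),
})
icc = pg.intraclass_corr(data=long, targets="examinee",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

def gwet_ac1(r1, r2):
    """Gwet's first-order agreement coefficient for two raters
    assigning the same items to two categories (coded 0/1)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    pa = np.mean(r1 == r2)            # observed agreement
    pi = (r1.mean() + r2.mean()) / 2  # overall prevalence of category 1
    pe = 2 * pi * (1 - pi)            # chance agreement under AC1 (<= 0.5)
    return (pa - pe) / (1 - pe)

# One item's binary ratings (1 = performed) from each rater.
fa_item = rng.integers(0, 2, n_pairs)
pa_item = rng.integers(0, 2, n_pairs)
print(f"AC1 = {gwet_ac1(fa_item, pa_item):.2f}")
```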



2020
Author(s): Lynn Bellamy, Barry McNeill, Veronica Burrows


2019, Vol 35, pp. 130-134
Author(s): Patrik Lyngå, Italo Masiello, Klas Karlgren, Eva Joelsson-Alm


2018, Vol 9 (4), pp. e78-92
Author(s): Don Thiwanka Wijeratne, Siddhartha Srivastava, Barry Chan, Wilma Hopman, Benjamin Thomson

Background: Competency-based medical education (CBME) designates physical examination competency as an Entrustable Professional Activity (EPA). Considerable concern persists about the increased time burden CBME may place on educators. We developed a novel physical examination curriculum that shifted the burden of case preparation and performance assessment from faculty to residents. Our first objective was to determine whether participation led to sustainable improvements in physical examination skills; the second was to determine whether resident peer assessment was comparable to faculty assessment.

Methods: We selected physical exam case topics based on the Objectives of Training in the Specialty of Internal Medicine prescribed by the Royal College of Physicians and Surgeons of Canada. Internal medicine residents compiled evidence-based physical exam checklists that faculty reviewed before distribution to all learners. Weekly practice sessions combined whole-group demonstration with small-group practice. We evaluated this pilot curriculum with a formative OSCE, during which a resident peer and a faculty member simultaneously observed and assessed each examinee's performance.

Results: Participation in the curriculum's practice sessions improved OSCE performance (mean faculty score 78.96 vs. 62.50, p<0.05). Peer scores overestimated faculty scores (76.2 vs. 65.7, p<0.001), but peer and faculty assessments were highly correlated (R² = 0.73; 95% CI, 0.50-0.87).

Conclusion: This novel physical examination curriculum led to sustainable improvement of physical examination skills, and peer assessment correlated well with the gold-standard faculty assessment. This resident-led curriculum enhanced physical examination skills in a CBME environment with minimal time commitment from faculty members.
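A minimal sketch of the peer-versus-faculty comparison reported here: a paired t-test for the score gap and a squared Pearson correlation for agreement. The scores and sample size below are fabricated for illustration, not the study's data.

```python
# Compare paired peer and faculty OSCE scores: mean difference (paired
# t-test) and agreement (squared Pearson correlation). Fabricated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
faculty = rng.normal(65.7, 8.0, 30)         # hypothetical faculty scores
peer = faculty + rng.normal(10.5, 4.0, 30)  # peers rate the same examinees higher

t_stat, p_val = stats.ttest_rel(peer, faculty)  # paired comparison
r, _ = stats.pearsonr(peer, faculty)            # linear agreement
print(f"mean difference = {np.mean(peer - faculty):.1f} (p = {p_val:.3g})")
print(f"R^2 = {r**2:.2f}")
```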



Author(s): Kylie Fitzgerald, Brett Vaughan

Purpose: Peer assessment provides a framework for developing expected skills and receiving feedback appropriate to the learner's level. Near-peer (NP) assessment may raise expectations and motivate learning, and feedback from peers and NPs may be a sustainable way to enhance student assessment feedback. This study analysed relationships among self, peer, NP, and faculty marking of an assessment, along with students' attitudes towards marking by those various groups.

Methods: A cross-sectional study design was used. Year 2 osteopathy students (n = 86) were invited to perform self and peer assessments of a clinical history-taking and communication skills assessment, which NPs and faculty also marked. Year 2 students also completed a questionnaire on their attitudes to peer/NP marking. Descriptive statistics and Spearman's rho coefficient were used to evaluate relationships across marker groups.

Results: Year 2 students (n = 9), NPs (n = 3), and faculty (n = 5) were recruited. Correlations between self and peer marks (r = 0.38) and between self and faculty marks (r = 0.43) were moderate, while the correlation between self and NP marks was weak (r = 0.25). Perceptions of peer and NP marking varied, with over half of the cohort suggesting that peer or NP assessments should not contribute to their grade.

Conclusion: Framing peer and NP assessment as another feedback source may offer a sustainable method for enhancing feedback without overloading faculty resources, and multiple sources of feedback may help develop assessment literacy and calibrate students' self-assessment capability. The small number of students recruited suggests only modest acceptability of peer and NP assessment, and further work is required to increase it.
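A brief sketch of the cross-marker correlation analysis, using scipy's Spearman rho. The marks are fabricated; only the recruited cohort size (n = 9) is taken from the abstract.

```python
# Spearman's rho between self marks and each other marker group.
# All marks below are fabricated placeholders.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
n = 9  # students with marks from all four groups (the n = 9 cohort)
true_skill = rng.normal(70, 8, n)
marks = pd.DataFrame({
    "self": true_skill + rng.normal(0, 6, n),
    "peer": true_skill + rng.normal(0, 5, n),
    "near_peer": true_skill + rng.normal(0, 7, n),
    "faculty": true_skill + rng.normal(0, 4, n),
})

for group in ["peer", "near_peer", "faculty"]:
    rho, p = stats.spearmanr(marks["self"], marks[group])
    print(f"self vs {group}: rho = {rho:.2f} (p = {p:.2f})")
```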



2015, Vol 7 (4), pp. 589-594
Author(s): Moshe Weizberg, Michael C. Bond, Michael Cassara, Christopher Doty, Jason Seamon

Background: Residents in Accreditation Council for Graduate Medical Education (ACGME)-accredited emergency medicine (EM) residencies were assessed on 23 educational milestones that capture their progression from the medical student level (Level 1) to that of an EM attending physician (Level 5). Level 1 was conceptualized as the level of an incoming postgraduate year (PGY)-1 resident; however, this has not been confirmed.

Objectives: Our primary objective was to assess incoming PGY-1 residents to determine what percentage achieved Level 1 on the 8 emergency department (ED) patient care-based milestones (PC 1-8), as assessed by faculty. Secondary objectives were to determine what percentage of residents achieved Level 1 on self-assessment and to calculate the absolute differences between self- and faculty assessments.

Methods: Incoming PGY-1 residents at 4 EM residencies were assessed by faculty and by themselves during their first month of residency. Performance anchors were adapted from the ACGME milestones.

Results: Forty-one residents from 4 programs were included. The percentage of residents who achieved Level 1 for each subcompetency ranged from 20% to 73% on faculty assessment and from 34% to 92% on self-assessment. The majority did not achieve Level 1 on faculty assessment of milestones PC-2, PC-3, PC-5a, and PC-6, or on self-assessment of PC-3 and PC-5a. Self-assessment was higher than faculty assessment for PC-2, PC-5b, and PC-6.

Conclusions: Fewer than 75% of PGY-1 residents achieved Level 1 for ED care-based milestones, and the majority did not achieve Level 1 on 4 milestones. Self-assessments were higher than faculty assessments for several milestones.
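As a sketch of this kind of milestone tabulation, the snippet below computes, per subcompetency, the share of residents at Level 1 or above and the mean absolute self-faculty gap. The subcompetency labels, rating scale, and all ratings are illustrative, not the study's.

```python
# Per-subcompetency share of residents achieving Level 1, plus the mean
# absolute self-minus-faculty difference. Ratings here are fabricated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
subs = ["PC-1", "PC-2", "PC-3", "PC-4", "PC-5a", "PC-5b", "PC-6", "PC-7"]
n = 41  # residents

# Milestone ratings: 0 = below Level 1, 1 = Level 1, 2 = above Level 1.
faculty = pd.DataFrame(rng.integers(0, 3, (n, len(subs))), columns=subs)
selfa = pd.DataFrame(rng.integers(0, 3, (n, len(subs))), columns=subs)

pct_faculty = (faculty >= 1).mean() * 100  # % at Level 1+ per faculty
pct_self = (selfa >= 1).mean() * 100       # % at Level 1+ per self-rating
gap = (selfa - faculty).abs().mean()       # mean absolute difference

summary = pd.DataFrame({"faculty_%": pct_faculty.round(0),
                        "self_%": pct_self.round(0),
                        "abs_gap": gap.round(2)})
print(summary)
```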



2015, Vol 16 (1), pp. 48-53
Author(s): Syed Rashid Habib, Haneef Sherfudhin

Objective: This study compared students' self-grades with examiners' grades, compared grades between examiners, and compared grades for anterior versus posterior teeth in a preclinical prosthodontic course.

Methods: Seventy-five students and 2 examiners participated in the study. Each student prepared one anterior tooth (upper central incisor) and one posterior tooth (lower first molar) for full veneer crowns in an allocated time of 2 hours and 30 minutes. After the preparations, the students graded their own work using criteria-based evaluation forms, and the examiners graded the same preparations. All grades were recorded, comparisons were made using SPSS version 21, and the results were tabulated.

Results: The mean of the students' self-grades (8.32) was higher than the mean of the examiners' grades (7.3) for both anterior and posterior teeth. Comparison of the grades for the anterior/posterior teeth and of the overall grades showed a statistically significant difference (p < 0.001). A moderate correlation (0.399) and a strong correlation (0.601) were found between the grades of the faculty and the students for the anterior and posterior teeth, respectively. The overall grading of the anterior and posterior teeth by the two faculty members showed no statistically significant difference (p = 0.053) and a very strong correlation (0.784). The test also showed a significant difference (p = 0.001) between the overall grades for anterior and posterior teeth.

Conclusion: Students tended to grade their tooth preparations higher than the examiners did, interexaminer variation in grades existed, and grades for anterior teeth were higher than those for posterior teeth.

How to cite this article: Habib SR, Sherfudhin H. Students' Self-assessment: A Learning Tool and Its Comparison with the Faculty Assessments. J Contemp Dent Pract 2015;16(1):48-53.



2013, Vol 5 (4), pp. 582-586
Author(s): James G. Ryan, David Barlas, Simcha Pollack

Background: Medical knowledge (MK) in residents is commonly assessed by the in-training examination (ITE) and by faculty evaluations of resident performance.

Objective: We assessed the reliability of faculty's clinical evaluations of residents and the relationship between faculty assessments of resident performance and ITE scores.

Methods: We conducted a cross-sectional, observational study at an academic emergency department with a postgraduate year (PGY)-1 to PGY-3 emergency medicine residency program, comparing summative quarterly faculty evaluation data for MK and overall clinical competency (OC) with annual ITE scores, accounting for PGY level. We also assessed the reliability of the faculty evaluations using a random-effects intraclass correlation analysis.

Results: We analyzed data for 59 emergency medicine residents over a 6-year period. Faculty evaluations of MK and OC were highly reliable (κ = 0.99) and remained reliable after stratification by year of training (mean κ = 0.68-0.84). Assessments of resident performance (MK and OC) and ITE scores both increased with PGY level. MK and OC correlated highly with PGY level, ITE scores correlated moderately with PGY level, and OC and MK correlated moderately with ITE score. When residents were grouped by PGY level, there was no significant correlation between faculty-assessed MK and ITE score.

Conclusions: Resident clinical performance and ITE scores both increase with PGY level, but ITE scores do not predict resident clinical performance relative to peers at the same PGY level.
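A small sketch of the stratified analysis behind that conclusion: two measures that both rise with PGY level correlate when pooled, yet show no correlation within any single year. The data below are fabricated to make that point, not drawn from the study.

```python
# Pooled vs. within-stratum correlation: faculty MK ratings and ITE
# scores both increase with PGY level but are independent within a year.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
rows = []
for pgy in (1, 2, 3):
    rows.append(pd.DataFrame({
        "pgy": pgy,
        # Both measures rise with PGY, but vary independently within it.
        "mk": rng.normal(3 + pgy, 0.5, 20),
        "ite": rng.normal(60 + 8 * pgy, 6.0, 20),
    }))
df = pd.concat(rows, ignore_index=True)

r_all, _ = stats.pearsonr(df["mk"], df["ite"])
print(f"pooled r = {r_all:.2f}")           # inflated by the shared PGY trend
for pgy, grp in df.groupby("pgy"):
    r, _ = stats.pearsonr(grp["mk"], grp["ite"])
    print(f"PGY-{pgy}: r = {r:.2f}")       # near zero within each year
```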


