Grading reflective essays: the reliability of a newly developed tool - GRE-9

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Nisrine N. Makarem ◽  
Basem R. Saab ◽  
Grace Maalouf ◽  
Umayya Musharafieh ◽  
Fadila Naji ◽  
...  

Abstract Background The main objective of this study was to develop a short, reliable, easy-to-use assessment tool for providing feedback on the reflective writings of medical students and residents. Methods This study took place at a major tertiary academic medical center in Beirut, Lebanon. Following a comprehensive search and analysis of the literature, and drawing on their experience in grading reflections, the authors developed a concise 9-item scale for grading reflective essays through repeated cycles of development and analysis. To assess the scale's reliability, 77 reflective essays written by 18 residents in the Department of Family Medicine at the American University of Beirut Medical Center (AUBMC) were graded by 3 raters, and inter-rater reliability (IRR) was determined using intra-class correlation coefficients (ICC) and Krippendorff's alpha. Results The inter-rater reliability of the new scale ranged from moderate to substantial, with an ICC of 0.78 (95% CI 0.64–0.86, p < 0.01) and a Krippendorff's alpha of 0.49. Conclusions The newly developed scale, GRE-9, is a short, concise, easy-to-use, reliable grading tool for reflective essays that demonstrated moderate to substantial inter-rater reliability. It will enable raters to grade reflective essays objectively and provide informed feedback to residents and students.
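The ICC reported in this abstract can be computed from a two-way ANOVA decomposition of the subjects-by-raters score matrix. Below is a minimal sketch of one common form, ICC(2,1) (the abstract does not specify which ICC variant was used); the `icc_2_1` helper and the sample scores are illustrative, not the study's actual code or data:

```python
def icc_2_1(ratings):
    """Two-way random-effects, single-rater ICC(2,1) from the
    classic ANOVA decomposition of a subjects x raters matrix."""
    n = len(ratings)           # subjects (essays)
    k = len(ratings[0])        # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # ANOVA sums of squares and mean squares
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Invented scores: 4 essays, each graded by 3 raters in near agreement
scores = [[7, 8, 7], [5, 5, 6], [9, 9, 9], [4, 5, 4]]
print(round(icc_2_1(scores), 3))   # → 0.944
```

Raters who agree perfectly yield an ICC of 1.0; disagreement between raters pulls the estimate toward 0.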

2021 ◽  
Author(s):  
Lori Schirle ◽  
Alvin D Jeffery ◽  
Ali Yaqoob ◽  
Sandra Sanchez-Roige ◽  
David C. Samuels

Background: Although electronic health records (EHR) have significant potential for the study of opioid use disorders (OUD), detecting OUD in clinical data is challenging. Models using EHR data to predict OUD often rely on case/control classifications focused on extreme opioid use. There is a need to expand this work to characterize the spectrum of problematic opioid use. Methods: Using a large academic medical center database, we developed 2 data-driven methods of OUD detection: (1) a Comorbidity Score developed from a Phenome-Wide Association Study of phenotypes associated with OUD and (2) a Text-based Score using natural language processing to identify OUD-related concepts in clinical notes. We evaluated the performance of both scores against a manual review using correlation coefficients, Wilcoxon rank sum tests, and the area under the receiver operating characteristic curve (AUC). Records with the highest Comorbidity and Text-based scores were re-evaluated by manual review to explore discrepancies. Results: Both the Comorbidity and Text-based OUD risk scores were significantly elevated in the patients judged as High Evidence for OUD in the manual review compared to those with No Evidence (p = 1.3E-5 and 1.3E-6, respectively). The risk scores were positively correlated with each other (rho = 0.52, p < 0.001). AUCs for the Comorbidity and Text-based scores were high (0.79 and 0.76, respectively). Follow-up manual review of discrepant findings revealed strengths of data-driven methods over manual review, and opportunities for improvement in risk assessment. Conclusion: Risk scores comprising comorbidities and text offer differing but synergistic insights into characterizing problematic opioid use. This pilot project establishes a foundation for more robust work in the future.
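The AUC used to evaluate both risk scores has a direct probabilistic reading: it is the chance that a randomly chosen OUD-positive record outranks a randomly chosen negative one. A small self-contained sketch (the risk scores and chart-review labels below are invented for illustration, not the study's data):

```python
def auc(scores, labels):
    """Area under the ROC curve via its Mann-Whitney interpretation:
    the probability that a randomly chosen positive record scores
    higher than a randomly chosen negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Invented risk scores and manual-review labels (1 = evidence of OUD)
risk = [0.9, 0.8, 0.7, 0.2]
label = [1, 0, 1, 0]
print(auc(risk, label))   # → 0.75
```

An AUC of 0.5 corresponds to a score with no discriminative value; the 0.79 and 0.76 reported above indicate both scores rank true cases well above non-cases most of the time.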


1999 ◽  
Vol 45 (6) ◽  
pp. 757-770 ◽  
Author(s):  
Michael L Astion ◽  
Sara Kim ◽  
Amanda Nelson ◽  
Paul J Henderson ◽  
Carla Phillips ◽  
...  

Abstract Background: The microscopic examination of urine sediment is one of the most commonly performed microscope-based laboratory tests, but despite its widespread use, there has been no detailed study of the competency of medical technologists in performing this test. One reason for this is the lack of an effective competency assessment tool that can be applied uniformly across an institution. Methods: This study describes the development and implementation of a computer program, Urinalysis-Review™, which periodically tests competency in microscopic urinalysis and then summarizes individual and group test results. In this study, eight Urinalysis-Review exams were administered over 2 years to medical technologists (mean, 58 technologists per exam; range, 44–77) at our academic medical center. The eight exams contained 80 test questions, consisting of 72 structure identification questions and 8 quantification questions. The 72 structure questions required the identification of 134 urine sediment structures consisting of 63 examples of cells, 25 of casts, 18 of normal crystals, 8 of abnormal crystals, and 20 of organisms or artifacts. Results: Overall, the medical technologists correctly identified 84% of cells, 72% of casts, 79% of normal crystals, 65% of abnormal crystals, and 81% of organisms and artifacts, and correctly answered 89% of the quantification questions. The results are probably a slight underestimate of competency because the images were analyzed without the knowledge of urine chemistry results. Conclusions: The study shows the feasibility of using a computer program for competency assessment in the clinical laboratory. In addition, the study establishes baseline measurements of competency that other laboratories can use for comparison, and which we will use in future studies that measure the effect of continuing education efforts in microscopic urinalysis.
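Because the five structure categories were tested in different numbers, an overall identification rate across all 134 structures is a count-weighted mean of the per-category rates. The quick check below derives that figure from the counts and rates in the abstract (the overall number is our computation, not a result reported by the study):

```python
# Per-category (count of structures, fraction correctly identified),
# taken from the abstract's Methods and Results sections
categories = {
    "cells": (63, 0.84),
    "casts": (25, 0.72),
    "normal crystals": (18, 0.79),
    "abnormal crystals": (8, 0.65),
    "organisms/artifacts": (20, 0.81),
}

total = sum(n for n, _ in categories.values())
overall = sum(n * rate for n, rate in categories.values()) / total
print(total)              # → 134
print(f"{overall:.1%}")   # → 79.5%
```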


2018 ◽  
Vol 75 (13) ◽  
pp. 987-992 ◽  
Author(s):  
Amber Lanae Martirosov ◽  
Angela Michael ◽  
Melissa McCarty ◽  
Opal Bacon ◽  
John R. DiLodovico ◽  
...  

Author(s):  
Kelsey Leonard Grabeel ◽  
Jennifer Russomanno ◽  
Sandy Oelschlegel ◽  
Emily Tester ◽  
Robert Eric Heidel

Objective: The research compared and contrasted hand-scoring and computerized methods of evaluating the grade level of patient education materials that are distributed at an academic medical center in East Tennessee and sought to determine if these materials adhered to the American Medical Association's (AMA's) recommended reading level of sixth grade. Methods: Librarians at an academic medical center located in the heart of Appalachian Tennessee initiated the assessment of 150 of the most used printed patient education materials. Based on the Flesch-Kincaid (F-K) scoring rubric, 2 of the 150 documents were excluded from statistical comparisons due to the absence of text (images only). Researchers assessed the remaining 148 documents using the hand-scored Simple Measure of Gobbledygook (SMOG) method and the computerized F-K grade level method. For SMOG, 3 independent reviewers hand-scored each of the 150 documents. For F-K, documents were analyzed using Microsoft Word. Reading grade level scores were entered into a database for statistical analysis. Inter-rater reliability was calculated using intra-class correlation coefficients (ICC). Paired t-tests were used to compare readability means. Results: Acceptable inter-rater reliability was found for SMOG (ICC=0.95). For the 148 documents assessed, SMOG produced a significantly higher mean reading grade level (M=9.6, SD=1.3) than F-K (M=6.5, SD=1.3; p<0.001). Additionally, when using the SMOG method of assessment, 147 of the 148 documents (99.3%) scored above the AMA's recommended reading level of sixth grade. Conclusions: Computerized health literacy assessment tools, used by many national patient education material providers, might not be representative of the actual reading grade levels of patient education materials. This is problematic in regions like Appalachia because materials may not be comprehensible to the area's low-literacy patients.
Medical librarians have the potential to advance their role in patient education to better serve their patient populations.
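Both readability measures compared above are simple closed-form functions of text counts, which makes the gap between them easy to see. The sketch below uses the standard published formulas for the SMOG grade and the Flesch-Kincaid grade level; the counts passed in are invented examples, and real tools also differ in how they count syllables and sentences, which is one source of the discrepancies the study reports:

```python
import math

def smog_grade(polysyllables, sentences):
    """McLaughlin's SMOG formula, normalized to a 30-sentence sample."""
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

def fk_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# The same hypothetical document measured both ways: SMOG, which
# keys on polysyllabic words, can sit well above F-K
print(round(smog_grade(polysyllables=50, sentences=30), 1))       # → 10.5
print(round(fk_grade(words=600, sentences=30, syllables=850), 1)) # → 8.9
```

SMOG counts only words of three or more syllables, so text dense with polysyllabic medical terminology is penalized more heavily than under F-K, consistent with the higher mean grade levels reported above.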


2002 ◽  
Vol 2 (3) ◽  
pp. 95-104 ◽  
Author(s):  
JoAnn Manson ◽  
Beverly Rockhill ◽  
Margery Resnick ◽  
Eleanor Shore ◽  
Carol Nadelson ◽  
...  
