Objective predictors of intern performance

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Amanda C. Filiberto ◽  
Lou Ann Cooper ◽  
Tyler J. Loftus ◽  
Sonja S. Samant ◽  
George A. Sarosi ◽  
...  

Abstract Background Residency programs select medical students for interviews and employment using metrics such as the United States Medical Licensing Examination (USMLE) scores, grade-point average (GPA), and class rank/quartile. It is unclear whether these metrics predict performance as an intern. This study tested the hypothesis that performance on these metrics would predict intern performance. Methods This single institution, retrospective cohort analysis included 244 graduates from four classes (2015–2018) who completed an Accreditation Council for Graduate Medical Education (ACGME) certified internship and were evaluated by program directors (PDs) at the end of the year. PDs provided a global assessment rating and ratings addressing ACGME competencies (response rate = 47%) with five response options: excellent = 5, very good = 4, acceptable = 3, marginal = 2, unacceptable = 1. PDs also classified interns as outstanding = 4, above average = 3, average = 2, and below average = 1 relative to other interns from the same residency program. Mean USMLE scores (Step 1 and Step 2CK), third-year GPA, class rank, and core competency ratings were compared using Welch’s ANOVA and follow-up pairwise t-tests. Results Better performance on PD evaluations at the end of intern year was associated with higher USMLE Step 1 (p = 0.006), Step 2CK (p = 0.030), medical school GPA (p = 0.020) and class rank (p = 0.016). Interns rated as average had lower USMLE scores, GPA, and class rank than those rated as above average or outstanding; there were no significant differences between above average and outstanding interns. Higher rating in each of the ACGME core competencies was associated with better intern performance (p < 0.01). Conclusions Better performance as an intern was associated with higher USMLE scores, medical school GPA and class rank. When USMLE Step 1 reporting changes from numeric scores to pass/fail, residency programs can use other metrics to select medical students for interviews and employment.
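The abstract reports group comparisons made with Welch's ANOVA and follow-up pairwise t-tests. The sketch below is not the authors' code; it only illustrates, on fabricated data with assumed column names (`pd_rating`, `step1`), how Step 1 means across program-director rating groups could be compared this way.

```python
# Illustrative sketch: Welch's ANOVA across PD rating groups, plus pairwise
# Welch (unequal-variance) t-tests. Data and column names are fabricated.
import itertools
import numpy as np
import pandas as pd
from scipy import stats

def welch_anova(groups):
    """Welch's ANOVA for a list of 1-D arrays; returns (F, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand_mean = np.sum(w * m) / np.sum(w)
    num = np.sum(w * (m - grand_mean) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) * tmp / (k ** 2 - 1)
    f_stat = num / den
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    return f_stat, df1, df2, stats.f.sf(f_stat, df1, df2)

# Synthetic example: one row per intern with a PD rating and a Step 1 score
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pd_rating": np.repeat(["average", "above average", "outstanding"], 40),
    "step1": np.concatenate([rng.normal(225, 15, 40),
                             rng.normal(235, 15, 40),
                             rng.normal(240, 15, 40)]),
})

groups = [g["step1"].to_numpy() for _, g in df.groupby("pd_rating")]
print(welch_anova(groups))

# Follow-up pairwise comparisons with Welch's t-test
for a, b in itertools.combinations(df["pd_rating"].unique(), 2):
    t, p = stats.ttest_ind(df.loc[df.pd_rating == a, "step1"],
                           df.loc[df.pd_rating == b, "step1"],
                           equal_var=False)
    print(a, "vs", b, round(t, 2), round(p, 3))
```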

2020 ◽  
pp. 000313482097338
Author(s):  
Haley Ehrlich ◽  
Mason Sutherland ◽  
Mark McKenney ◽  
Adel Elkbuli

Background United States Medical Licensing Examination (USMLE) Step 1 will transition to pass/fail scoring by 2022. We aim to investigate US medical students’ perspectives on the potential implications this transition will have on their education and career opportunities. Methods We conducted a cross-sectional study investigating US medical students’ perspectives on the implications of the transition of the USMLE Step 1 exam to pass/fail. Students were asked their preferences regarding various aspects of the USMLE Step 1 examination, including activities, educational opportunities, expenses related to preparing for the examination, and future career opportunities. Results 215 medical students responded to the survey; 59.1% were women, and 80.9% were allopathic versus 19.1% osteopathic students. 34.0% preferred the USMLE Step 1 to be graded pass/fail, whereas 53.5% preferred a numeric scale. Osteopathic students were more likely than allopathic students to report that the pass/fail transition will negatively impact their residency match (aOR = 1.454, 95% CI: 0.515, 4.106) and specialty of choice (aOR = 3.187, 95% CI: 0.980, 10.359). 57.7% of respondents reported that the transition to a pass/fail grading system will change their study habits. Conclusions The transition of the USMLE Step 1 to a pass/fail system has substantial implications for medical students and residency programs alike. Although the majority of medical students did not prefer a pass/fail USMLE Step 1, they must adapt their strategies to remain competitive for residency applications. Residency programs should create a composite score based on all aspects of medical students’ applications in order to provide a holistic and fair evaluation and ranking system.
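The adjusted odds ratios (aORs) with 95% confidence intervals quoted above come from a logistic regression. The following is a minimal sketch, not the study's analysis: the data, covariates, and outcome coding are invented purely to show how such aORs are obtained.

```python
# Illustrative sketch: adjusted odds ratios with 95% CIs from a logistic
# regression. All data and column names below are fabricated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 215
df = pd.DataFrame({
    "osteopathic": rng.binomial(1, 0.19, n),   # 1 = osteopathic, 0 = allopathic
    "female": rng.binomial(1, 0.59, n),
})
# 1 = reports that pass/fail will negatively impact the residency match
df["negative_impact"] = rng.binomial(1, 0.4 + 0.1 * df["osteopathic"])

model = smf.logit("negative_impact ~ osteopathic + female", data=df).fit(disp=0)

# Exponentiate coefficients and confidence limits to express them as aORs
odds_ratios = pd.DataFrame({
    "aOR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(3))
```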


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ling Wang ◽  
Heather S. Laird-Fick ◽  
Carol J. Parker ◽  
David Solomon

Abstract Background Medical students must meet curricular expectations and pass national licensing examinations to become physicians. However, no previous studies have explicitly modeled the stages through which medical students acquire basic science knowledge. In this study, we employed an innovative statistical model to characterize students’ growth using progress testing results over time and to predict licensing examination performance. Methods All students who matriculated in our medical school in 2016 and 2017 and had USMLE Step 1 scores were included in this retrospective cohort study (N = 358). A Markov chain method was employed to: 1) identify latent states of acquiring scientific knowledge based on progress tests and 2) estimate students’ transition probabilities between states. The primary outcome of this study, United States Medical Licensing Examination (USMLE) Step 1 performance, was predicted based on students’ estimated probabilities of being in each latent state identified by the Markov chain model. Results Four latent states were identified based on students’ progress test results: Novice, Advanced Beginner I, Advanced Beginner II, and Competent. At the end of the first year, students predicted to remain in the Novice state had lower mean Step 1 scores than those in the Competent state (209, SD = 14.8 versus 255, SD = 10.8, respectively) and more first-attempt failures (11.5% versus 0%). Regression analysis showed that, at the end of the first year, a 10% higher probability of remaining in the Novice state predicted a Step 1 score 2.0 points lower (95% CI: 0.85–2.81, P < .01), while a 10% higher probability of being in the Competent state predicted a score 4.3 points higher (95% CI: 2.92–5.19, P < .01). Similar findings were observed at the end of the second year of medical school. Conclusions Using a Markov chain model to analyze longitudinal progress test performance offers a flexible and effective estimation method for identifying students’ transitions across latent stages of acquiring scientific knowledge. The results can help identify students who are at risk of licensing examination failure and may benefit from targeted academic support.
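The paper fits a latent Markov model, which is more involved than what can be shown here. The rough sketch below only illustrates the two ingredients the abstract names: estimating a transition-probability matrix from observed state classifications on successive progress tests, and regressing Step 1 scores on a state probability. The state sequences and numbers are invented for demonstration.

```python
# Rough sketch (assumptions, not the authors' model): transition probabilities
# estimated by counting state changes between consecutive progress tests, and
# a linear regression of Step 1 scores on the probability of a given state.
import numpy as np
import statsmodels.api as sm

states = ["Novice", "Advanced Beginner I", "Advanced Beginner II", "Competent"]
idx = {s: i for i, s in enumerate(states)}

# Each row: one student's state classification at successive progress tests
sequences = [
    ["Novice", "Novice", "Advanced Beginner I", "Advanced Beginner II"],
    ["Novice", "Advanced Beginner I", "Advanced Beginner II", "Competent"],
    ["Advanced Beginner I", "Advanced Beginner II", "Competent", "Competent"],
]

# Count transitions, then normalize each row to get P(next state | current state)
counts = np.zeros((4, 4))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[idx[a], idx[b]] += 1
transition = counts / counts.sum(axis=1, keepdims=True)
print(np.round(transition, 2))

# Regress Step 1 scores on the probability of being in the Novice state at
# year end (values fabricated for illustration).
p_novice = np.array([0.8, 0.2, 0.05])
step1 = np.array([212, 238, 251])
ols = sm.OLS(step1, sm.add_constant(p_novice)).fit()
print(ols.params)   # slope ~ change in Step 1 per unit change in P(Novice)
```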


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 4-11 ◽  
Author(s):  
Aaron Saguil ◽  
Ting Dong ◽  
Robert J. Gingerich ◽  
Kimberly Swygert ◽  
Jeffrey S. LaRochelle ◽  
...  

ABSTRACT Background: The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice–based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study examined associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. Methods: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 Clinical Knowledge, Step 2 Clinical Skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. Results: MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 Clinical Knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 Clinical Skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. Discussion: MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association is somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT did not predict assessments relying on direct clinical observation, nor did it predict PD assessment of PGY-1 performance.
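The "weak to moderate" associations described above are typically quantified as Pearson correlations. The snippet below is only an illustration on synthetic data with assumed column names; it is not the study's dataset or analysis.

```python
# Small illustrative sketch: Pearson correlations between an MCAT score
# variant and downstream measures. Data and column names are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
n = 340
mcat = rng.normal(30, 3, n)
df = pd.DataFrame({
    "mcat_highest": mcat,
    "step1": 200 + 2.5 * mcat + rng.normal(0, 15, n),
    "step2_ck": 210 + 1.5 * mcat + rng.normal(0, 15, n),
    "med_school_gpa": 3.0 + 0.02 * mcat + rng.normal(0, 0.2, n),
})

for outcome in ["step1", "step2_ck", "med_school_gpa"]:
    r, p = stats.pearsonr(df["mcat_highest"], df[outcome])
    print(f"MCAT vs {outcome}: r = {r:.2f}, p = {p:.3g}")
```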


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 18-23 ◽  
Author(s):  
Steven J. Durning ◽  
Ting Dong ◽  
Paul A. Hemmer ◽  
William R. Gilliland ◽  
David F. Cruess ◽  
...  

ABSTRACT Purpose: To determine whether there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school and with scores on standardized tests, whether taken before or during medical school. Methods: Achieving board certification in an American Board of Medical Specialties specialty was used as the outcome measure for a 7-year cohort of graduates (1995–2002). Age at matriculation, Medical College Admission Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, United States Medical Licensing Examination (USMLE) Step 1 score, and USMLE Step 2 Clinical Knowledge score were examined for association with this outcome. Results: Ninety-three of 1,155 graduates were not certified, resulting in an overall board certification rate of 91.9% for the study cohort. Small but significant correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = −0.061). Comparing the board-certified and non-board-certified groups, significant differences were seen for all included variables except MCAT and USMLE Step 2 Clinical Knowledge scores. Taken together, the variables explained 4.1% of the variance in board certification by logistic regression. Conclusions: This investigation provides additional validity evidence supporting the measures collected for student evaluation before and during medical school.
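Correlations between a binary outcome (board certified: yes/no) and continuous predictors are point-biserial correlations. The sketch below, on synthetic data, shows how such correlations and a logistic-regression "variance explained" figure might be computed; the abstract does not say which variance measure was used, so McFadden's pseudo-R^2 appears here only as an illustrative stand-in.

```python
# Hedged sketch: point-biserial correlations with a binary certification
# outcome, plus McFadden's pseudo-R^2 from a logistic regression. All data
# are synthetic; the paper's exact variance-explained measure is unspecified.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n = 1155
clerkship_gpa = rng.normal(3.4, 0.3, n)
step1 = rng.normal(215, 20, n)
# ~92% certification rate, weakly related to the predictors
logit_p = 2.5 + 0.8 * (clerkship_gpa - 3.4) + 0.01 * (step1 - 215)
certified = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

for name, x in [("clerkship GPA", clerkship_gpa), ("Step 1", step1)]:
    r, p = stats.pointbiserialr(certified, x)
    print(f"certification vs {name}: r = {r:.3f}, p = {p:.3g}")

X = sm.add_constant(np.column_stack([clerkship_gpa, step1]))
fit = sm.Logit(certified, X).fit(disp=0)
print("McFadden pseudo-R^2:", round(fit.prsquared, 3))
```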


2014 ◽  
Vol 38 (4) ◽  
pp. 315-320 ◽  
Author(s):  
Teresa R. Johnson ◽  
Mohammed K. Khalil ◽  
Richard D. Peppler ◽  
Diane D. Davey ◽  
Jonathan D. Kibble

In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical school. The CBSE is a practice exam for the United States Medical Licensing Examination (USMLE) Step 1 and is purchased directly from the NBME. We administered the CBSE five times during the first 2 years of medical school. Student scores were compared with scores on newly created internal summative exams and with USMLE Step 1 scores. Significant correlations were observed between almost all of our internal exams and CBSE scores over time, as well as with USMLE Step 1 scores. The strength of the correlations of internal exams with the CBSE and USMLE Step 1 broadly increased over time during the curriculum. Student scores in courses with a strong emphasis on physiology and pathophysiology correlated particularly well with USMLE Step 1 scores. Student progress, as measured by the CBSE, was found to be linear across time, and test performance fell behind the anticipated level by the end of the formal curriculum. These findings are discussed with respect to student learning behaviors. In conclusion, the CBSE was found to have good utility as a progress test and provided external validation of our new internally developed multiple-choice assessments. The data also provide performance benchmarks, both for our future students to formatively assess their own progress and for other medical schools to compare learning progression patterns in different curricular models.
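The abstract's claim that growth on the CBSE was "linear across time" is the kind of statement one can probe by fitting a straight line to each student's repeated scores. The sketch below does exactly that on a fabricated data layout (five administrations per student); it is not the authors' analysis.

```python
# Illustrative sketch (assumed data layout): fit a linear trend to each
# student's five CBSE scores and summarize the average gain per administration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
administrations = np.arange(1, 6)          # five CBSE sittings
n_students = 120

records = []
for sid in range(n_students):
    start = rng.normal(45, 5)              # baseline CBSE score
    slope = rng.normal(6, 1.5)             # average gain per administration
    scores = start + slope * (administrations - 1) + rng.normal(0, 3, 5)
    records.extend({"student": sid, "admin": a, "cbse": s}
                   for a, s in zip(administrations, scores))
df = pd.DataFrame(records)

# Per-student linear fits: the slope is the gain per administration
slopes = df.groupby("student").apply(
    lambda g: np.polyfit(g["admin"], g["cbse"], 1)[0])
print("mean gain per administration:", round(slopes.mean(), 2))

# Cohort-level mean score at each administration (rises roughly linearly)
print(df.groupby("admin")["cbse"].mean().round(1))
```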


2019 ◽  
Vol 2 (5) ◽  
Author(s):  
Misbah Keen ◽  
Danielle Bienz ◽  
Toby Keys ◽  
Douglas Schaad ◽  
David Evans

Introduction: The University of Washington School of Medicine (UWSOM) has six campuses in the five-state WWAMI (Washington, Wyoming, Alaska, Montana and Idaho) region. The WRITE (WWAMI Rural Integrated Training Experience) program is a 22- to 24-week rural longitudinal integrated clerkship (LIC) experience offered to medical students in the clinical phase (third year) of training. This program seeks to meet the rural workforce needs of the WWAMI region by increasing the number of medical students going into primary care. Critics of LICs have expressed concern about overall quality control of the more remote educational experience and the lack of specialty-specific teaching. The aim of this study was to compare the medical school and PGY-1 performance of WRITE and non-WRITE students while determining how well each cohort meets the regional workforce needs. Methods: The study group was all UWSOM students who matriculated from 2009 to 2013, advanced to graduation, and subsequently matched to a residency through the National Resident Matching Program. WRITE and non-WRITE cohorts were compared on USMLE Step 1 and Step 2 scores, MSPE (Medical Student Performance Evaluation) key word, and self and program director assessments in the first year of residency. The match results of the two cohorts were also compared to determine the proportions entering primary care residencies. Finally, for both cohorts the specialty choice at matriculation was compared with the match results. Descriptive statistics were used to test the comparisons. Results: The medical school performance of the WRITE and non-WRITE cohorts was equivalent on all metrics (USMLE Step 1 and 2, MSPE key word, and self and program director assessment of performance in the first year of residency). WRITE students were significantly more likely to match into primary care overall (67.6% vs 48.3%, p<0.001) and, in particular, into Family Medicine (40% vs 14.3%, p<0.001). WRITE students were also more likely to match into the same specialty that they indicated on the UWSOM matriculation survey. For Family Medicine, the loss of fidelity between matriculation and match was 3% among WRITE students (43.4% at matriculation vs 40.4% at match) and 6.3% among non-WRITE students (20.6% vs 14.3%). Conclusions: Performance outcomes of the WRITE program are equivalent to those of a traditional block curriculum. However, the WRITE cohort is significantly more likely to enter primary care fields, especially Family Medicine, and is more likely to stay with the specialty declared at matriculation. Medical schools that seek to increase the number of students going into primary care may benefit from adopting a similar model.
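The primary-care match comparison (67.6% vs 48.3%, p<0.001) is the kind of result a two-proportion test produces. The abstract does not report the cohort sizes, so the counts in the sketch below are placeholders, not study data; it only shows how such a comparison could be run.

```python
# Sketch only: a two-proportion z-test comparing primary-care match rates
# between WRITE and non-WRITE cohorts (67.6% vs 48.3%). Cohort sizes are
# placeholders because the abstract does not report them.
from statsmodels.stats.proportion import proportions_ztest

n_write, n_nonwrite = 105, 900            # placeholder cohort sizes
matched = [round(0.676 * n_write), round(0.483 * n_nonwrite)]
totals = [n_write, n_nonwrite]

z, p = proportions_ztest(count=matched, nobs=totals)
print(f"z = {z:.2f}, p = {p:.4f}")
```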


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 104-108 ◽  
Author(s):  
Anthony R. Artino ◽  
Ting Dong ◽  
David F. Cruess ◽  
William R. Gilliland ◽  
Steven J. Durning

ABSTRACT Background: Using a previously developed postgraduate year (PGY)-1 program director's evaluation survey, we developed a parallel form to assess more senior residents (PGY-3). The PGY-3 survey, which aligns with the core competencies established by the Accreditation Council for Graduate Medical Education, also includes items that reflect our institution's military-unique context. Purpose: To collect feasibility, reliability, and validity evidence for the new PGY-3 evaluation. Methods: We collected PGY-3 data from program directors who oversee the education of military residents. The current study's cohort consisted of Uniformed Services University of the Health Sciences students graduating in 2008, 2009, and 2010. We performed exploratory factor analysis (EFA) to examine the internal structure of the survey and subjected each of the factors identified in the EFA to an internal consistency reliability analysis. We then performed correlation analysis to examine the relationships between PGY-3 ratings and several outcomes: PGY-1 ratings, cumulative medical school grade point average (GPA), and performance on U.S. Medical Licensing Examinations (USMLE) Step 1, Step 2 Clinical Knowledge, and Step 3. Results: Of the 510 surveys we distributed, 388 (76%) were returned. Results from the EFA suggested four factors: “Medical Expertise,” “Professionalism,” “Military-unique Practice,” and “Systems-based Practice.” Scores on these four factors showed good internal consistency reliability, as measured by Cronbach's α (α ranged from 0.92 to 0.98). Further, as expected, “Medical Expertise” and “Professionalism” had small to moderate correlations with cumulative medical school GPA and performance on the USMLE Step examinations. Conclusions: The new program director's evaluation survey instrument developed in this study appears to be feasible, and the scores that emerged have reasonable evidence of reliability and validity in a sample of third-year residents.
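The internal-consistency figures quoted above (Cronbach's alpha of 0.92 to 0.98 per factor) come from the standard alpha formula over the items loading on each factor. The snippet below is a minimal sketch on synthetic ratings, not the study's survey data, showing how that statistic is computed.

```python
# Minimal sketch (synthetic data): Cronbach's alpha for a set of survey items,
# the internal-consistency statistic reported for each factor in the abstract.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
# 388 respondents rating 5 items that share a common underlying trait
trait = rng.normal(0, 1, (388, 1))
ratings = trait + rng.normal(0, 0.4, (388, 5))
print(round(cronbach_alpha(ratings), 2))   # high alpha for correlated items
```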


2021 ◽  
Vol 9 (39) ◽  
pp. 63-72
Author(s):  
Winnie Wu ◽  
Katy Garcia ◽  
Sheila Chandrahas ◽  
Arham Siddiqui ◽  
Regina Baronia ◽  
...  

Background Ninety-four percent of program directors cited the USMLE Step 1 score as the most important factor in determining an applicant’s competitiveness for residency. Thus, medical students are motivated to attain the highest possible score. With the recent announcement that Step 1 is switching to a pass/fail standard, it is important to analyze the factors that predict meeting this goal. Objective To investigate the factors that can influence or predict performance on USMLE Step 1. Methods We conducted a systematic literature search of PubMed, Web of Science, Scopus, and ERIC in 2019. The key words were “USMLE”, “Step-1”, “score”, “success” and “predictors.” The search included articles published between 2005 and 2019. Studies that did not focus on Step 1 outcomes or on allopathic medical students in the United States were excluded. Results 275 articles were found, 29 of which met our inclusion criteria. Analysis of these articles demonstrated that predictors of USMLE Step 1 score can be divided into unmodifiable and modifiable factors. Unmodifiable factors include gender, MCAT score, pre-clinical grades, and NBME/CBSE scores. Modifiable factors include taking USMLE Step 1 within two months of completing pre-clinical courses, motivation from anxiety, the number of multiple-choice questions completed, the number of unique Anki cards seen, and complete passes of First Aid for the USMLE Step 1. Conclusion Our review suggests that although students can focus on modifiable factors to increase their score, the energy expenditure required to increase the Step 1 score by one point is unrealistic. This may have influenced the NBME’s decision to change Step 1 to a pass/fail exam.


2020 ◽  
Vol 12 (02) ◽  
pp. e277-e283
Author(s):  
David Cui ◽  
Ingrid U. Scott ◽  
Heidi Luise Wingert

Abstract Purpose This article investigates the perspectives of ophthalmology residency program directors (PDs) regarding the impact of the United States Medical Licensing Examination (USMLE) Step 1 change from graded to pass-fail scoring on ophthalmology resident selection and medical education. Methods The PDs of all United States ophthalmology residency programs accredited by the Accreditation Council for Graduate Medical Education were identified using a public, online database. An anonymous web-based survey constructed using REDCap was emailed to each PD in February 2020. Results Surveys were completed by 64 (54.2%) PDs, with the majority (81.2%) disagreeing with the change to pass-fail scoring. The majority of PDs believe this change will negatively impact the ability to evaluate residency applicants (92.1%) and achieve a fair and meritocratic match process (76.6%), and will decrease medical students' basic science knowledge (75.0%). The factors identified most frequently by PDs as becoming more important in evaluating residency applicants as a result of the Step 1 scoring change include clerkship grades (90.6%), USMLE Step 2 Clinical Knowledge score (84.4%), and a rotation in the PD's department (79.7%). The majority of PDs believe the Step 1 grading change to pass-fail will benefit applicants from elite medical schools (60.9%), and disadvantage applicants from nonelite allopathic schools (82.8%), international medical graduate applicants (76.6%), and osteopathic applicants (54.7%). Conclusion The majority of ophthalmology PDs disagree with the change in USMLE Step 1 scoring from graded to pass-fail and believe this change will negatively impact the ability to evaluate residency applicants and achieve a fair and meritocratic match process, and will decrease medical students' basic science knowledge.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Christina Gillezeau ◽  
Wil Lieberman-Cribbin ◽  
Kristin Bevilacqua ◽  
Julio Ramos ◽  
Naomi Alpert ◽  
...  

Abstract Background Although the value of DACA medical students has been hypothesized, no data are available on their contribution to US healthcare. While the exact number of DACA recipients in medical school is unknown, DACA medical students are projected to represent an increasing proportion of physicians in the future. The current literature has not analyzed the experiences of these students. Methods A mixed-methods study of the career intentions and experiences of DACA medical students was performed utilizing survey data and in-depth interviews. The academic performance of a convenience sample of DACA medical students was compared to that of matriculated medical students from the corresponding medical schools, national averages, and first-year residents according to specialty. Results Thirty-three DACA medical students completed the survey and five participated in a qualitative interview. The average undergraduate GPA (SD) of the DACA medical student sample was 3.7 (0.3), the same as the national average GPA of 2017–2018 matriculated medical students. The most common intended residency programs were Internal Medicine (27.2%), Emergency Medicine (15.2%), and Family Medicine (9.1%). In interviews, DACA students discussed their motivation for pursuing medicine, the barriers and facilitators they faced in attending medical school, their experiences as medical students, and their future plans. Conclusions This sample’s intent to pursue medical specialties in which there is a growing need further exemplifies the unique value of these students. It is vital to protect the status of DACA recipients and realize the contributions that DACA physicians provide to US healthcare.

