Medical School Clinical Knowledge Exam Scores, Not Demographic or Other Factors, Associated With Residency In-Training Exam Performance

2021 ◽  
Author(s):  
Michael S Patzkowski ◽  
Joshua M Hauser ◽  
Mark Liu ◽  
Germaine F Herrera ◽  
Krista B Highland ◽  
...  

ABSTRACT Background The anesthesiology in-training exam (ITE) is a 200-item multiple-choice assessment completed annually by physician residents. Because all matriculated U.S. Department of Defense (DoD) anesthesiology residents are “hired” by the DoD after residency graduation, it is important to ensure that ITE performance, as a proxy for achievement of the core competencies, is maximized. Methods Graduated resident program files from 2013 to 2020 were queried for age, sex, matriculant status (medical student vs. other), medical school (Uniformed Services University vs. other), military service (Army vs. Air Force), preresidency military service (yes vs. no), U.S. Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) score, and the American Board of Anesthesiology ITE score from the third clinical anesthesia year (CA-3 year). Results For every 1-point increase in USMLE Step 2 CK true z-score, the CA-3 ITE z-score increased by 0.59 points. Age was not associated with CA-3 ITE z-score in any of the regression models. Categorical covariates of sex, application status, medical school, service, and preresidency military service were not significantly associated with CA-3 ITE z-score (all P > .05), as shown by estimated adjusted marginal means. The estimated adjusted grand mean of CA-3 ITE z-scores was 0.48 (standard error 0.14). Conclusion Resident physicians enter residency with varying degrees of past academic success, and it is important to develop early strategies to support them in acquiring the requisite knowledge base.
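A minimal sketch of the kind of z-score regression described in this abstract, for readers who want to see the analysis pattern concretely. The data frame, column names, and covariate set are hypothetical illustrations, not the study's actual data or model specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical resident-level records; values and columns are illustrative only.
df = pd.DataFrame({
    "usmle_step2_ck": [238, 245, 252, 230, 260, 241, 255, 248],
    "ca3_ite":        [32, 35, 38, 30, 41, 34, 39, 36],
    "age":            [28, 30, 27, 31, 29, 33, 28, 30],
})

# Standardize predictor and outcome so the coefficient is on the z-score scale,
# mirroring the "1-point increase in z-score" interpretation in the abstract.
for col in ["usmle_step2_ck", "ca3_ite"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

model = smf.ols("ca3_ite_z ~ usmle_step2_ck_z + age", data=df).fit()
print(model.summary())  # slope on usmle_step2_ck_z is the z-score effect
```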

2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 4-11 ◽  
Author(s):  
Aaron Saguil ◽  
Ting Dong ◽  
Robert J. Gingerich ◽  
Kimberly Swygert ◽  
Jeffrey S. LaRochelle ◽  
...  

ABSTRACT Background: The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice–based assessments, there is little information on whether the MCAT predicts clinically based assessments of undergraduate and graduate medical education performance. This study examined associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. Methods: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 Clinical Knowledge, Step 2 Clinical Skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. Results: MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 Clinical Knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 Clinical Skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. Discussion: MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association was somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT did not predict assessments relying on direct clinical observation, nor did it predict PD assessment of PGY-1 performance.
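A minimal sketch of how pairwise associations like these are commonly quantified (Pearson correlations of MCAT scores against each continuous outcome). The data frame and column names are hypothetical; the study's actual analysis may have differed:

```python
import pandas as pd
from scipy import stats

# Hypothetical student-level records; columns and values are illustrative only.
df = pd.DataFrame({
    "mcat_highest": [30, 33, 28, 35, 31, 29, 34, 32],
    "gpa":          [3.5, 3.7, 3.2, 3.9, 3.6, 3.3, 3.8, 3.4],
    "step1":        [215, 230, 205, 245, 225, 210, 240, 220],
    "step2_ck":     [220, 235, 210, 250, 228, 215, 242, 225],
})

# Pearson correlation (with p-value) of MCAT against each outcome measure.
for outcome in ["gpa", "step1", "step2_ck"]:
    r, p = stats.pearsonr(df["mcat_highest"], df[outcome])
    print(f"MCAT vs {outcome}: r = {r:.2f}, p = {p:.3f}")
```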


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Amanda C. Filiberto ◽  
Lou Ann Cooper ◽  
Tyler J. Loftus ◽  
Sonja S. Samant ◽  
George A. Sarosi ◽  
...  

Abstract Background Residency programs select medical students for interviews and employment using metrics such as United States Medical Licensing Examination (USMLE) scores, grade point average (GPA), and class rank/quartile. It is unclear whether these metrics predict performance as an intern. This study tested the hypothesis that performance on these metrics would predict intern performance. Methods This single-institution, retrospective cohort analysis included 244 graduates from four classes (2015–2018) who completed an Accreditation Council for Graduate Medical Education (ACGME)-accredited internship and were evaluated by program directors (PDs) at the end of the year. PDs provided a global assessment rating and ratings addressing ACGME competencies (response rate = 47%) with five response options: excellent = 5, very good = 4, acceptable = 3, marginal = 2, unacceptable = 1. PDs also classified interns as outstanding = 4, above average = 3, average = 2, and below average = 1 relative to other interns from the same residency program. Mean USMLE scores (Step 1 and Step 2 CK), third-year GPA, class rank, and core competency ratings were compared using Welch’s ANOVA and follow-up pairwise t-tests. Results Better performance on PD evaluations at the end of intern year was associated with higher USMLE Step 1 (p = 0.006), Step 2 CK (p = 0.030), medical school GPA (p = 0.020) and class rank (p = 0.016). Interns rated as average had lower USMLE scores, GPA, and class rank than those rated as above average or outstanding; there were no significant differences between above average and outstanding interns. Higher rating in each of the ACGME core competencies was associated with better intern performance (p < 0.01). Conclusions Better performance as an intern was associated with higher USMLE scores, medical school GPA and class rank. When USMLE Step 1 reporting changes from numeric scores to pass/fail, residency programs can use other metrics to select medical students for interviews and employment.
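A minimal sketch of the pairwise comparison step described above, using Welch-type (unequal-variance) t-tests between PD rating groups. The group labels and scores are hypothetical, and the omnibus Welch ANOVA would be run separately with a dedicated statistics package:

```python
from itertools import combinations
from scipy import stats

# Hypothetical USMLE Step 1 scores grouped by the PD's global rating.
scores_by_rating = {
    "average":       [215, 220, 212, 225, 218],
    "above_average": [228, 232, 226, 235, 230],
    "outstanding":   [238, 242, 236, 245, 240],
}

# Pairwise Welch t-tests (unequal variances), as in the follow-up comparisons.
for g1, g2 in combinations(scores_by_rating, 2):
    t, p = stats.ttest_ind(scores_by_rating[g1], scores_by_rating[g2],
                           equal_var=False)
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f}")
```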


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 18-23 ◽  
Author(s):  
Steven J. Durning ◽  
Ting Dong ◽  
Paul A. Hemmer ◽  
William R. Gilliland ◽  
David F. Cruess ◽  
...  

ABSTRACT Purpose: To determine whether there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school, and with scores on standardized tests, whether before or during medical school. Methods: Achieving board certification in an American Board of Medical Specialties specialty was used as our outcome measure for a 7-year cohort of graduates (1995–2002). Age at matriculation, Medical College Admission Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, United States Medical Licensing Examination (USMLE) Step 1 score, and USMLE Step 2 Clinical Knowledge score were examined for associations with this outcome. Results: Ninety-three of 1,155 graduates were not certified, yielding an overall board certification rate of 91.9% for the study cohort. Small but significant correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = −0.061). In comparing the board certified and not board certified groups, significant differences were seen for all included variables with the exception of MCAT and USMLE Step 2 Clinical Knowledge scores. Taken together, the variables explained 4.1% of the variance in board certification by logistic regression. Conclusions: This investigation provides additional validity evidence supporting the measures collected for student evaluation before and during medical school.
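A minimal sketch of the two analysis steps this abstract reports: a point-biserial correlation between a continuous measure and the binary certification outcome, and a logistic regression whose pseudo R-squared corresponds to the "variance explained" figure. The simulated data, variable names, and effect sizes are assumptions for illustration only:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: binary board-certification outcome and one predictor.
certified = rng.binomial(1, 0.92, size=200)              # ~92% certified
clerkship_points = rng.normal(75, 8, size=200) + 2 * certified

# Point-biserial correlation between the binary outcome and a continuous measure.
r, p = stats.pointbiserialr(certified, clerkship_points)
print(f"point-biserial r = {r:.3f}, p = {p:.3f}")

# Logistic regression; McFadden's pseudo R-squared approximates "variance explained".
X = sm.add_constant(clerkship_points)
logit = sm.Logit(certified, X).fit(disp=False)
print(f"pseudo R-squared = {logit.prsquared:.3f}")
```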


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 104-108 ◽  
Author(s):  
Anthony R. Artino ◽  
Ting Dong ◽  
David F. Cruess ◽  
William R. Gilliland ◽  
Steven J. Durning

ABSTRACT Background: Using a previously developed postgraduate year (PGY)-1 program director's evaluation survey, we developed a parallel form to assess more senior residents (PGY-3). The PGY-3 survey, which aligns with the core competencies established by the Accreditation Council for Graduate Medical Education, also includes items that reflect our institution's military-unique context. Purpose: To collect feasibility, reliability, and validity evidence for the new PGY-3 evaluation. Methods: We collected PGY-3 data from program directors who oversee the education of military residents. The current study's cohort consisted of Uniformed Services University of the Health Sciences students graduating in 2008, 2009, and 2010. We performed exploratory factor analysis (EFA) to examine the internal structure of the survey and subjected each of the factors identified in the EFA to an internal consistency reliability analysis. We then performed correlation analysis to examine the relationships between PGY-3 ratings and several outcomes: PGY-1 ratings, cumulative medical school grade point average (GPA), and performance on U.S. Medical Licensing Examinations (USMLE) Step 1, Step 2 Clinical Knowledge, and Step 3. Results: Of the 510 surveys we distributed, 388 (76%) were returned. Results from the EFA suggested four factors: “Medical Expertise,” “Professionalism,” “Military-unique Practice,” and “Systems-based Practice.” Scores on these four factors showed good internal consistency reliability, as measured by Cronbach's α (α ranged from 0.92 to 0.98). Further, as expected, “Medical Expertise” and “Professionalism” had small to moderate correlations with cumulative medical school GPA and performance on the USMLE Step examinations. Conclusions: The new program director's evaluation survey instrument developed in this study appears to be feasible, and the scores that emerged have reasonable evidence of reliability and validity in a sample of third-year residents.
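A minimal sketch of the internal consistency step described above, computing Cronbach's alpha from item-level ratings. The rating matrix is hypothetical, and the factor-analysis details of the study are not reproduced here:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings: 6 residents x 4 items on one survey factor.
ratings = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 3],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```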


Author(s):  
Rachel B. Levine ◽  
Andrew P. Levy ◽  
Robert Lubin ◽  
Sarah Halevi ◽  
Rebeca Rios ◽  
...  

Purpose: United States (US) and Canadian citizens attending medical school abroad often desire to return to the US for residency, and therefore must pass US licensing exams. We describe a 2-day United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) preparation course for students in the Technion American Medical School program (Haifa, Israel) between 2012 and 2016. Methods: Students completed pre- and post-course questionnaires. The paired t-test was used to compare students’ perceptions of knowledge, preparation, confidence, and competence in CS pre- and post-course. To test for differences by gender or country of birth, analysis of variance was used. We compared USMLE Step 2 CS pass rates between the 5 years prior to the course and the 5 years during which the course was offered. Results: Ninety students took the course between 2012 and 2016. Course evaluations began in 2013. Seventy-three students agreed to participate in the evaluation, and 64 completed the pre- and post-course surveys. Of the 64 students, 58% were US-born and 53% were male. Students reported statistically significant improvements in confidence and competence in all areas. No differences were found by gender or country of origin. The average pass rate for the 5 years prior to the course was 82%, and the average pass rate for the 5 years of the course was 89%. Conclusion: A CS course delivered at an international medical school may help to close the gap between the pass rates of US and international medical graduates on a high-stakes licensing exam. More experience is needed to determine if this model is replicable.
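A minimal sketch of the pre/post comparison described above, using a paired t-test on hypothetical self-rated confidence scores (1-5 scale); the actual survey items and scales are not reproduced:

```python
from scipy import stats

# Hypothetical self-rated confidence (1-5) for the same students pre and post course.
pre  = [2, 3, 2, 3, 2, 4, 3, 2, 3, 2]
post = [4, 4, 3, 5, 4, 5, 4, 3, 4, 4]

t, p = stats.ttest_rel(pre, post)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```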


2013 ◽  
Vol 37 (4) ◽  
pp. 370-376 ◽  
Author(s):  
Andrew R. Thompson ◽  
Mark W. Braun ◽  
Valerie D. O'Loughlin

Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in the review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school pathology course. We investigated this by comparing pathology exam scores between students exposed to traditional discipline-specific instruction and exams (DSE) and students exposed to integrated instruction and exams (IE). Exam content was controlled, and individual questions were evaluated using a modified version of Bloom's taxonomy. Additionally, we compared United States Medical Licensing Examination (USMLE) Step 1 scores between the DSE and IE groups. Our findings indicate that DSE students performed better than IE students on complete pathology exams. However, when exam content was controlled, exam scores were equivalent between groups. We also found that the integrated exams contained a significantly greater proportion of questions classified at the higher levels of Bloom's taxonomy and that IE students performed better on these questions overall. USMLE Step 1 scores were similar between groups. The finding of a significant difference in content complexity between discipline-specific and integrated exams adds to recent literature indicating that a number of potential biases related to curricular comparison studies must be considered. Future investigations involving larger sample sizes and multiple disciplines should be performed to explore this matter further.
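A minimal sketch of the two comparisons described above: exam scores between the DSE and IE cohorts, and the proportion of higher-order Bloom's-level questions on each exam type. All counts and scores are hypothetical:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical pathology exam percentages for the two cohorts.
dse_scores = [82, 85, 78, 88, 80, 84, 79, 86]
ie_scores  = [78, 80, 75, 84, 77, 81, 76, 82]
t, p = stats.ttest_ind(dse_scores, ie_scores, equal_var=False)
print(f"score comparison: t = {t:.2f}, p = {p:.3f}")

# Proportion of questions at higher Bloom's levels, by exam type (DSE vs. IE).
higher_order = np.array([30, 55])    # counts of higher-order questions
total_items  = np.array([100, 100])  # total questions per exam type
z, p = proportions_ztest(higher_order, total_items)
print(f"Bloom's-level comparison: z = {z:.2f}, p = {p:.3f}")
```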


2004 ◽  
Vol 79 (Supplement) ◽  
pp. S43-S45 ◽  
Author(s):  
Monica M. Cuddy ◽  
Gerard F. Dillon ◽  
Brian E. Clauser ◽  
Kathleen Z. Holtzman ◽  
Melissa J. Margolis ◽  
...  

2014 ◽  
Vol 36 (11) ◽  
pp. 978-982 ◽  
Author(s):  
Yasin Farrokhi-Khajeh-Pasha ◽  
Saharnaz Nedjat ◽  
Aeen Mohammadi ◽  
Elaheh Malakan Rad ◽  
Reza Majdzadeh
