Does the MCAT Predict Medical School and PGY-1 Performance?

2015, Vol 180 (suppl_4), pp. 4-11
Author(s): Aaron Saguil, Ting Dong, Robert J. Gingerich, Kimberly Swygert, Jeffrey S. LaRochelle, et al.

ABSTRACT Background: The Medical College Admissions Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice–based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study looked at associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. Methods: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores, USMLE Step 1, Step 2 clinical knowledge and clinical skill, and Step 3 scores, Objective Structured Clinical Examination performance, medical school GPA, and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. Results: MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 clinical knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 clinical skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. Discussion: MCAT scores were weakly to moderately associated with assessments that rely on multiple choice testing. The association was somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT was not able to predict assessments relying on direct clinical observation, nor was it able to predict PD assessment of PGY-1 performance.
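The correlational analysis described above can be illustrated with a minimal Python sketch (not the authors' code); the DataFrame `df` and column names such as `mcat_highest`, `gpa`, `step1`, `step2_ck`, and `step3` are assumptions for demonstration only.

```python
# Minimal sketch of the kind of correlation analysis described in the abstract.
# The DataFrame and column names are hypothetical, not from the study.
import pandas as pd
from scipy import stats

def mcat_correlations(df: pd.DataFrame, mcat_col: str, outcome_cols: list[str]) -> pd.DataFrame:
    """Pearson r and p-value for one MCAT score permutation vs. each outcome."""
    rows = []
    for outcome in outcome_cols:
        pair = df[[mcat_col, outcome]].dropna()  # complete cases only
        r, p = stats.pearsonr(pair[mcat_col], pair[outcome])
        rows.append({"outcome": outcome, "r": r, "p": p})
    return pd.DataFrame(rows)

# Example usage (assumed column names):
# print(mcat_correlations(df, "mcat_highest", ["gpa", "step1", "step2_ck", "step3"]))
```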

2015, Vol 180 (suppl_4), pp. 18-23
Author(s): Steven J. Durning, Ting Dong, Paul A. Hemmer, William R. Gilliland, David F. Cruess, et al.

ABSTRACT Purpose: To determine if there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school, and with scores on standardized tests, whether taken before or during medical school. Methods: Achieving board certification in an American Board of Medical Specialties specialty was used as our outcome measure for a 7-year cohort of graduates (1995–2002). Age at matriculation, Medical College Admissions Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, United States Medical Licensing Examination (USMLE) Step 1 score, and USMLE Step 2 clinical knowledge score were examined for association with this outcome. Results: Ninety-three of 1,155 graduates were not certified, resulting in an average board certification rate of 91.9% for the study cohort. Significant small correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = −0.061). In comparing the two groups (board certified and not board certified), significant differences were seen for all included variables with the exception of MCAT and USMLE Step 2 clinical knowledge scores. Taken together, all the variables explained 4.1% of the variance in board certification by logistic regression. Conclusions: This investigation provides some additional validity evidence that the measures collected for the purpose of student evaluation before and during medical school are warranted.
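A hedged sketch of the logistic-regression step reported above (the "4.1% of the variance" figure) might look like the following; the column names are hypothetical, and since the paper does not state which pseudo-R² it used, McFadden's `prsquared` is only an analogue.

```python
# Illustrative sketch, not the study's code: logistic regression of board
# certification on admission and medical-school measures. Column names are assumed.
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["age_matric", "mcat", "ugpa", "ugpa_sci", "med_gpa",
              "preclerk_gpa", "clerk_gpa", "im_clerk_points", "step1", "step2_ck"]

def fit_certification_model(df: pd.DataFrame):
    data = df[PREDICTORS + ["board_certified"]].dropna()
    X = sm.add_constant(data[PREDICTORS])
    return sm.Logit(data["board_certified"], X).fit(disp=False)

# model = fit_certification_model(df)
# print(model.summary())
# print(model.prsquared)  # McFadden pseudo-R^2; the paper's exact variance metric is unspecified
```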


2021, Vol 21 (1)
Author(s): Amanda C. Filiberto, Lou Ann Cooper, Tyler J. Loftus, Sonja S. Samant, George A. Sarosi, et al.

Abstract Background Residency programs select medical students for interviews and employment using metrics such as the United States Medical Licensing Examination (USMLE) scores, grade-point average (GPA), and class rank/quartile. It is unclear whether these metrics predict performance as an intern. This study tested the hypothesis that performance on these metrics would predict intern performance. Methods This single institution, retrospective cohort analysis included 244 graduates from four classes (2015–2018) who completed an Accreditation Council for Graduate Medical Education (ACGME) certified internship and were evaluated by program directors (PDs) at the end of the year. PDs provided a global assessment rating and ratings addressing ACGME competencies (response rate = 47%) with five response options: excellent = 5, very good = 4, acceptable = 3, marginal = 2, unacceptable = 1. PDs also classified interns as outstanding = 4, above average = 3, average = 2, and below average = 1 relative to other interns from the same residency program. Mean USMLE scores (Step 1 and Step 2CK), third-year GPA, class rank, and core competency ratings were compared using Welch’s ANOVA and follow-up pairwise t-tests. Results Better performance on PD evaluations at the end of intern year was associated with higher USMLE Step 1 (p = 0.006), Step 2CK (p = 0.030), medical school GPA (p = 0.020) and class rank (p = 0.016). Interns rated as average had lower USMLE scores, GPA, and class rank than those rated as above average or outstanding; there were no significant differences between above average and outstanding interns. Higher rating in each of the ACGME core competencies was associated with better intern performance (p < 0.01). Conclusions Better performance as an intern was associated with higher USMLE scores, medical school GPA and class rank. When USMLE Step 1 reporting changes from numeric scores to pass/fail, residency programs can use other metrics to select medical students for interviews and employment.
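The group comparisons described in the Methods above (Welch's ANOVA with follow-up pairwise t-tests) could be sketched roughly as below; this is an illustration under assumptions, with hypothetical columns `pd_rating` (1–4) and `step1`, and pairwise Welch t-tests standing in for the full analysis.

```python
# Sketch only: pairwise Welch (unequal-variance) t-tests comparing USMLE Step 1
# scores across program-director rating groups. Columns are hypothetical.
from itertools import combinations
import pandas as pd
from scipy import stats

def pairwise_welch(df: pd.DataFrame, group_col: str, value_col: str) -> pd.DataFrame:
    results = []
    groups = sorted(df[group_col].dropna().unique())
    for g1, g2 in combinations(groups, 2):
        a = df.loc[df[group_col] == g1, value_col].dropna()
        b = df.loc[df[group_col] == g2, value_col].dropna()
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
        results.append({"group_1": g1, "group_2": g2, "t": t, "p": p})
    return pd.DataFrame(results)

# print(pairwise_welch(df, "pd_rating", "step1"))
```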


2014, Vol 38 (4), pp. 315-320
Author(s): Teresa R. Johnson, Mohammed K. Khalil, Richard D. Peppler, Diane D. Davey, Jonathan D. Kibble

In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical school. The CBSE is a practice exam for the United States Medical Licensing Examination (USMLE) Step 1 and is purchased directly from the NBME. We administered the CBSE five times during the first 2 yr of medical school. Student scores were compared with scores on newly created internal summative exams and with USMLE Step 1 scores. Significant correlations were observed between almost all our internal exams and CBSE scores over time as well as with USMLE Step 1 scores. The strength of the correlations between internal exams and both the CBSE and USMLE Step 1 broadly increased over time during the curriculum. Student scores on courses that have a strong emphasis on physiology and pathophysiology correlated particularly well with USMLE Step 1 scores. Student progress, as measured by the CBSE, was found to be linear across time, and test performance fell behind the anticipated level by the end of the formal curriculum. These findings are discussed with respect to student learning behaviors. In conclusion, the CBSE was found to have good utility as a progress test and provided external validation of our new internally developed multiple-choice assessments. The data also provide performance benchmarks both for our future students to formatively assess their own progress and for other medical schools to compare learning progression patterns in different curricular models.
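A simple way to examine the "linear progress across time" claim above is to regress CBSE scores on administration number; the sketch below assumes hypothetical columns `cbse_score` and `admin_number` (1–5) and is not the authors' analysis.

```python
# Minimal sketch: linear trend of CBSE scores across the five administrations.
# DataFrame and column names are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

def cbse_linear_trend(df: pd.DataFrame):
    model = smf.ols("cbse_score ~ admin_number", data=df).fit()
    # Slope = average score gain per administration; R^2 indicates how well a line fits.
    return model.params["admin_number"], model.rsquared

# slope, r_squared = cbse_linear_trend(df)
```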


2013, Vol 37 (4), pp. 370-376
Author(s): Andrew R. Thompson, Mark W. Braun, Valerie D. O'Loughlin

Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in the review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school pathology course. We investigated this by comparing pathology exam scores between students exposed to traditional discipline-specific instruction and exams (DSE) and students exposed to integrated instruction and exams (IE). Exam content was controlled, and individual questions were evaluated using a modified version of Bloom's taxonomy. Additionally, we compared United States Medical Licensing Examination (USMLE) Step 1 scores between the DSE and IE groups. Our findings indicate that DSE students performed better than IE students on complete pathology exams. However, when exam content was controlled, exam scores were equivalent between groups. We also discovered that the integrated exams were composed of a significantly greater proportion of questions classified at the higher levels of Bloom's taxonomy and that IE students performed better on these questions overall. USMLE Step 1 scores were similar between groups. The finding of a significant difference in content complexity between discipline-specific and integrated exams adds to recent literature indicating that there are a number of potential biases related to curricular comparison studies that must be considered. Future investigations involving larger sample sizes and multiple disciplines should be performed to explore this matter further.
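The comparison of Bloom's-taxonomy levels between exam types described above could be tested with a chi-square test of proportions, as in the hedged sketch below; the question counts are parameters to be supplied, not values from the study.

```python
# Sketch (not the authors' analysis): chi-square test of whether integrated exams
# contain a larger proportion of higher-order Bloom's taxonomy questions.
from scipy.stats import chi2_contingency

def bloom_proportion_test(dse_counts, ie_counts):
    """Each argument is (higher_order_count, lower_order_count) for one exam type."""
    chi2, p, dof, _ = chi2_contingency([list(dse_counts), list(ie_counts)])
    return chi2, p

# chi2, p = bloom_proportion_test(dse_counts=(n1, n2), ie_counts=(n3, n4))  # counts hypothetical
```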


2018, Vol 129 (2), pp. 282-289
Author(s): Susan R. Durham, Katelyn Donaldson, M. Sean Grady, Deborah L. Benzil

OBJECTIVE With nearly half of graduating US medical students being female, it is imperative to understand why females typically make up less than 20% of the neurosurgery applicant pool, a number that has changed very slowly over the past several decades. Organized neurosurgery has strongly indicated the desire to overcome the underrepresentation of women, and it is critical to explore whether females are at a disadvantage during the residency application process, one of the first steps in a neurosurgical career. To date, there are no published studies on specific applicant characteristics, including gender, that are associated with match outcome among neurosurgery resident applicants. The purpose of this study is to determine which characteristics of neurosurgery residency applicants, including gender, are associated with a successful match outcome. METHODS De-identified neurosurgical resident applicant data obtained from the San Francisco Fellowship and Residency Matching Service for the years 1990–2007 were analyzed. Applicant characteristics including gender, medical school attended, year of application, United States Medical Licensing Examination (USMLE) Step 1 score, Alpha Omega Alpha (AOA) status, and match outcome were available for study. RESULTS Of the total 3426 applicants studied, 473 (13.8%) applicants were female and 2953 (86.2%) were male. Two thousand four hundred forty-eight (71.5%) applicants successfully matched. USMLE Step 1 score was the strongest predictor of match outcome, with scores > 245 having an OR of 20.84 (95% CI 10.31–42.12) compared with those scoring < 215. The mean USMLE Step 1 score was 233.2 for applicants who successfully matched and 210.8 for those who did not match (p < 0.001). Medical school rank was also associated with match outcome (p < 0.001). AOA status was not significantly associated with match outcome. Female gender was associated with significantly lower odds of matching in both simple (OR 0.59, 95% CI 0.48–0.72) and multivariate (OR 0.57, 95% CI 0.34–0.94) analyses. USMLE Step 1 scores were significantly lower for females than for males, with a mean score of 230.1 for males and 221.5 for females (p < 0.001). There was no significant difference in medical school ranking or AOA status when stratified by applicant gender. CONCLUSIONS The limited historical applicant data from 1990–2007 suggest that USMLE Step 1 score is the best predictor of match outcome, although applicant gender may also play a role.
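The multivariate result reported above (OR 0.57 for female gender) corresponds to exponentiated coefficients from a logistic regression; a rough sketch, with assumed column names (`matched`, `step1_band`, `gender`, `school_rank`), follows.

```python
# Illustrative sketch only: multivariable logistic regression of match outcome,
# reporting odds ratios and 95% confidence intervals. Column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def match_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    # 'matched' is 0/1; predictors are treated as categorical.
    model = smf.logit("matched ~ C(step1_band) + C(gender) + C(school_rank)",
                      data=df).fit(disp=False)
    ci = model.conf_int()  # columns 0 and 1 hold the lower/upper bounds
    return pd.DataFrame({
        "odds_ratio": np.exp(model.params),
        "ci_lower": np.exp(ci[0]),
        "ci_upper": np.exp(ci[1]),
    })

# print(match_odds_ratios(df))
```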


2019, Vol 29 (4), pp. 1141-1145
Author(s): Gary L. Beck Dallaghan, Julie Story Byerley, Neva Howard, William C. Bennett, Kurt O. Gilliland
