Validity of Four Approaches of Using Repeaters' MCAT Scores in Medical School Admissions to Predict USMLE Step 1 Total Scores

2010 ◽  
Vol 85 ◽  
pp. S64-S67 ◽  
Author(s):  
Xiaohui Zhao ◽  
Scott Oppler ◽  
Dana Dunleavy ◽  
Marc Kroopnick

2013 ◽
Vol 37 (4) ◽  
pp. 370-376 ◽  
Author(s):  
Andrew R. Thompson ◽  
Mark W. Braun ◽  
Valerie D. O'Loughlin

Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in the review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school pathology course. We investigated this by comparing pathology exam scores between students exposed to traditional discipline-specific instruction and exams (DSE) versus integrated instruction and exams (IE). Exam content was controlled, and individual questions were evaluated using a modified version of Bloom's taxonomy. Additionally, we compared United States Medical Licensing Examination (USMLE) Step 1 scores between the DSE and IE groups. Our findings indicate that DSE students performed better than IE students on complete pathology exams. However, when exam content was controlled, exam scores were equivalent between groups. We also discovered that the integrated exams were composed of a significantly greater proportion of questions classified at the higher levels of Bloom's taxonomy and that IE students performed better on these questions overall. USMLE Step 1 scores were similar between groups. The finding of a significant difference in content complexity between discipline-specific and integrated exams adds to recent literature indicating that a number of potential biases related to curricular comparison studies must be considered. Future investigations involving larger sample sizes and multiple disciplines should be performed to explore this matter further.
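For readers who want to reproduce this kind of exam-content comparison, the proportion of higher-order Bloom's-level questions on each exam type can be tested with a standard chi-square test of a 2x2 table; the sketch below uses hypothetical counts, not the study's data.

```python
# Sketch of the exam-content comparison described above: testing whether
# integrated exams carry a higher proportion of higher-order Bloom's-level
# questions. Counts are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

# rows: exam type (DSE, IE); columns: (higher-order, lower-order) questions
table = [[30, 70],   # discipline-specific exams (hypothetical)
         [55, 45]]   # integrated exams (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```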


2018 ◽  
Vol 129 (2) ◽  
pp. 282-289 ◽  
Author(s):  
Susan R. Durham ◽  
Katelyn Donaldson ◽  
M. Sean Grady ◽  
Deborah L. Benzil

OBJECTIVE With nearly half of graduating US medical students being female, it is imperative to understand why females typically make up less than 20% of the neurosurgery applicant pool, a number that has changed very slowly over the past several decades. Organized neurosurgery has strongly indicated the desire to overcome the underrepresentation of women, and it is critical to explore whether females are at a disadvantage during the residency application process, one of the first steps in a neurosurgical career. To date, there are no published studies on specific applicant characteristics, including gender, that are associated with match outcome among neurosurgery resident applicants. The purpose of this study is to determine which characteristics of neurosurgery residency applicants, including gender, are associated with a successful match outcome. METHODS De-identified neurosurgical resident applicant data obtained from the San Francisco Fellowship and Residency Matching Service for the years 1990–2007 were analyzed. Applicant characteristics including gender, medical school attended, year of application, United States Medical Licensing Examination (USMLE) Step 1 score, Alpha Omega Alpha (AOA) status, and match outcome were available for study. RESULTS Of the 3426 applicants studied, 473 (13.8%) were female and 2953 (86.2%) were male. A total of 2448 (71.5%) applicants successfully matched. USMLE Step 1 score was the strongest predictor of match outcome, with scores > 245 having an OR of 20.84 (95% CI 10.31–42.12) compared with scores < 215. The mean USMLE Step 1 score was 233.2 for applicants who successfully matched and 210.8 for those who did not (p < 0.001). Medical school rank was also associated with match outcome (p < 0.001). AOA status was not significantly associated with match outcome. Female gender was associated with significantly lower odds of matching in both simple (OR 0.59, 95% CI 0.48–0.72) and multivariate (OR 0.57, 95% CI 0.34–0.94) analyses. USMLE Step 1 scores were significantly lower for females than for males, with mean scores of 221.5 and 230.1, respectively (p < 0.001). There was no significant difference in medical school ranking or AOA status when stratified by applicant gender. CONCLUSIONS The limited historical applicant data from 1990–2007 suggest that USMLE Step 1 score is the best predictor of match outcome, although applicant gender may also play a role.
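The simple and multivariate odds ratios above come from logistic regression models of match outcome on applicant characteristics. The sketch below illustrates the general approach on synthetic data; the variable names, effect sizes, and model specification are hypothetical, not the study's.

```python
# Sketch of a multivariate analysis like the one described above: logistic
# regression of match outcome on Step 1 score and gender, reporting odds
# ratios with 95% CIs. Data are synthetic, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
step1 = rng.normal(230, 12, n)
female = rng.binomial(1, 0.14, n)
# synthetic outcome loosely shaped like the reported associations
logit = 0.08 * (step1 - 230) - 0.5 * female + 1.0
matched = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([step1, female]))
fit = sm.Logit(matched, X).fit(disp=0)
odds_ratios = np.exp(fit.params)   # const, Step 1 (per point), female
ci = np.exp(fit.conf_int())        # 95% CIs on the odds-ratio scale
print(odds_ratios, ci, sep="\n")
```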


2019 ◽  
Vol 29 (4) ◽  
pp. 1141-1145 ◽  
Author(s):  
Gary L. Beck Dallaghan ◽  
Julie Story Byerley ◽  
Neva Howard ◽  
William C. Bennett ◽  
Kurt O. Gilliland

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ling Wang ◽  
Heather S. Laird-Fick ◽  
Carol J. Parker ◽  
David Solomon

Abstract Background Medical students must meet curricular expectations and pass national licensing examinations to become physicians. However, no previous studies have explicitly modeled the stages through which medical students acquire basic science knowledge. In this study, we employed an innovative statistical model to characterize students' growth using progress testing results over time and to predict licensing examination performance. Methods All students who matriculated at our medical school in 2016 or 2017 and had USMLE Step 1 scores were included in this retrospective cohort study (N = 358). A Markov chain method was employed to 1) identify latent states of acquiring scientific knowledge based on progress tests and 2) estimate students' transition probabilities between states. The primary outcome of this study, United States Medical Licensing Examination (USMLE) Step 1 performance, was predicted based on students' estimated probabilities of being in each latent state identified by the Markov chain model. Results Four latent states were identified based on students' progress test results: Novice, Advanced Beginner I, Advanced Beginner II, and Competent. At the end of the first year, students predicted to remain in the Novice state had lower mean Step 1 scores than those in the Competent state (209, SD = 14.8 versus 255, SD = 10.8, respectively) and more first-attempt failures (11.5% versus 0%). Regression analysis showed that, at the end of the first year, a 10% higher probability of remaining in the Novice state predicted a Step 1 score 2.0 points lower (95% CI: 0.85–2.81, P < .01), while a 10% higher probability of being in the Competent state predicted a score 4.3 points higher (95% CI: 2.92–5.19, P < .01). Similar findings were observed at the end of the second year of medical school. Conclusions Using the Markov chain model to analyze longitudinal progress test performance offers a flexible and effective estimation method for identifying students' transitions across latent stages of acquiring scientific knowledge. The results can help identify students who are at risk of licensing examination failure and may benefit from targeted academic support.
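The study's latent-state estimation is more involved than can be shown here, but the core Markov chain step, estimating transition probabilities between knowledge states from longitudinal sequences, reduces to counting observed transitions and row-normalizing. A minimal sketch, assuming states (0 = Novice through 3 = Competent) have already been assigned at each progress test:

```python
# Minimal sketch of the Markov chain step described above: estimating a
# transition probability matrix from longitudinal state sequences. States
# are assumed already assigned; sequences below are hypothetical.
import numpy as np

# one row per student, one column per progress test administration
sequences = np.array([
    [0, 0, 1, 2],
    [0, 1, 1, 2],
    [1, 2, 3, 3],
    [0, 1, 2, 3],
])

k = 4  # number of states
counts = np.zeros((k, k))
for row in sequences:
    for a, b in zip(row[:-1], row[1:]):
        counts[a, b] += 1  # tally each observed state-to-state transition

# row-normalize counts into a transition probability matrix
transition = counts / counts.sum(axis=1, keepdims=True)
print(transition.round(2))
```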


2010 ◽  
Vol 2 (3) ◽  
pp. 316-321 ◽  
Author(s):  
Jeremy R. Rinard ◽  
Ben D. Garol ◽  
Ashvin B. Shenoy ◽  
Raman C. Mahabir

Abstract Objective We explored the impact that attributes of US medical school seniors have on their success in matching to a surgical residency, in order to analyze trends in National Resident Matching Program (NRMP) match outcomes in surgical specialties between 2007 and 2009. Methods Using Electronic Residency Application Service data and NRMP outcomes, we analyzed medical students' attributes and their effect on successfully matching into residency positions in surgery, otolaryngology, orthopedic surgery, plastic surgery, and obstetrics and gynecology. Attributes analyzed included self-reported United States Medical Licensing Examination (USMLE) Step 1 and Step 2 scores, Alpha Omega Alpha (AOA) Honor Medical Society membership, research experience, additional graduate degree status, and graduation from a top 40 National Institutes of Health (NIH)–funded medical school. Odds ratios were calculated for each criterion, and 95% confidence intervals were used to determine significance. Results Between 2007 and 2009, the number of surgical specialty residency positions increased by 86, and the number of applicants increased by 34. AOA membership, USMLE Step 1 and Step 2 scores, research experience, and graduation from a top 40 NIH-funded medical school frequently had a significant impact on applicants successfully matching into many specialties, while additional graduate degrees had no effect on matching into surgical specialties (odds ratio range, 0.64 to 1.2). Conclusions Although the statistical significance varied across specialties, higher USMLE Step 1 and Step 2 scores, AOA membership, research experience, and graduation from a top 40 NIH-funded medical school generally had a positive impact on match success to surgical residency for US allopathic seniors. Test preparation and seeking research experience during undergraduate medical education may be effective approaches to increasing the likelihood that US seniors match into a surgical specialty.
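Each attribute's odds ratio and 95% confidence interval can be computed from a 2x2 table of attribute by match outcome, with a CI excluding 1 indicating significance, as used above. A minimal sketch with hypothetical counts:

```python
# Sketch of the odds-ratio calculation used above for one applicant
# attribute, with a Woolf (log-based) 95% CI. Counts are hypothetical.
import math

# 2x2 table for one attribute, e.g. AOA membership vs. match outcome
a, b = 120, 30   # attribute present: matched, unmatched (hypothetical)
c, d = 300, 150  # attribute absent:  matched, unmatched (hypothetical)

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
log_or = math.log(or_)
lo = math.exp(log_or - 1.96 * se)
hi = math.exp(log_or + 1.96 * se)
# CI excluding 1 indicates a significant association
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```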


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 4-11 ◽  
Author(s):  
Aaron Saguil ◽  
Ting Dong ◽  
Robert J. Gingerich ◽  
Kimberly Swygert ◽  
Jeffrey S. LaRochelle ◽  
...  

ABSTRACT Background: The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice–based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study examined associations between MCAT scores and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. Methods: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 clinical knowledge, Step 2 clinical skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessments of physician performance for students graduating in 2010 and 2011. Results: MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 clinical knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 clinical skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. Discussion: MCAT scores were weakly to moderately associated with assessments that rely on multiple choice testing. The association was somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT did not predict assessments relying on direct clinical observation, nor did it predict PD assessments of PGY-1 performance.
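The associations reported here are correlations between continuous scores. A minimal sketch of one such pairing on synthetic data (the scale and effect size are illustrative only):

```python
# Sketch of the association analysis described above: Pearson correlation
# between MCAT scores and a downstream outcome (here, Step 1 scores).
# Data are synthetic illustrations, not the study's.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
mcat = rng.normal(30, 3, 300)                      # pre-2015 MCAT scale
step1 = 150 + 2.2 * mcat + rng.normal(0, 15, 300)  # weak-to-moderate signal

r, p = pearsonr(mcat, step1)
print(f"r = {r:.2f}, p = {p:.4f}")  # |r| around 0.3-0.5 reads as moderate
```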


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Amanda C. Filiberto ◽  
Lou Ann Cooper ◽  
Tyler J. Loftus ◽  
Sonja S. Samant ◽  
George A. Sarosi ◽  
...  

Abstract Background Residency programs select medical students for interviews and employment using metrics such as United States Medical Licensing Examination (USMLE) scores, grade point average (GPA), and class rank/quartile. It is unclear whether these metrics predict performance as an intern. This study tested the hypothesis that performance on these metrics would predict intern performance. Methods This single-institution, retrospective cohort analysis included 244 graduates from four classes (2015–2018) who completed an Accreditation Council for Graduate Medical Education (ACGME)-accredited internship and were evaluated by program directors (PDs) at the end of the year. PDs provided a global assessment rating and ratings addressing ACGME competencies (response rate = 47%) with five response options: excellent = 5, very good = 4, acceptable = 3, marginal = 2, unacceptable = 1. PDs also classified interns as outstanding = 4, above average = 3, average = 2, and below average = 1 relative to other interns from the same residency program. Mean USMLE scores (Step 1 and Step 2 CK), third-year GPA, class rank, and core competency ratings were compared using Welch's ANOVA and follow-up pairwise t-tests. Results Better performance on PD evaluations at the end of intern year was associated with higher USMLE Step 1 scores (p = 0.006), Step 2 CK scores (p = 0.030), medical school GPA (p = 0.020), and class rank (p = 0.016). Interns rated as average had lower USMLE scores, GPA, and class rank than those rated as above average or outstanding; there were no significant differences between above average and outstanding interns. Higher ratings in each of the ACGME core competencies were associated with better intern performance (p < 0.01). Conclusions Better performance as an intern was associated with higher USMLE scores, medical school GPA, and class rank. When USMLE Step 1 reporting changes from numeric scores to pass/fail, residency programs can use these other metrics to select medical students for interviews and employment.
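The follow-up pairwise comparisons between PD rating groups can be run as Welch's (unequal-variance) t-tests. A minimal sketch on synthetic groups, with hypothetical means loosely echoing the pattern reported above:

```python
# Sketch of the follow-up pairwise comparisons described above, using
# Welch's unequal-variance t-test between PD rating groups. Data are
# synthetic, not the study's.
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
groups = {
    "average":       rng.normal(225, 12, 60),  # hypothetical Step 1 scores
    "above average": rng.normal(236, 12, 60),
    "outstanding":   rng.normal(239, 12, 40),
}

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")
```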


2015 ◽  
Vol 180 (suppl_4) ◽  
pp. 18-23 ◽  
Author(s):  
Steven J. Durning ◽  
Ting Dong ◽  
Paul A. Hemmer ◽  
William R. Gilliland ◽  
David F. Cruess ◽  
...  

ABSTRACT Purpose: To determine whether there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence extending into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school and with scores on standardized tests, whether taken before or during medical school. Methods: Achieving board certification in an American Board of Medical Specialties specialty was used as the outcome measure for a 7-year cohort of graduates (1995–2002). Age at matriculation, Medical College Admission Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, United States Medical Licensing Examination (USMLE) Step 1 score, and USMLE Step 2 clinical knowledge score were examined for association with this outcome. Results: Ninety-three of 1,155 graduates were not certified, yielding an overall board certification rate of 91.9% for the study cohort. Small but significant correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 score (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = −0.061). Comparing the board certified and not board certified groups, significant differences were seen for all included variables except MCAT and USMLE Step 2 clinical knowledge scores. Taken together, the variables explained 4.1% of the variance in board certification by logistic regression. Conclusions: This investigation provides additional validity evidence that the measures collected for student evaluation before and during medical school are warranted.
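Correlating a binary outcome (board certified or not) with a continuous measure, as in the r values reported above, is a point-biserial correlation. A minimal sketch on synthetic data:

```python
# Sketch of the correlation analysis described above: point-biserial
# correlation between a binary outcome (board certified or not) and a
# continuous predictor (e.g., clerkship GPA). Data are synthetic.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(3)
n = 1155
certified = rng.binomial(1, 0.92, n)              # ~92% certification rate
gpa = rng.normal(3.0, 0.4, n) + 0.15 * certified  # small true effect

r, p = pointbiserialr(certified, gpa)
print(f"r = {r:.3f}, p = {p:.4f}")  # small r, as in the findings above
```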


2014 ◽  
Vol 38 (4) ◽  
pp. 315-320 ◽  
Author(s):  
Teresa R. Johnson ◽  
Mohammed K. Khalil ◽  
Richard D. Peppler ◽  
Diane D. Davey ◽  
Jonathan D. Kibble

In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical school. The CBSE is a practice examination for the United States Medical Licensing Examination (USMLE) Step 1 and is purchased directly from the NBME. We administered the CBSE five times during the first 2 yr of medical school. Student scores were compared with scores on newly created internal summative exams and with USMLE Step 1 scores. Significant correlations were observed between almost all of our internal exams and CBSE scores over time as well as with USMLE Step 1 scores. The strength of the correlations of internal exams with the CBSE and USMLE Step 1 broadly increased over time during the curriculum. Student scores in courses with a strong emphasis on physiology and pathophysiology correlated particularly well with USMLE Step 1 scores. Student progress, as measured by the CBSE, was found to be linear across time, and test performance fell behind the anticipated level by the end of the formal curriculum. These findings are discussed with respect to student learning behaviors. In conclusion, the CBSE was found to have good utility as a progress test and provided external validation of our new internally developed multiple-choice assessments. The data also provide performance benchmarks, both for our future students to formatively assess their own progress and for other medical schools to compare learning progression patterns in different curricular models.
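The claim that progress is linear across administrations can be checked by fitting a line to mean CBSE scores over the five test dates. A minimal sketch with hypothetical class means, not the study's data:

```python
# Sketch of the growth check described above: fitting a line to mean CBSE
# scores across the five administrations and reporting the fit quality.
import numpy as np

administrations = np.array([1, 2, 3, 4, 5])
mean_scores = np.array([38.0, 45.5, 52.0, 59.5, 66.0])  # hypothetical means

slope, intercept = np.polyfit(administrations, mean_scores, deg=1)
predicted = slope * administrations + intercept
# R^2 close to 1 indicates near-linear growth across administrations
r2 = 1 - np.sum((mean_scores - predicted) ** 2) / np.sum(
    (mean_scores - mean_scores.mean()) ** 2)
print(f"gain per administration = {slope:.1f} points, R^2 = {r2:.3f}")
```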

