Use of the NBME Comprehensive Basic Science Examination as a progress test in the preclerkship curriculum of a new medical school

2014, Vol 38 (4), pp. 315-320
Author(s): Teresa R. Johnson, Mohammed K. Khalil, Richard D. Peppler, Diane D. Davey, Jonathan D. Kibble

In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical school. The CBSE is a practice exam for the United States Medical Licensing Examination (USMLE) Step 1 and is purchased directly from the NBME. We administered the CBSE five times during the first 2 yr of medical school. Student scores were compared with scores on newly created internal summative exams and with USMLE Step 1 scores. Significant correlations were observed between almost all of our internal exams and CBSE scores over time, as well as with USMLE Step 1 scores. The strength of the correlations of internal exams with the CBSE and USMLE Step 1 broadly increased over time during the curriculum. Student scores in courses with a strong emphasis on physiology and pathophysiology correlated particularly well with USMLE Step 1 scores. Student progress, as measured by the CBSE, was found to be linear across time, and test performance fell behind the anticipated level by the end of the formal curriculum. These findings are discussed with respect to student learning behaviors. In conclusion, the CBSE was found to have good utility as a progress test and provided external validation of our new internally developed multiple-choice assessments. The data also provide performance benchmarks both for our future students to formatively assess their own progress and for other medical schools to compare learning progression patterns in different curricular models.
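As a rough illustration of this external-validation approach (not the authors' actual analysis), the sketch below computes Pearson correlations between hypothetical internal exam scores and CBSE and Step 1 scores; all variable names and data are simulated placeholders.

```python
# Sketch: correlating internal exam scores with CBSE and Step 1 scores.
# All data below are simulated placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 120

step1 = rng.normal(230, 15, n_students)                      # hypothetical Step 1 scores
cbse = {f"CBSE_{i}": 0.3 * step1 + rng.normal(0, 10, n_students) for i in range(1, 6)}
internal = {f"Course_{c}": 0.25 * step1 + rng.normal(0, 12, n_students) for c in "ABCD"}

# Correlate each internal exam with Step 1 and with each CBSE administration.
for exam, scores in internal.items():
    r, p = pearsonr(scores, step1)
    print(f"{exam} vs Step 1: r = {r:.2f} (p = {p:.3g})")
    for admin, cbse_scores in cbse.items():
        r, p = pearsonr(scores, cbse_scores)
        print(f"  {exam} vs {admin}: r = {r:.2f} (p = {p:.3g})")
```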

2021, Vol 21 (1)
Author(s): Ling Wang, Heather S. Laird-Fick, Carol J. Parker, David Solomon

Abstract Background: Medical students must meet curricular expectations and pass national licensing examinations to become physicians. However, no previous studies have explicitly modeled the stages through which medical students acquire basic science knowledge. In this study, we employed an innovative statistical model to characterize students' growth using progress testing results over time and to predict licensing examination performance. Methods: All students matriculating from 2016 to 2017 in our medical school with USMLE Step 1 scores were included in this retrospective cohort study (N = 358). A Markov chain method was employed to: 1) identify latent states of acquiring scientific knowledge based on progress tests and 2) estimate students' transition probabilities between states. The primary outcome of this study, United States Medical Licensing Examination (USMLE) Step 1 performance, was predicted from students' estimated probabilities of being in each latent state identified by the Markov chain model. Results: Four latent states were identified from students' progress test results: Novice, Advanced Beginner I, Advanced Beginner II, and Competent. At the end of the first year, students predicted to remain in the Novice state had lower mean Step 1 scores than those in the Competent state (209, SD = 14.8, versus 255, SD = 10.8, respectively) and more first-attempt failures (11.5% versus 0%). On regression analysis, at the end of the first year a 10% higher probability of remaining in the Novice state predicted a Step 1 score 2.0 points lower (95% CI: 0.85–2.81, P < .01), whereas a 10% higher probability of being in the Competent state predicted a Step 1 score 4.3 points higher (95% CI: 2.92–5.19, P < .01). Similar findings were observed at the end of the second year of medical school. Conclusions: Using the Markov chain model to analyze longitudinal progress test performance offers a flexible and effective estimation method for identifying students' transitions across latent stages of acquiring scientific knowledge. The results can help identify students who are at risk of licensing examination failure and may benefit from targeted academic support.
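The two modeling steps described above can be sketched as follows; this is a simplified illustration, not the authors' implementation. It assumes progress-test results have already been mapped to four discrete state labels, approximates state-membership probabilities by one-hot membership in the last observed state, and uses ordinary least squares for the regression; all data are simulated.

```python
# Sketch: (1) estimate a transition matrix between knowledge states from
# consecutive progress tests; (2) regress Step 1 scores on end-of-year state
# membership. Simulated placeholder data throughout.
import numpy as np

states = ["Novice", "AdvBeginnerI", "AdvBeginnerII", "Competent"]
n_states = len(states)
rng = np.random.default_rng(1)

# Hypothetical state sequences: one row per student, one column per progress test.
sequences = rng.integers(0, n_states, size=(358, 4))

# (1) Transition probabilities estimated by counting state-to-state moves.
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
transition_probs = counts / counts.sum(axis=1, keepdims=True)
print(np.round(transition_probs, 2))

# (2) Regress Step 1 scores on state membership (Novice as the reference state).
membership = np.eye(n_states)[sequences[:, -1]]
step1 = rng.normal(230, 18, size=len(sequences))
X = np.column_stack([np.ones(len(step1)), membership[:, 1:]])
coef, *_ = np.linalg.lstsq(X, step1, rcond=None)
print(dict(zip(states[1:], np.round(coef[1:], 1))))
```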


2015, Vol 180 (suppl_4), pp. 4-11
Author(s): Aaron Saguil, Ting Dong, Robert J. Gingerich, Kimberly Swygert, Jeffrey S. LaRochelle, ...

ABSTRACT Background: The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice–based assessments, there is little information on whether the MCAT predicts clinically based assessments of undergraduate and graduate medical education performance. This study examined associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. Methods: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 Clinical Knowledge (CK), Step 2 Clinical Skills (CS), and Step 3 scores; Objective Structured Clinical Examination (OSCE) performance; medical school GPA; and postgraduate year 1 (PGY-1) program director (PD) assessments of physician performance for students graduating in 2010 and 2011. Results: MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 CK scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 CS Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, OSCE performance, or PGY-1 PD evaluations. Discussion: MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association was somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT did not predict assessments relying on direct clinical observation, nor did it predict PD assessments of PGY-1 performance.
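To illustrate the kind of association analysis reported here, the sketch below builds a small correlation matrix relating two hypothetical MCAT score permutations to GPA, Step 1, and a clinically based rating; the data and column names are invented for illustration only.

```python
# Sketch: correlations of MCAT score permutations with later outcomes.
# Simulated placeholder data; illustrates the analysis pattern only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 340
latent = rng.normal(0, 1, n)  # shared ability factor, for illustration only

df = pd.DataFrame({
    "mcat_first":   28 + 3.0 * latent + rng.normal(0, 2.5, n),
    "mcat_highest": 30 + 3.0 * latent + rng.normal(0, 2.0, n),
    "gpa":          3.3 + 0.2 * latent + rng.normal(0, 0.25, n),
    "step1":        225 + 12.0 * latent + rng.normal(0, 14, n),
    "pd_rating":    3.0 + 0.1 * latent + rng.normal(0, 0.8, n),  # clinically based rating
})

# Pearson correlations of each MCAT permutation with each outcome.
print(df.corr().loc[["mcat_first", "mcat_highest"], ["gpa", "step1", "pd_rating"]].round(2))
```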


2021, Vol 21 (1)
Author(s): Amanda C. Filiberto, Lou Ann Cooper, Tyler J. Loftus, Sonja S. Samant, George A. Sarosi, ...

Abstract Background: Residency programs select medical students for interviews and employment using metrics such as United States Medical Licensing Examination (USMLE) scores, grade-point average (GPA), and class rank/quartile. It is unclear whether these metrics predict performance as an intern. This study tested the hypothesis that performance on these metrics would predict intern performance. Methods: This single-institution, retrospective cohort analysis included 244 graduates from four classes (2015–2018) who completed an Accreditation Council for Graduate Medical Education (ACGME)-accredited internship and were evaluated by program directors (PDs) at the end of the year. PDs provided a global assessment rating and ratings addressing ACGME competencies (response rate = 47%) with five response options: excellent = 5, very good = 4, acceptable = 3, marginal = 2, unacceptable = 1. PDs also classified interns as outstanding = 4, above average = 3, average = 2, or below average = 1 relative to other interns from the same residency program. Mean USMLE scores (Step 1 and Step 2 CK), third-year GPA, class rank, and core competency ratings were compared using Welch's ANOVA and follow-up pairwise t-tests. Results: Better performance on PD evaluations at the end of the intern year was associated with higher USMLE Step 1 scores (p = 0.006), Step 2 CK scores (p = 0.030), medical school GPA (p = 0.020), and class rank (p = 0.016). Interns rated as average had lower USMLE scores, GPA, and class rank than those rated as above average or outstanding; there were no significant differences between above-average and outstanding interns. A higher rating in each of the ACGME core competencies was associated with better intern performance (p < 0.01). Conclusions: Better performance as an intern was associated with higher USMLE scores, medical school GPA, and class rank. When USMLE Step 1 reporting changes from numeric scores to pass/fail, residency programs can use other metrics to select medical students for interviews and employment.
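A minimal sketch of the group comparisons described above, assuming simulated Step 1 scores for interns in three PD rating categories; pairwise Welch t-tests are shown, and Welch's one-way ANOVA itself can be run with, for example, statsmodels' anova_oneway with use_var="unequal".

```python
# Sketch: pairwise Welch t-tests of USMLE Step 1 scores across PD rating groups.
# Simulated placeholder data.
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
groups = {
    "average":       rng.normal(228, 16, 40),
    "above_average": rng.normal(236, 14, 50),
    "outstanding":   rng.normal(239, 13, 25),
}

for name_a, name_b in combinations(groups, 2):
    t, p = ttest_ind(groups[name_a], groups[name_b], equal_var=False)  # Welch t-test
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.3f}")
```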


Author(s): Rachel B. Levine, Andrew P. Levy, Robert Lubin, Sarah Halevi, Rebeca Rios, ...

Purpose: United States (US) and Canadian citizens attending medical school abroad often wish to return to the US for residency and therefore must pass US licensing exams. We describe a 2-day United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) preparation course for students in the Technion American Medical School program (Haifa, Israel) between 2012 and 2016. Methods: Students completed pre- and post-course questionnaires. The paired t-test was used to measure changes in students' perceptions of their knowledge, preparation, confidence, and competence in CS before and after the course. Analysis of variance was used to test for differences by gender or country of birth. We compared USMLE Step 2 CS pass rates between the 5 years prior to the course and the 5 years during which the course was offered. Results: Ninety students took the course between 2012 and 2016. Course evaluations began in 2013. Seventy-three students agreed to participate in the evaluation, and 64 completed both the pre- and post-course surveys. Of the 64 students, 58% were US-born and 53% were male. Students reported statistically significant improvements in confidence and competence in all areas. No differences were found by gender or country of birth. The average pass rate for the 5 years prior to the course was 82%, and the average pass rate for the 5 years of the course was 89%. Conclusion: A CS course delivered at an international medical school may help to close the gap between the pass rates of US and international medical graduates on a high-stakes licensing exam. More experience is needed to determine whether this model is replicable.
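The paired pre/post comparison could look like the following sketch, which assumes 1-to-5 self-ratings and simulated data rather than the course's actual survey results.

```python
# Sketch: paired t-test of pre- vs post-course self-ratings (1-5 scale).
# Simulated placeholder data with an assumed average gain.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
pre = rng.uniform(2.0, 4.0, 64)
post = np.clip(pre + rng.normal(0.6, 0.4, 64), 1, 5)

t, p = ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.3g}")
```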


2013, Vol 37 (4), pp. 370-376
Author(s): Andrew R. Thompson, Mark W. Braun, Valerie D. O'Loughlin

Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in the review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school pathology course. We investigated this by comparing pathology exam scores between students exposed to traditional discipline-specific instruction and exams (DSE) and students exposed to integrated instruction and exams (IE). Exam content was controlled, and individual questions were evaluated using a modified version of Bloom's taxonomy. Additionally, we compared United States Medical Licensing Examination (USMLE) Step 1 scores between the DSE and IE groups. Our findings indicate that DSE students performed better than IE students on complete pathology exams. However, when exam content was controlled, exam scores were equivalent between groups. We also found that the integrated exams contained a significantly greater proportion of questions classified at the higher levels of Bloom's taxonomy and that IE students performed better on these questions overall. USMLE Step 1 scores were similar between groups. The finding of a significant difference in content complexity between discipline-specific and integrated exams adds to recent literature indicating that a number of potential biases related to curricular comparison studies must be considered. Future investigations involving larger sample sizes and multiple disciplines should be performed to explore this matter further.
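One way to test whether the two exam formats differ in their Bloom's-taxonomy makeup is a chi-square test on the counts of lower- versus higher-order questions; the counts below are hypothetical and only illustrate the comparison, not the study's data.

```python
# Sketch: chi-square test comparing the Bloom's-level composition of
# discipline-specific (DSE) vs integrated (IE) exams. Hypothetical counts.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: exam format; columns: lower-order vs higher-order Bloom questions.
table = np.array([
    [120, 40],   # DSE
    [90,  70],   # IE
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```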


2016, Vol 8 (3), pp. 358-363
Author(s): Jeanne M. Sandella, John R. Gimpel, Larissa L. Smith, John R. Boulet

ABSTRACT Background: The Comprehensive Osteopathic Medical Licensing Examination (COMLEX-USA) and the United States Medical Licensing Examination (USMLE) are recognized by all state medical licensing boards in the United States, and the Federation of State Medical Boards has supported the validity of both examinations for medical licensure. Many osteopathic medical students take both examinations. Objective: The purpose of this study was to investigate performance on COMLEX-USA Level 1 and USMLE Step 1 among students from colleges of osteopathic medicine where the majority of students took both examinations. Methods: Data were collected on the entering classes of 2010 and 2011. Relationships between COMLEX-USA Level 1 and USMLE Step 1 scores were quantified using Pearson correlations. The correlation between outcomes on the 2 examinations was evaluated using the phi coefficient, and a contingency table was constructed to examine first-attempt outcomes (pass/fail). Results: Data for 2010 and 2011 were collected from 3 osteopathic medical schools, with 795 of 914 students (87%) taking both examinations. The correlation between first-attempt COMLEX-USA Level 1 and USMLE Step 1 scores was statistically significant across and within all 3 schools; the overall correlation was r(795) = 0.84 (P < .001). Pass/fail status on the 2 examinations was moderately correlated (ϕ = 0.39, P < .01). Conclusions: Our study found a strong association between COMLEX-USA Level 1 and USMLE Step 1 performance. Additional studies to accurately compare scores on these examinations are warranted.
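The score correlation and the pass/fail agreement can be sketched as follows; the scores and cut points are simulated assumptions, not the study's data or the exams' actual passing standards.

```python
# Sketch: Pearson correlation of scores plus phi coefficient for pass/fail
# agreement between two exams. Simulated placeholder data and cut scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n = 795
ability = rng.normal(0, 1, n)
comlex = 500 + 80 * ability + rng.normal(0, 40, n)
usmle = 228 + 15 * ability + rng.normal(0, 9, n)

r, p = pearsonr(comlex, usmle)

# Phi coefficient from the 2x2 pass/fail contingency table.
pass_comlex = comlex >= 440   # assumed illustrative cut score
pass_usmle = usmle >= 210     # assumed illustrative cut score
a = np.sum(pass_comlex & pass_usmle)
b = np.sum(pass_comlex & ~pass_usmle)
c = np.sum(~pass_comlex & pass_usmle)
d = np.sum(~pass_comlex & ~pass_usmle)
phi = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"r = {r:.2f} (p = {p:.3g}), phi = {phi:.2f}")
```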


1999, Vol 74 (10), pp. S7-9
Author(s): D. P. Way, B. Biagi, K. Clausen, A. Hudson

DOI: 10.2196/20182, 2020, Vol 6 (2), pp. e20182
Author(s): Benjamin Liu

In recent years, US medical students have been increasingly absent from medical school classrooms. They skip class to maximize their competitiveness for a good residency program by achieving high scores on the United States Medical Licensing Examination (USMLE) Step 1. As a US medical student, I know that most of these class-skipping students are using external learning resources, which are perceived to be more efficient than traditional lectures. Now that the USMLE Step 1 is adopting pass/fail grading, it may be tempting to expect students to return to traditional basic science lectures. Unfortunately, my experience tells me this will not happen. Instead, US medical schools must adapt their curricula. These new curricula should focus on clinical decision making, team-based learning, and new medical decision technologies, while leveraging the validated ability of external resources to teach the basic sciences. In doing so, faculty will not only increase student engagement but also modernize the curricula to meet new standards for effective medical learning.

