Assessing the Validity of the USMLE Step 2 Clinical Knowledge Examination through an Evaluation of its Clinical Relevance

2004 · Vol 79 (Supplement) · pp. S43–S45
Author(s): Monica M. Cuddy, Gerard F. Dillon, Brian E. Clauser, Kathleen Z. Holtzman, Melissa J. Margolis, et al.
2006 · Vol 81 (Suppl) · pp. S21–S24
Author(s): Polina Harik, Brian E. Clauser, Irina Grabovsky, Melissa J. Margolis, Gerard F. Dillon, et al.

2015 · Vol 180 (suppl_4) · pp. 4–11
Author(s): Aaron Saguil, Ting Dong, Robert J. Gingerich, Kimberly Swygert, Jeffrey S. LaRochelle, et al.

ABSTRACT Background: The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice–based assessments, there is little information on whether the MCAT predicts performance on clinically based assessments in undergraduate and graduate medical education. This study examined associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. Methods: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 Clinical Knowledge, Step 2 Clinical Skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. Results: MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 Clinical Knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 Clinical Skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. Discussion: MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association was somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT did not predict assessments relying on direct clinical observation, nor did it predict PD assessment of PGY-1 performance.


2015 · Vol 180 (suppl_4) · pp. 18–23
Author(s): Steven J. Durning, Ting Dong, Paul A. Hemmer, William R. Gilliland, David F. Cruess, et al.

ABSTRACT Purpose: To determine whether there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school and with scores on standardized tests, whether taken before or during medical school. Methods: Achieving board certification in an American Board of Medical Specialties specialty was used as our outcome measure for a 7-year cohort of graduates (1995–2002). Age at matriculation, Medical College Admission Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, United States Medical Licensing Examination (USMLE) Step 1 score, and USMLE Step 2 clinical knowledge score were examined for association with this outcome. Results: Ninety-three of 1,155 graduates were not certified, resulting in an average board certification rate of 91.9% for the study cohort. Small but significant correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = −0.061).
In comparing the two groups (board-certified and non-board-certified cohorts), significant differences were seen for all included variables except MCAT and USMLE Step 2 clinical knowledge scores. Taken together, all variables explained 4.1% of the variance in board certification by logistic regression. Conclusions: This investigation provides additional validity evidence supporting the use of measures collected for student evaluation before and during medical school.


2020 · Vol 30 (1) · pp. 263–269
Author(s): Carol Morrison, Michael Barone, Gregory Baker, Linette Ross, Seohong Pak

2021
Author(s): Michael S. Patzkowski, Joshua M. Hauser, Mark Liu, Germaine F. Herrera, Krista B. Highland, et al.

ABSTRACT Background The anesthesiology in-training exam (ITE) is a 200-item multiple-choice assessment completed annually by physician residents. Because all matriculated U.S. Department of Defense (DoD) anesthesiology residents are “hired” by the DoD after residency graduation, it is important to ensure that ITE performance, as a proxy for core competencies achievement, is maximized. Methods Graduated resident program files from 2013 to 2020 were queried for age, sex, matriculant status (medical student vs. other), medical school (Uniformed Services University vs. other), military service (Army vs. Air Force), preresidency military service (yes vs. no), U.S. Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) score, and the American Board of Anesthesiology ITE score from the third clinical anesthesia year (CA-3 year). Results For every 1-point increase in USMLE Step 2 CK true z-score, the CA-3 ITE z-score increased by 0.59 points. Age was not associated with CA-3 ITE z-score in any regression model. The categorical covariates of sex, matriculant status, medical school, service, and preresidency military service were not significantly associated with CA-3 ITE z-score (all P > .05), as shown by estimated adjusted marginal means. The estimated adjusted grand mean of CA-3 ITE z-scores was 0.48 (standard error ± 0.14). Conclusion Resident physicians enter residency with varying degrees of past academic success, and it is important to develop early strategies to support them in acquiring the requisite knowledge base.


1999 · Vol 9 (1) · pp. 5–6
Author(s): Carrie Bain, Nan Bernstein Ratner

Due to the large volume of fluency-related publications since the last column, we have chosen to highlight those articles of highest potential clinical relevance.


2020 · Vol 5 (4) · pp. 1026–1038
Author(s): Sandra Levey, Li-Rong Lilly Cheng, Diana Almodovar

Purpose The purpose of this review article is to present certain linguistic domains to consider in the assessment of children learning a new language. Speech-language pathologists frequently face difficulty when determining whether a bilingual or multilingual child possesses a true speech or language disorder. Given the increased number of new language learners across the world, clinicians must distinguish language differences from disorders to prevent underidentification or overidentification of a disorder. Conclusions Early identification of a true disorder has been shown to prevent language and literacy difficulties, given that children are able to achieve grade-level reading skills when provided with intervention. Strong clinical knowledge and skills are required to ensure that children receive evidence-based assessment that supports their academic development. Learning Goal Readers will gain an understanding of the factors that support evidence-based assessment of bilingual and multilingual language learners.


2006 · Vol 175 (4S) · p. 86
Author(s): Roland Bonfig, Hubertus Riedmiller, Burkhardt Kneitz, Philipp Stroebel
