Factors associated with student performance on the medical residency test

2020 · Vol 66 (10) · pp. 1376-1382
Author(s): Maria Cristina de Andrade, Maria Wany Louzada Strufaldi, Rimarcs Gomes Ferreira, Gilmar Fernandes do Prado, Rosana Fiorini Puccini, ...

SUMMARY OBJECTIVE: To determine whether the scores of the Progress test, the Skills and Attitude test, and the medical internship are correlated with the medical residency exam performance of students who started medical school at the Federal University of São Paulo in 2009. METHODS: The scores of 684 Progress tests from years 1-6 of medical school, 111 Skills and Attitude exams (5th year), 228 performance coefficients for the 5th and 6th years of internship, and 211 scores on the medical residency exam were analyzed longitudinally. Correlations between scores were assessed by Pearson's correlation. Factors associated with medical residency scores were analyzed by linear regression. RESULTS: Scores on the Progress tests from years 1-6 and the Skills and Attitude test showed at least one moderate and significant correlation with each other. The theoretical and final exam scores in the medical residency had a moderate correlation with performance in the internship. The theoretical medical residency exam score was associated with performance in internship year 6 (β=0.833; p<0.001), and the final medical residency exam score was associated with the Skills and Attitude score (β=0.587; p<0.001), the 5th-year internship score (β=0.060; p=0.025), and the 6th-year Progress test score (β=0.038; p=0.061). CONCLUSIONS: The scores of these tests showed significant correlations. The medical residency exam scores were positively associated with students' performance in the internship and on the Skills test, with a tendency for the final medical residency exam score to be associated with the 6th-year Progress test.
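The abstract names two standard analyses: Pearson correlation between assessment scores, and linear regression of residency exam scores on earlier performance. A minimal sketch of both follows; the data file and column names (progress_y6, internship_y6, residency_theoretical) are assumptions for illustration, not the study's dataset.

```python
# Hypothetical sketch of the analyses named in the abstract; the CSV
# and its column names are assumptions, not the study's dataset.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("cohort_scores.csv")  # one row per student (assumed)

# Pearson correlation between two of the assessments.
r, p = stats.pearsonr(df["progress_y6"], df["residency_theoretical"])
print(f"r = {r:.2f}, p = {p:.4f}")

# Linear regression of the theoretical residency score on year-6
# internship performance; the slope plays the role of the reported beta.
X = sm.add_constant(df[["internship_y6"]])
print(sm.OLS(df["residency_theoretical"], X).fit().summary())
```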

2013 · Vol 37 (4) · pp. 370-376
Author(s): Andrew R. Thompson, Mark W. Braun, Valerie D. O'Loughlin

Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in the review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school pathology course. We investigated this by comparing pathology exam scores between students exposed to traditional discipline-specific instruction and exams (DSE) and those exposed to integrated instruction and exams (IE). Exam content was controlled, and individual questions were evaluated using a modified version of Bloom's taxonomy. Additionally, we compared United States Medical Licensing Examination (USMLE) Step 1 scores between the DSE and IE groups. Our findings indicate that DSE students performed better than IE students on complete pathology exams. However, when exam content was controlled, exam scores were equivalent between groups. We also discovered that the integrated exams contained a significantly greater proportion of questions classified at the higher levels of Bloom's taxonomy and that IE students performed better on these questions overall. USMLE Step 1 exam scores were similar between groups. The finding of a significant difference in content complexity between discipline-specific and integrated exams adds to recent literature indicating that a number of potential biases related to curricular comparison studies must be considered. Future investigations involving larger sample sizes and multiple disciplines should be performed to explore this matter further.
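The Bloom's taxonomy finding is a difference in proportions (the share of higher-order questions per exam type), which a two-proportion z-test captures. A hedged sketch with invented counts:

```python
# Invented question counts for illustration only; the study's actual
# item tallies are not reported in this abstract.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

higher_order = np.array([40, 80])   # higher-Bloom questions: DSE, IE
total_items = np.array([160, 160])  # total questions per exam type

z, p = proportions_ztest(higher_order, total_items)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> the shares differ
```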


2010 · Vol 3 (7) · pp. 57-72
Author(s): Lester Hadsell, Raymond MacDermott

An extensive database of exam scores is studied to determine the effects of a grading policy that drops the lowest exam score. We find evidence that some students engage in strategic behavior, either by understudying for one of the exams or missing one altogether, but the vast majority of students show no evidence of strategic behavior. We also find evidence that many students “satisfice”: a large percentage of students passed up an expected improvement in their course grade. We find that the probability that a student will choose to complete an optional final exam is inversely related to their grade going into the final. Further, the likelihood of a student completing the final exam rises with the spread between prior exam scores and falls with the points needed to raise their course grade.
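The drop-lowest policy itself is simple arithmetic, and it shows why skipping an exam can be costless for a student. A minimal sketch (equal exam weights are an assumption):

```python
def course_average(exam_scores: list[float]) -> float:
    """Mean exam score after the single lowest score is dropped."""
    if len(exam_scores) < 2:
        raise ValueError("need at least two exam scores")
    kept = sorted(exam_scores)[1:]  # drop the lowest exam
    return sum(kept) / len(kept)

# A student who misses one exam entirely (scored 0) ends up with the
# same course average as one who earned a 72 on it:
print(course_average([85, 78, 0, 90]))   # 84.33 (the 0 is dropped)
print(course_average([85, 78, 72, 90]))  # 84.33 (the 72 is dropped)
```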


2017 · Vol 31 (2) · pp. 96-101
Author(s): Niu Zhang, Charles N.R. Henderson

Objective: Three hypotheses were tested in a chiropractic education program: (1) Collaborative topic-specific exams during a course would enhance student performance on a noncollaborative final exam administered at the end of term, compared to students given traditional (noncollaborative) topic-specific exams during the course. (2) Requiring reasons for answer changes during collaborative topical exams would further enhance final-exam performance. (3) There would be a differential question-type effect on the cumulative final exam, with greater improvement in comprehension question scores compared to simple recall question scores. Methods: A total of 223 students participated in the study. Students were assigned to 1 of 3 study cohorts: (1) control – a traditional, noncollaborative exam format; (2) collaborative exam only (CEO) – a collaborative format, not requiring answer-change justification; and (3) collaborative exam with justification (CEJ) – a collaborative exam format requiring justification for answer changes. Results: Contrary to expectation (hypothesis 1), there was no significant difference between control and CEO final exam scores (p = .566). However, CEJ final exam scores were statistically greater (hypothesis 2) than the control (p = .010) and CEO (p = .011) scores. There was a greater collaboration benefit when answering comprehension rather than recall questions during topic-specific exams (p < .001), but this did not differentially influence study cohort final exam scores (p = .571, hypothesis 3). Conclusion: We conclude that test collaboration with the requirement that students explain the reason for making answer changes is a more effective learning tool than simple collaboration that does not require answer-change justification.
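The three-cohort comparison reads like a one-way ANOVA followed by pairwise tests; the abstract does not name its exact procedure, so the sketch below is an assumption, with simulated placeholder scores rather than the study's data.

```python
# Placeholder scores drawn from normal distributions; only the shape
# of the analysis, not the numbers, reflects the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(75, 8, 80)  # traditional, noncollaborative exams
ceo = rng.normal(75, 8, 70)      # collaborative, no justification
cej = rng.normal(79, 8, 73)      # collaborative with justification

f, p = stats.f_oneway(control, ceo, cej)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Pairwise follow-ups; a real analysis would correct for multiple
# comparisons (e.g., Tukey HSD or Bonferroni).
for name, group in (("CEO", ceo), ("CEJ", cej)):
    t, p = stats.ttest_ind(control, group)
    print(f"control vs {name}: t = {t:.2f}, p = {p:.4f}")
```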


2020
Author(s): Jack Eichler, Grace Henbest, Kiana Mortezaei, Teresa Alvelais, Courtney Murphy

In an ongoing effort to increase student retention and success in the undergraduate general chemistry course sequence, a fully online preparatory chemistry course was developed and implemented at a large public research university. To gain insight about the efficacy of the online course, post-hoc analyses were carried out in which student performance on final exams and performance in the subsequent general chemistry course were compared between the online cohort and a previous student cohort who completed the preparatory chemistry course in a traditional lecture format. Because the retention of less academically prepared students in STEM majors is a historical problem at the institution in which the online preparatory chemistry course was implemented, post-hoc analyses were also carried out to determine if this at-risk group demonstrated similar achievement relative to the population at large. Multiple linear regression analyses were used to compare final exam scores and general chemistry course grades between the online and in-person student cohorts, while statistically controlling for incoming student academic achievement. Results from these analyses suggest the fully online course led to increased final exam scores in the preparatory course (unstandardized B = 8.648, p < 0.001) and higher grades in the subsequent general chemistry course (unstandardized B = 0.269, p < 0.001). Notably, students from the lowest quartile of incoming academic preparation appear to have been more positively impacted by the online course experience (preparatory chemistry final exam scores: unstandardized B = 11.103, p < 0.001; general chemistry course grades: unstandardized B = 0.323, p = 0.002). These results suggest a fully online course can help improve student preparation for large populations of students, without resulting in a negative achievement gap for less academically prepared students. The structure and implementation of the online course, and the results from the post-hoc analyses, will be described herein.
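The reported unstandardized B values come from multiple linear regression with a cohort indicator plus controls for incoming achievement. A compact sketch follows; the file name, column names, and choice of control variables are assumptions:

```python
# Hypothetical dataset: one row per student across both cohorts, with
# online = 1 for the fully online course and 0 for the lecture format.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prep_chem_cohorts.csv")  # assumed combined dataset
model = smf.ols("final_exam ~ online + incoming_gpa + sat_math",
                data=df).fit()
# The coefficient on `online` plays the role of the unstandardized B.
print(model.params["online"], model.pvalues["online"])
```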


PLoS ONE · 2020 · Vol 15 (12) · pp. e0244146
Author(s): Eric Burkholder, Lena Blackmon, Carl Wieman

In a previous study, we found that students' incoming preparation in physics—crudely measured by concept inventory prescores and math SAT or ACT scores—explains 34% of the variation in Physics 1 final exam scores at Stanford University. In this study, we sought to understand the large variation in exam scores not explained by these measures of incoming preparation. Why are some students successful in Physics 1 independent of their preparation? To answer this question, we interviewed 34 students with particularly low concept inventory prescores and math SAT/ACT scores about their experiences in the course. We unexpectedly found a set of common practices and attitudes. We found that students' use of instructional resources had relatively little impact on course performance, while student characteristics, student attitudes, and students' interactions outside the classroom all had a more substantial impact on course performance. These results offer some guidance as to how instructors might help all students succeed in introductory physics courses.
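The 34% figure corresponds to the R-squared of a regression of final exam scores on the two preparation measures. A short sketch under that reading, with an assumed data file and column names:

```python
# Hypothetical per-student file; ci_prescore = concept inventory
# prescore, math_sat = math SAT (or concorded ACT) score.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("physics1_cohort.csv")  # assumed dataset
model = smf.ols("final_exam ~ ci_prescore + math_sat", data=df).fit()
print(f"R^2 = {model.rsquared:.2f}")  # the study reports 0.34 for its cohort
```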


2011 · Vol 15 (1)
Author(s): Nanette P. Napier, Sonal Dekhane, Stella Smith

This paper describes the conversion of an introductory computing course to the blended learning model at a small, public liberal arts college. Blended learning significantly reduces face-to-face instruction by incorporating rich, online learning experiences. To assess the impact of blended learning on students, survey data were collected at the midpoint and end of the semester, and student performance on the final exam was compared between traditional and blended learning sections. To capture faculty perspectives on teaching blended learning courses, written reflections and discussions from faculty teaching blended learning sections were analyzed. Results indicate that student performance in the traditional and blended learning sections of the course was comparable and that students reported high levels of interaction with their instructor. Faculty teaching the course share insights on transitioning to the blended learning format.


2007 · Vol 34 (3) · pp. 177-180
Author(s): R. Eric Landrum

Students in an introductory psychology course took a quiz a week over each textbook chapter, followed by a cumulative final exam. Students who missed a quiz in class could make it up at any time during the semester, and answers to quiz items were available to students prior to the cumulative final exam. The cumulative final exam consisted of half the items previously presented on quizzes; half of those items had their response options scrambled. Performance on these repeated items was slightly higher on the cumulative final than on the original quizzes, and scrambling the response options had little effect. Students strongly supported the quiz-a-week approach.
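Because each repeated final-exam item has a counterpart on an earlier quiz, the quiz-to-final comparison can be framed as a paired test over items; the abstract does not state its procedure, so this framing is an assumption. A sketch with invented per-item proportions correct:

```python
# Invented per-item proportions correct; only the paired framing is
# taken from the abstract, not these numbers.
import numpy as np
from scipy import stats

quiz_prop = np.array([0.71, 0.64, 0.80, 0.58, 0.75])   # original quizzes
final_prop = np.array([0.74, 0.66, 0.82, 0.63, 0.77])  # cumulative final

t, p = stats.ttest_rel(final_prop, quiz_prop)
print(f"paired t = {t:.2f}, p = {p:.4f}")  # positive t -> higher on final
```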


2016 · Vol 19 (2) · pp. 15-31
Author(s): Yasin Ozarslan, Ozlem Ozan

Self-assessment is vital for online learning since it is one of the most essential skills of distance learners. In this respect, the purpose of this study was to understand learners' self-assessment quiz-taking behaviours in an undergraduate-level online course. We tried to determine whether there is a relation between self-assessment quiz-taking behaviours and final exam scores. In addition, we investigated how self-assessment quiz-taking behaviour differs with respect to learner profile. In line with this purpose, 677 students' 6092 test events in the Project Culture course on the Sakai CLE LMS were analyzed. For the analysis of the quantitative data, one-way ANOVA, the Chi-Square test of independence, independent-samples t-tests, and descriptive statistics were utilized. The results revealed that learners who attended self-assessment quizzes regularly had higher final exam scores than those who did not attend those quizzes, and they were more satisfied with the course. In addition, learners who attended self-assessment quizzes regularly had a higher degree of perceived learning. However, the number of attempts at those quizzes did not have an effect on final exam scores. On the other hand, a statistically significant relationship was found between attempt number and gender, in favour of female learners.
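Two of the named analyses are easy to sketch: an independent-samples t-test of final exam scores by quiz-taking regularity, and a chi-square test of independence between attempt frequency and gender. All numbers below are placeholders:

```python
# Placeholder data; only the test choices mirror the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
regular = rng.normal(72, 10, 300)    # attended self-assessment quizzes
irregular = rng.normal(65, 10, 250)  # did not
t, p = stats.ttest_ind(regular, irregular)
print(f"t = {t:.2f}, p = {p:.4f}")

#          low attempts  high attempts   (hypothetical counts)
table = [[140, 160],   # female
         [180, 120]]   # male
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```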

