Requiring students to justify answer changes during collaborative testing may be necessary for improved academic performance

2017 ◽  
Vol 31 (2) ◽  
pp. 96-101 ◽  
Author(s):  
Niu Zhang ◽  
Charles N.R. Henderson

Objective: Three hypotheses were tested in a chiropractic education program: (1) Collaborative topic-specific exams during a course would enhance student performance on a noncollaborative final exam administered at the end of term, compared to students given traditional (noncollaborative) topic-specific exams during the course. (2) Requiring reasons for answer changes during collaborative topical exams would further enhance final-exam performance. (3) There would be a differential question-type effect on the cumulative final exam, with greater improvement in comprehension question scores compared to simple recall question scores. Methods: A total of 223 students participated in the study. Students were assigned to 1 of 3 study cohorts: (1) control – a traditional, noncollaborative exam format; (2) collaborative exam only (CEO) – a collaborative format not requiring answer-change justification; and (3) collaborative exam with justification (CEJ) – a collaborative exam format requiring justification for answer changes. Results: Contrary to expectation (hypothesis 1), there was no significant difference between control and CEO final exam scores (p = .566). However, CEJ final exam scores were statistically greater (hypothesis 2) than the control (p = .010) and CEO (p = .011) scores. There was greater collaboration benefit when answering comprehension than recall questions during topic-specific exams (p < .001), but this did not differentially influence study cohort final exam scores (p = .571, hypothesis 3). Conclusion: We conclude that test collaboration with the requirement that students explain the reason for making answer changes is a more effective learning tool than simple collaboration that does not require answer-change justification.
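The abstract reports pairwise p-values for the three cohorts but does not name the exact procedure; a minimal sketch of such a comparison, assuming Welch's t-tests and hypothetical score arrays, is:

```python
# Pairwise comparison of final-exam scores across the three cohorts described
# above. Welch's t-test is an assumption (the abstract does not name the test),
# and the score arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(74, 8, 75)  # hypothetical final-exam scores per cohort
ceo = rng.normal(74, 8, 74)
cej = rng.normal(78, 8, 74)

for name, a, b in [("control vs CEO", control, ceo),
                   ("control vs CEJ", control, cej),
                   ("CEO vs CEJ", ceo, cej)]:
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(f"{name}: t = {t:.3f}, p = {p:.3f}")
```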

2013 ◽  
Vol 37 (4) ◽  
pp. 370-376 ◽  
Author(s):  
Andrew R. Thompson ◽  
Mark W. Braun ◽  
Valerie D. O'Loughlin

Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in the review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school pathology course. We investigated this by comparing pathology exam scores between students exposed to traditional discipline-specific instruction and exams (DSE) versus integrated instruction and exams (IE). Exam content was controlled, and individual questions were evaluated using a modified version of Bloom's taxonomy. Additionally, we compared United States Medical Licensing Examination (USMLE) Step 1 scores between DSE and IE groups. Our findings indicate that DSE students performed better than IE students on complete pathology exams. However, when exam content was controlled, exam scores were equivalent between groups. We also discovered that the integrated exams comprised a significantly greater proportion of questions classified at the higher levels of Bloom's taxonomy and that IE students performed better on these questions overall. USMLE Step 1 exam scores were similar between groups. The finding of a significant difference in content complexity between discipline-specific and integrated exams adds to recent literature indicating that there are a number of potential biases related to curricular comparison studies that must be considered. Future investigation involving larger sample sizes and multiple disciplines should be performed to explore this matter further.
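The claim that integrated exams contained a greater proportion of higher-level Bloom's questions is a comparison of proportions; a sketch of one way to test it, assuming a chi-square test of independence and hypothetical question counts, is:

```python
# Does the integrated exam (IE) contain a greater share of higher-level
# Bloom's-taxonomy questions than the discipline-specific exam (DSE)?
# The test choice and the counts below are assumptions for illustration.
from scipy.stats import chi2_contingency

#        lower-level  higher-level   (modified Bloom's classification)
table = [[60, 20],    # DSE questions
         [35, 45]]    # IE questions
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```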


2018 ◽  
Vol 7 (1) ◽  
pp. 93
Author(s):  
Victoria Ingalls

Many studies have argued that external rewards undermine intrinsic motivation, while others assert that external motivation does not necessarily do so. At a private university, students were given the option to earn bonus points for achieving mastery in the online homework systems associated with Statistics and Pre-Calculus courses. The results showed a significant difference in online homework grades and final exam scores, dependent upon when the incentive was given. The findings of this research suggest that college students thrive when incentivized. Compared to students who were not incentivized, the incentivized group had a statistically significantly higher mean for both online homework scores and final exam scores. Many of the incentivized students chose to earn the bonus points to increase their final semester grade, which apparently also helped to increase the content knowledge needed for the final exam.
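A minimal sketch of the two-group mean comparison reported here, assuming an independent-samples t-test with an effect size; group sizes and scores are hypothetical:

```python
# Incentivized vs. non-incentivized final-exam comparison, with Cohen's d as
# a standardized effect size. All data are placeholders for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
incentivized = rng.normal(82, 10, 60)      # hypothetical final-exam scores
not_incentivized = rng.normal(75, 10, 60)

t, p = stats.ttest_ind(incentivized, not_incentivized, equal_var=False)
pooled_sd = np.sqrt((incentivized.var(ddof=1) + not_incentivized.var(ddof=1)) / 2)
d = (incentivized.mean() - not_incentivized.mean()) / pooled_sd
print(f"t = {t:.3f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```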


2021 ◽  
Vol 13 (16) ◽  
pp. 8735
Author(s):  
Juan Luis Martín Ayala ◽  
Sergio Castaño Castaño ◽  
Alba Hernández Santana ◽  
Mariacarla Martí González ◽  
Julién Brito Ballester

The COVID-19 pandemic, and the containment measures adopted by different governments, led to a boom in online education as a necessary response to the crisis facing education systems worldwide. This study compares the academic performance of students between face-to-face and online modalities during the exceptional situation between March and June 2020. The academic performance in both modalities was examined for a series of subjects taught in the Psychology Degree at the European University of the Atlantic (Santander, Spain). The results show that student performance on the final exam in the online modality is significantly lower than in the face-to-face modality. However, grades from the continuous evaluation activities are significantly higher online, which compensates within the overall course grade: the online mode shows no significant difference with respect to the face-to-face mode, even though overall performance is higher in the latter. The conditioning factors and explanatory arguments for these results are also discussed.
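The compensation effect described above is simple weighted arithmetic; a worked example, with weights and mean grades that are hypothetical rather than taken from the study, is:

```python
# Lower online final-exam scores offset by higher continuous-evaluation
# grades, so the weighted course grade barely differs between modalities.
# The weighting scheme and grade values are assumptions for illustration.
WEIGHT_EXAM, WEIGHT_CONTINUOUS = 0.6, 0.4  # assumed course weighting

def course_grade(exam, continuous):
    return WEIGHT_EXAM * exam + WEIGHT_CONTINUOUS * continuous

face_to_face = course_grade(exam=7.2, continuous=7.0)  # 0-10 scale, hypothetical
online = course_grade(exam=6.4, continuous=8.1)
print(f"face-to-face: {face_to_face:.2f}, online: {online:.2f}")
```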


2018 ◽  
Vol 96 (4) ◽  
pp. 411-419 ◽  
Author(s):  
Nafis I. Karim ◽  
Alexandru Maries ◽  
Chandralekha Singh

We describe the impact of physics education research-based pedagogical techniques in flipped and active-engagement non-flipped courses on student performance on validated conceptual surveys. We compare student performance in courses that make significant use of evidence-based active engagement (EBAE) strategies with courses that primarily use lecture-based (LB) instruction. All courses had large enrollments, often 100–200 students. The analysis presented here includes data from large numbers of students in two-semester sequences of algebra-based and calculus-based introductory physics courses. The conceptual surveys used to assess student learning in the first- and second-semester courses were the Force Concept Inventory and the Conceptual Survey of Electricity and Magnetism, respectively. In the research discussed here, the performance of students in EBAE courses at a particular level is compared with LB courses in two situations: (i) the same instructor taught two courses, one a flipped course involving EBAE methods and the other an LB course, while the homework, recitations, and final exams were kept the same; (ii) student performance in all of the EBAE courses taught by different instructors was averaged and compared with LB courses of the same type, also averaged over different instructors. In all cases, we find that students in courses that make significant use of active-engagement strategies, on average, outperformed students in courses using primarily LB instruction of the same type on conceptual surveys, even though there was no statistically significant difference on the pretest before instruction. We also discuss the correlation between performance on the validated conceptual surveys and the final exam, which typically placed a heavy weight on quantitative problem solving.
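A sketch of two analyses typical of studies like this: the pre/post comparison on a conceptual survey, expressed here as normalized gain (a standard physics-education-research metric, though the abstract does not name it), and the correlation between survey posttest and final-exam scores. All scores are hypothetical:

```python
# Normalized gain g = (post - pre) / (100 - pre) per student, plus the
# Pearson correlation between conceptual-survey posttest and final exam.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
pre = rng.uniform(20, 60, 150)                    # FCI pretest, percent
post = pre + rng.uniform(5, 30, 150)              # FCI posttest, percent
final_exam = 0.6 * post + rng.normal(30, 8, 150)  # quantitative-heavy final

gain = (post - pre) / (100 - pre)                 # Hake's normalized gain
r, p = pearsonr(post, final_exam)
print(f"mean normalized gain = {gain.mean():.2f}")
print(f"posttest-final correlation r = {r:.2f} (p = {p:.3g})")
```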


2011 ◽  
Vol 1 (2) ◽  
pp. 5 ◽  
Author(s):  
Bahaudin Mujtaba

This study documents learning and student performance through objective tests with graduate students in Kingston, Jamaica, and compares their final exam results with those of students taking the same course, the same test, with the same instructor at different sites throughout the United States and in the Nassau cluster, Grand Bahamas. The scores are further compared with students who completed this course and final exam in the online format. The Jamaican, Bahamian, and Tampa students completing this course received traditional, face-to-face instruction in a classroom setting, with classes delivered in a weekend format with 32 face-to-face contact hours during the semester. As expected, findings revealed that there was a statistically significant difference (α = .05) in the mean test scores of the pre-test and post-test for the students enrolled at the Kingston cluster. Furthermore, the results of the final exam comparison with similar groups in the United States and Bahamas showed no significant differences. The comparison of student performance in Kingston with online students is also discussed. Overall, it is concluded that many of the learning outcomes designed to be achieved as a result of the course activities, specifically the final exam, were achieved consistently for students taking this course with the assigned faculty member in Jamaica, the United States, and the Grand Bahamas.
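The pre-test/post-test comparison for a single group of students is naturally a paired test; a minimal sketch at α = .05, with placeholder scores (the abstract reports significance but not the exact procedure), is:

```python
# Paired pre-test/post-test comparison for one cluster of students.
# The paired t-test is an assumption; the scores are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre_test = rng.normal(55, 12, 28)             # hypothetical student scores
post_test = pre_test + rng.normal(18, 6, 28)  # same students after the course

t, p = stats.ttest_rel(pre_test, post_test)
print(f"paired t = {t:.2f}, p = {p:.4g}, significant at .05: {p < 0.05}")
```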


2011 ◽  
Vol 2 (3) ◽  
pp. 33
Author(s):  
Bahaudin G. Mujtaba ◽  
Jean McAtavey

The purposes of the study were to assess and compare learning gained in a Master of Science in Human Resources course entitled Management Communication and to measure performance through an objective pre-test and post-test examination, comparing students pursuing their degree at a cluster site in Kingston, Jamaica, away from the main campus, with those at the main campus (Fort Lauderdale, Florida). These students were completing this graduate course in the summer term of 2005 and received traditional, face-to-face instruction in a classroom setting, with classes delivered in a weekend format during the term. Two different instructors taught the class using the same performance measure for comparison purposes. Student performance for the purpose of this study was defined as the score on the pre-test and on the final examination (post-test). Findings revealed that there was a statistically significant difference (α = .05) in the pre-test and post-test scores of students enrolled at the Kingston, Jamaica cluster and those at the main campus. Both groups had significant gains from the pre-test to the post-test examination. Furthermore, the overall performance of students in Jamaica appears to be equivalent to the performance of students at the main campus when the classes are taught by two different faculty members using the same final exam questions.


2020 ◽  
Vol 66 (10) ◽  
pp. 1376-1382
Author(s):  
Maria Cristina de Andrade ◽  
Maria Wany Louzada Strufaldi ◽  
Rimarcs Gomes Ferreira ◽  
Gilmar Fernandes do Prado ◽  
Rosana Fiorini Puccini ◽  
...  

OBJECTIVE: To determine whether the scores of the Progress test, the Skills and Attitude test, and the medical internship are correlated with the medical residency exam performance of students who started medical school at the Federal University of São Paulo in 2009. METHODS: The scores of 684 Progress tests from years 1-6 of medical school, 111 Skills and Attitude exams (5th year), 228 performance coefficients for the 5th and 6th years of internship, and 211 scores on the medical residency exam were analyzed longitudinally. Correlations between scores were assessed by Pearson's correlation. Factors associated with medical residency scores were analyzed by linear regression. RESULTS: Scores of Progress tests from years 1-6 and the Skills and Attitude test showed at least one moderate and significant correlation with each other. The theoretical exam and final exam scores in the medical residency had a moderate correlation with performance in the internship. The score of the theoretical medical residency exam was associated with performance in internship year 6 (β=0.833; p<0.001), and the final medical residency exam score was associated with the Skills and Attitude score (β=0.587; p<0.001), 5th-year internship score (β=0.060; p=0.025), and 6th-year Progress test score (β=0.038; p=0.061). CONCLUSIONS: The scores of these tests showed significant correlations. The medical residency exam scores were positively associated with the student's performance in the internship and on the Skills test, with a tendency for the final medical residency exam score to be associated with the 6th-year Progress test.
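A sketch of the linear-regression step, predicting the final residency score from the predictors the abstract reports as associated with it; the DataFrame columns and generated data are illustrative, not the study's variables:

```python
# OLS regression of the final medical-residency score on three predictors,
# via statsmodels' formula API. Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 211
df = pd.DataFrame({
    "skills_attitude": rng.normal(7, 1, n),   # Skills and Attitude score
    "internship_y5": rng.normal(8, 0.5, n),   # 5th-year internship coefficient
    "progress_y6": rng.normal(60, 10, n),     # 6th-year Progress test score
})
df["residency_final"] = (0.6 * df["skills_attitude"]
                         + 0.06 * df["internship_y5"]
                         + 0.04 * df["progress_y6"]
                         + rng.normal(0, 0.8, n))

model = smf.ols("residency_final ~ skills_attitude + internship_y5 + progress_y6",
                df).fit()
print(model.params)  # unstandardized coefficients, analogous to the reported betas
```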


Author(s):  
Tahyna Hernandez ◽  
Margret S. Magid ◽  
Alexandros D. Polydorides

Context.— Evaluation of medical curricula includes appraisal of student assessments in order to encourage deeper learning approaches. General pathology is our institution's 4-week, first-year course covering universal disease concepts (inflammation, neoplasia, etc). Objective.— To compare types of assessment questions and determine which characteristics may predict student scores, degree of difficulty, and item discrimination. Design.— Item-level analysis was employed to categorize questions along the following variables: type (multiple choice question or matching answer), presence of clinical vignette (if so, whether simple or complex), presence of specimen image, information depth (simple recall or interpretation), knowledge density (first or second order), Bloom taxonomy level (1–3), and, for the final, subject familiarity (repeated concept and, if so, whether verbatim). Results.— Assessments comprised 3 quizzes and 1 final exam (125 questions in total), scored during a 3-year period (417 students in total) for a total of 52 125 graded attempts. Overall, 44 890 attempts (86.1%) were correct. In multivariate analysis, question type emerged as the most significant predictor of student performance, degree of difficulty, and item discrimination, with multiple choice questions being significantly associated with lower mean scores (P = .004) and higher degree of difficulty (P = .02), but also, paradoxically, poorer discrimination (P = .002). The presence of a specimen image was significantly associated with better discrimination (P = .04), and questions requiring data interpretation (versus simple recall) were significantly associated with lower mean scores (P = .003) and a higher degree of difficulty (P = .046). Conclusions.— Assessments in medical education should comprise combinations of questions with various characteristics in order to encourage better student performance, but also to obtain optimal degrees of difficulty and levels of item discrimination.
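The item-level quantities discussed here have standard classical-test-theory definitions; a sketch computing difficulty (proportion correct) and discrimination (point-biserial correlation of an item against the rest-of-test score), on a hypothetical 0/1 response matrix, is:

```python
# Classical item analysis: difficulty and point-biserial discrimination per
# item. The response matrix is a random placeholder; with real data, low
# discrimination would flag items that do not separate strong from weak students.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(5)
responses = (rng.random((417, 125)) < 0.86).astype(int)  # students x items, 0/1

difficulty = responses.mean(axis=0)  # higher value = easier item
discrimination = []
for j in range(responses.shape[1]):
    rest = responses.sum(axis=1) - responses[:, j]  # total score excluding item j
    r, _ = pointbiserialr(responses[:, j], rest)
    discrimination.append(r)

print(f"item 0: difficulty = {difficulty[0]:.2f}, "
      f"discrimination = {discrimination[0]:.2f}")
```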


2020 ◽  
Author(s):  
Jack Eichler ◽  
Grace Henbest ◽  
Kiana Mortezaei ◽  
Teresa Alvelais ◽  
Courtney Murphy

In an ongoing effort to increase student retention and success in the undergraduate general chemistry course sequence, a fully online preparatory chemistry course was developed and implemented at a large public research university. To gain insight about the efficacy of the online course, post-hoc analyses were carried out in which student performance on final exams and performance in the subsequent general chemistry course were compared between the online cohort and a previous student cohort who completed the preparatory chemistry course in a traditional lecture format. Because the retention of less academically prepared students in STEM majors is a historical problem at the institution in which the online preparatory chemistry course was implemented, post-hoc analyses were also carried out to determine if this at-risk group demonstrated similar achievement relative to the population at large. Multiple linear regression analyses were used to compare final exam scores and general chemistry course grades between the online and in-person student cohorts, while statistically controlling for incoming student academic achievement. Results from these analyses suggest the fully online course led to increased final exam scores in the preparatory course (unstandardized B = 8.648, p < 0.001) and higher grades in the subsequent general chemistry course (unstandardized B = 0.269, p < 0.001). Notably, students from the lowest quartile of incoming academic preparation appear to have been more positively impacted by the online course experience (preparatory chemistry final exam scores: unstandardized B = 11.103, p < 0.001; general chemistry course grades: unstandardized B = 0.323, p = 0.002). These results suggest a fully online course can help improve student preparation for large populations of students, without creating an achievement gap for less academically prepared students. The structure and implementation of the online course, and the results from the post-hoc analyses, are described herein.
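A sketch of the regression design described above: final-exam score regressed on a cohort indicator while controlling for incoming achievement, so the indicator's unstandardized coefficient plays the role of the reported B. Data and column names are hypothetical:

```python
# Cohort comparison with a covariate: the 'online' dummy's coefficient
# estimates the adjusted difference between cohorts. All data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "online": rng.integers(0, 2, n),          # 1 = fully online cohort
    "incoming_gpa": rng.normal(3.0, 0.4, n),  # proxy for prior achievement
})
df["final_exam"] = (40 + 8.6 * df["online"] + 10 * df["incoming_gpa"]
                    + rng.normal(0, 8, n))

fit = smf.ols("final_exam ~ online + incoming_gpa", df).fit()
print(fit.summary().tables[1])  # the 'online' row is the adjusted cohort effect
```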


2016 ◽  
Vol 30 (2) ◽  
pp. 87-93 ◽  
Author(s):  
Niu Zhang ◽  
Charles N.R. Henderson

Objective: The objective of this study was to evaluate the academic impact of cooperative peer instruction during lecture pauses in an immunology/endocrinology course. Methods: Third-quarter students participated across iterations of the course. Each class offered 20 lectures of 50 minutes each. Classes were divided into a peer-instruction group incorporating cooperative peer instruction and a control group receiving traditional lectures. Peer-instruction group lectures were divided into 2–3 short presentations, each followed by a multiple-choice question (MCQ). Students recorded an initial answer and then had 1 minute to discuss answers with group peers. Following this, students could submit a revised answer. The control group received the same lecture material, but without MCQs or peer discussions. Final-exam scores were compared across study groups. A mixed-design analysis of covariance was used to analyze the data. Results: There was a statistically significant main effect for the peer-instruction activity (F(1, 93) = 6.573, p = .012, r = .257), with recall scores higher for MCQs asked after peer-instruction activities than for those asked before peer instruction. Final-exam scores at the end of term were greater in the peer-instruction group than the control group (F(1, 193) = 9.264, p = .003, r = .214; question type, F(1, 193) = 26.671, p < .001, r = .348). Conclusion: Lectures with peer-instruction pauses increase student recall and comprehension compared with traditional lectures.
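The study used a mixed-design ANCOVA, which statsmodels does not provide directly; a simplified between-subjects ANCOVA sketch, with group as the factor and a prior-achievement covariate, conveys the core idea. Data, group labels, and the covariate are hypothetical:

```python
# Simplified ANCOVA: final-exam score ~ group + covariate, then a type-II
# ANOVA table for the F and p of the group effect. All data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 195
df = pd.DataFrame({
    "group": rng.choice(["peer", "control"], n),
    "prior": rng.normal(70, 10, n),  # covariate, e.g. an entering score
})
df["final_exam"] = (68 + 4 * (df["group"] == "peer") + 0.3 * df["prior"]
                    + rng.normal(0, 6, n))

fit = smf.ols("final_exam ~ C(group) + prior", df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # F and p for the adjusted group effect
```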

