What factors impact student performance in introductory physics?

PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0244146
Author(s):  
Eric Burkholder ◽  
Lena Blackmon ◽  
Carl Wieman

In a previous study, we found that students' incoming preparation in physics—crudely measured by concept inventory prescores and math SAT or ACT scores—explains 34% of the variation in Physics 1 final exam scores at Stanford University. In this study, we sought to understand the large variation in exam scores not explained by these measures of incoming preparation. Why are some students successful in Physics 1 independent of their preparation? To answer this question, we interviewed 34 students with particularly low concept inventory prescores and math SAT/ACT scores about their experiences in the course. We unexpectedly found a set of common practices and attitudes. We found that students' use of instructional resources had relatively little impact on course performance, while student characteristics, student attitudes, and students' interactions outside the classroom all had a more substantial impact on course performance. These results offer some guidance as to how instructors might help all students succeed in introductory physics courses.
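The 34% figure corresponds to the R² of a regression of final exam score on the two preparation measures. Below is a minimal sketch of that kind of calculation, using synthetic data and hypothetical column names (ci_prescore, math_sat, final_exam) purely for illustration; it is not the study's dataset or exact model.

```python
# Illustrative sketch only: synthetic data and hypothetical column names,
# not the dataset used in the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "ci_prescore": rng.normal(50, 15, n),   # concept inventory prescore (%)
    "math_sat": rng.normal(700, 60, n),     # math SAT (or concorded ACT) score
})
# Synthetic final exam score loosely tied to preparation, plus noise.
df["final_exam"] = (0.4 * df["ci_prescore"] + 0.05 * df["math_sat"]
                    + rng.normal(0, 10, n))

# The R^2 of this regression is the fraction of exam-score variance
# explained by the two preparation measures.
model = smf.ols("final_exam ~ ci_prescore + math_sat", data=df).fit()
print(f"Variance explained (R^2): {model.rsquared:.2f}")
```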

2018 ◽  
Vol 96 (4) ◽  
pp. 411-419 ◽  
Author(s):  
Nafis I. Karim ◽  
Alexandru Maries ◽  
Chandralekha Singh

We describe the impact of physics education research-based pedagogical techniques in flipped and active-engagement non-flipped courses on student performance on validated conceptual surveys. We compare student performance in courses that make significant use of evidence-based active engagement (EBAE) strategies with courses that primarily use lecture-based (LB) instruction. All courses had large enrollments, often 100–200 students. The analysis presented here includes validated conceptual survey data from large numbers of students in two-semester sequences of introductory algebra-based and calculus-based physics courses. The conceptual surveys used to assess student learning in the first and second semester courses were the Force Concept Inventory and the Conceptual Survey of Electricity and Magnetism, respectively. In the research discussed here, the performance of students in EBAE courses at a particular level is compared with LB courses in two situations: (i) the same instructor taught two courses, one of which was a flipped course involving EBAE methods and the other an LB course, while the homework, recitations, and final exams were kept the same; (ii) student performance in all of the EBAE courses taught by different instructors was averaged and compared with LB courses of the same type, also averaged over different instructors. In all cases, we find that students in courses that make significant use of active-engagement strategies, on average, outperformed students in courses of the same type using primarily LB instruction on conceptual surveys, even though there was no statistically significant difference on the pretest before instruction. We also discuss the correlation between performance on the validated conceptual surveys and the final exam, which typically placed a heavy weight on quantitative problem solving.
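The comparison described above (no significant pretest difference, but higher posttest averages in EBAE courses) can be illustrated with a simple two-sample test. The sketch below uses synthetic, percentage-scaled survey scores and is only schematic; it is not the authors' analysis.

```python
# Schematic comparison with synthetic scores; not the authors' analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ebae_pre,  lb_pre  = rng.normal(40, 12, 150), rng.normal(40, 12, 150)
ebae_post, lb_post = rng.normal(65, 14, 150), rng.normal(55, 14, 150)

# Expect no significant EBAE vs. LB difference on the pretest...
t_pre, p_pre = stats.ttest_ind(ebae_pre, lb_pre)
# ...but a posttest difference would indicate an instruction effect.
t_post, p_post = stats.ttest_ind(ebae_post, lb_post)

print(f"pretest:  t = {t_pre:.2f}, p = {p_pre:.3f}")
print(f"posttest: t = {t_post:.2f}, p = {p_post:.3f}")
```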


2021 ◽  
pp. 009862832110210
Author(s):  
Manda J. Williamson ◽  
Jonah Garbin

Background: Researchers suggest benefits for cooperative learning, but often fail to control for students' choice to engage cooperatively, ACT scores, or early course performance. Objective: To observe the effects of choosing cooperative work on exam performance in an Introduction to Psychology course, while controlling for early exam performance and ACT scores. Method: Data from 261 students were used to assess the interaction between group (choosing to work cooperatively, choosing to work alone, or being required to work alone) and ACT score or early test performance, respectively. Results: We observed an interaction between group and ACT on final exam scores, indicating that students who worked cooperatively showed the greatest exam benefits at lower ACT scores. Additionally, a trend toward a significant interaction was found between group and early exam performance, indicating a possible benefit of choosing to work cooperatively for low performers. Conclusion: Choosing to engage in cooperative learning may reduce the impact of ACT-indicated skill differences and early exam success on final exam performance. Teaching Implications: To reduce the impact of ACT-related differences on exam scores, the choice to complete cooperative learning activities should be offered in Introduction to Psychology courses.
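The reported group-by-ACT interaction is the kind of moderation effect that a regression with an interaction term captures. Below is a minimal sketch under assumed variable names (group, act, early_exam, final_exam) and synthetic data; the authors' actual model specification may differ.

```python
# Moderation sketch with synthetic data and assumed variable names;
# not the authors' dataset or exact model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 261
df = pd.DataFrame({
    "group": rng.choice(["coop_choice", "alone_choice", "alone_required"], n),
    "act": rng.normal(25, 4, n),
    "early_exam": rng.normal(75, 10, n),
})
df["final_exam"] = 40 + 1.2 * df["act"] + rng.normal(0, 8, n)

# Group x ACT interaction: does the ACT slope on the final exam
# differ across the cooperative-learning conditions?
model = smf.ols("final_exam ~ C(group) * act + early_exam", data=df).fit()
print(model.summary().tables[1])
```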


2021 ◽  
Vol 03 (02) ◽  
pp. 2150007
Author(s):  
James Overduin ◽  
Jacob Buchman ◽  
Jonathan Perry ◽  
Thomas Krause

We report on preliminary results of a statistical study of student performance in more than a decade of calculus-based introductory physics courses. Treating average homework and test grades as proxies for student effort and comprehension, respectively, we plot comprehension versus effort in an academic version of the astronomical Hertzsprung–Russell diagram (which plots stellar luminosity versus temperature). We study the evolution of this diagram with time, finding that the “academic main sequence” has begun to break down in recent years as student achievement on tests has become decoupled from homework grades. We present evidence that this breakdown is likely related to the emergence of easily accessible online solutions to most textbook problems, and discuss possible responses and strategies for maintaining and enhancing student learning in the online era.
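The "academic Hertzsprung–Russell diagram" described above is, in essence, a scatter plot of a comprehension proxy against an effort proxy. The sketch below shows how such a plot might be produced; the data are synthetic placeholders, not the course records analyzed in the paper.

```python
# Synthetic placeholder data; illustrates the plot, not the paper's results.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
homework = rng.uniform(40, 100, 200)                                # effort proxy
tests = np.clip(0.8 * homework + rng.normal(0, 10, 200), 0, 100)    # comprehension proxy

plt.scatter(homework, tests, s=12, alpha=0.6)
plt.xlabel("Average homework grade (effort proxy)")
plt.ylabel("Average test grade (comprehension proxy)")
plt.title("Academic 'main sequence' (illustrative)")
plt.show()
```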


2009 ◽  
Vol 87 (8) ◽  
pp. 917-924 ◽  
Author(s):  
Rachel F. Moll ◽  
Marina Milner-Bolotin

This paper examines the effects of computer-based Interactive Lecture Experiments (ILEs) in a large introductory physics course on student academic achievement and attitudes towards physics. ILEs build on interactive lecture demonstrations by requiring students to analyze data during and after lecture demonstrations. Academic achievement was measured using the Force Concept Inventory (FCI) and final examination grades, and student attitudes were measured using the Colorado Learning Attitudes about Science Survey (CLASS). FCI results showed a general positive shift (about average for an interactive course) but could not detect improvements in student understanding of the specific topics addressed by ILEs. However, open-ended questions on the final exam showed differences between sections on topics that were addressed by ILEs. Attitude survey results showed a negative shift in student attitudes over the semester, which is a typical result for an introductory physics course. This finding suggests that ILE pedagogy alone is insufficient to significantly improve student attitudes toward science. The study also revealed possible improvements to the implementation of ILEs, such as working in groups, ongoing feedback for students, and linking assessment to pedagogical practices.
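A common way to quantify an FCI shift like the "general positive shift (about average for an interactive course)" mentioned above is the Hake normalized gain, g = (post - pre) / (100 - pre). The abstract does not state which metric the authors report, so the sketch below is only an illustration of that standard calculation, with placeholder scores.

```python
# Placeholder scores; shows the normalized-gain arithmetic only.
import numpy as np

pre = np.array([35.0, 50.0, 42.0, 60.0])    # FCI pretest scores (%)
post = np.array([55.0, 72.0, 60.0, 78.0])   # FCI posttest scores (%)

# Class-average normalized gain: <g> = (<post> - <pre>) / (100 - <pre>)
g = (post.mean() - pre.mean()) / (100.0 - pre.mean())
print(f"normalized gain <g> = {g:.2f}")
```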


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249086
Author(s):  
Eric Burkholder ◽  
Shima Salehi ◽  
Carl E. Wieman

Providing less prepared students with supplemental instruction (SI) in introductory STEM courses has long been used as a model in math, chemistry, and biology education to improve student performance, but this model has received little attention in physics education research. We analyzed the course performance of students enrolled in SI courses for introductory mechanics and electricity and magnetism (E&M) at Stanford University compared with those not enrolled in the SI courses over a two-year period. We calculated the benefit of the SI course using multiple linear regression to control for students’ level of high school physics and math preparation. We found that the SI course had a significant positive effect on student performance in E&M, but that an SI course with a nearly identical format had no effect on student performance in mechanics. We explored several different potential explanations for why this might be the case and were unable to find any that could explain this difference. This suggests that there are complexities in the design of SI courses that are not fully understood or captured by existing theories as to how they work.
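Estimating the SI benefit "using multiple linear regression to control for students' level of high school physics and math preparation" amounts to regressing course score on an SI-enrollment indicator plus preparation covariates. A sketch under assumed column names (si, physics_prep, math_prep, course_score) and synthetic data follows; it is not the authors' model.

```python
# Synthetic data and assumed column names; not the authors' model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "si": rng.integers(0, 2, n),             # 1 = enrolled in the SI course
    "physics_prep": rng.normal(50, 15, n),   # e.g., physics concept-inventory prescore
    "math_prep": rng.normal(650, 70, n),     # e.g., math SAT/ACT score
})
df["course_score"] = (0.3 * df["physics_prep"] + 0.04 * df["math_prep"]
                      + 3.0 * df["si"] + rng.normal(0, 8, n))

# The coefficient on `si` estimates the SI benefit net of incoming preparation.
model = smf.ols("course_score ~ si + physics_prep + math_prep", data=df).fit()
print(f"SI effect: {model.params['si']:.2f} (p = {model.pvalues['si']:.3g})")
```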


2020 ◽  
Vol 66 (10) ◽  
pp. 1376-1382
Author(s):  
Maria Cristina de Andrade ◽  
Maria Wany Louzada Strufaldi ◽  
Rimarcs Gomes Ferreira ◽  
Gilmar Fernandes do Prado ◽  
Rosana Fiorini Puccini ◽  
...  

OBJECTIVE: To determine whether the scores of the Progress test, the Skills and Attitude test, and the medical internship are correlated with the medical residency exam performance of students who started medical school at the Federal University of São Paulo in 2009. METHODS: The scores of 684 Progress tests from years 1-6 of medical school, 111 Skills and Attitude exams (5th year), 228 performance coefficients for the 5th and 6th years of internship, and 211 scores on the medical residency exam were analyzed longitudinally. Correlations between scores were assessed by Pearson's correlation. Factors associated with medical residency scores were analyzed by linear regression. RESULTS: Scores of Progress tests from years 1-6 and the Skills and Attitude test showed at least one moderate and significant correlation with each other. The theoretical exam and final exam scores in the medical residency had a moderate correlation with performance in the internship. The score of the theoretical medical residency exam was associated with performance in internship year 6 (β=0.833; p<0.001), and the final medical residency exam score was associated with the Skills and Attitude score (β=0.587; p<0.001), 5th-year internship score (β=0.060; p=0.025), and 6th-year Progress test score (β=0.038; p=0.061). CONCLUSIONS: The scores of these tests showed significant correlations. The medical residency exam scores were positively associated with the student's performance in the internship and on the Skills test, with a tendency for the final medical residency exam score to be associated with the 6th-year Progress test.
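The approach of Pearson correlations followed by linear regression can be illustrated with a small scipy sketch. The scores below are placeholders for one pair of measures (6th-year Progress test vs. residency exam); the β values in the abstract come from the authors' multivariable models, which this univariate illustration does not reproduce.

```python
# Placeholder scores for one pair of measures; univariate illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
progress_y6 = rng.normal(65, 10, 100)                    # 6th-year Progress test score
residency = 0.8 * progress_y6 + rng.normal(0, 6, 100)    # residency exam score

# Pearson correlation between the two score series.
r, p = stats.pearsonr(progress_y6, residency)
# Simple (univariate) linear regression of residency score on Progress score.
slope, intercept, r_value, p_value, stderr = stats.linregress(progress_y6, residency)
print(f"r = {r:.2f} (p = {p:.3g}); slope = {slope:.2f}")
```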


2017 ◽  
Vol 31 (2) ◽  
pp. 96-101 ◽  
Author(s):  
Niu Zhang ◽  
Charles N.R. Henderson

Objective: Three hypotheses were tested in a chiropractic education program: (1) Collaborative topic-specific exams during a course would enhance student performance on a noncollaborative final exam administered at the end of the term, compared to students given traditional (noncollaborative) topic-specific exams during the course. (2) Requiring reasons for answer changes during collaborative topical exams would further enhance final-exam performance. (3) There would be a differential question-type effect on the cumulative final exam, with greater improvement in comprehension question scores compared to simple recall question scores. Methods: A total of 223 students participated in the study. Students were assigned to 1 of 3 study cohorts: (1) control – a traditional, noncollaborative exam format; (2) collaborative exam only (CEO) – a collaborative format, not requiring answer change justification; and (3) collaborative exam with justification (CEJ) – a collaborative exam format, but requiring justification for answer changes. Results: Contrary to expectation (hypothesis 1), there was no significant difference between control and CEO final exam scores (p = .566). However, CEJ final exam scores were statistically greater (hypothesis 2) than the control (p = .010) and CEO (p = .011) scores. There was greater collaboration benefit when answering comprehension than recall questions during topic-specific exams (p < .001), but this did not differentially influence study cohort final exam scores (p = .571, hypothesis 3). Conclusion: We conclude that test collaboration with the requirement that students explain the reason for making answer changes is a more effective learning tool than simple collaboration that does not require answer change justification.
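Comparing final exam scores across the control, CEO, and CEJ cohorts is typically done with an omnibus test followed by pairwise comparisons. The sketch below is schematic, with synthetic scores, and omits any multiple-comparison corrections or mixed models the authors may have used.

```python
# Synthetic scores for the three cohorts; schematic comparison only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
control = rng.normal(75, 8, 75)   # traditional, noncollaborative exams
ceo     = rng.normal(75, 8, 74)   # collaborative exams only
cej     = rng.normal(79, 8, 74)   # collaborative exams with justification

# Omnibus test across cohorts, then pairwise comparisons against control.
f, p = stats.f_oneway(control, ceo, cej)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")
for name, grp in [("CEO", ceo), ("CEJ", cej)]:
    t, p_pair = stats.ttest_ind(control, grp)
    print(f"control vs {name}: p = {p_pair:.3f}")
```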


2020 ◽  
Author(s):  
Jack Eichler ◽  
Grace Henbest ◽  
Kiana Mortezaei ◽  
Teresa Alvelais ◽  
Courtney Murphy

In an ongoing effort to increase student retention and success in the undergraduate general chemistry course sequence, a fully online preparatory chemistry course was developed and implemented at a large public research university. To gain insight about the efficacy of the online course, post-hoc analyses were carried out in which student performance on final exams and performance in the subsequent general chemistry course were compared between the online cohort and a previous student cohort who completed the preparatory chemistry course in a traditional lecture format. Because the retention of less academically prepared students in STEM majors is a historical problem at the institution in which the online preparatory chemistry course was implemented, post-hoc analyses were also carried out to determine whether this at-risk group demonstrated similar achievement relative to the population at large. Multiple linear regression analyses were used to compare final exam scores and general chemistry course grades between the online and in-person student cohorts, while statistically controlling for incoming student academic achievement. Results from these analyses suggest the fully online course led to increased final exam scores in the preparatory course (unstandardized B = 8.648, p < 0.001) and higher grades in the subsequent general chemistry course (unstandardized B = 0.269, p < 0.001). Notably, students from the lowest quartile of incoming academic preparation appear to have been more positively impacted by the online course experience (preparatory chemistry final exam scores: unstandardized B = 11.103, p < 0.001; general chemistry course grades: unstandardized B = 0.323, p = 0.002). These results suggest a fully online course can help improve student preparation for large populations of students, without resulting in a negative achievement gap for less academically prepared students. The structure and implementation of the online course, and the results from the post-hoc analyses, are described herein.
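The unstandardized B values reported above come from multiple linear regressions that compare cohorts while controlling for incoming achievement, with a separate look at the lowest preparation quartile. A sketch of that general setup follows, using synthetic data and hypothetical column names (online, incoming, final_exam); it is not the authors' exact specification.

```python
# Synthetic data and hypothetical column names; not the authors' specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "online": rng.integers(0, 2, n),      # 1 = fully online preparatory course
    "incoming": rng.normal(3.2, 0.5, n),  # incoming achievement measure (e.g., GPA)
})
df["final_exam"] = 50 + 10 * df["incoming"] + 6 * df["online"] + rng.normal(0, 8, n)
df["low_quartile"] = (df["incoming"] <= df["incoming"].quantile(0.25)).astype(int)

# The coefficient on `online` plays the role of an unstandardized B for the cohort
# effect; the interaction probes whether the lowest quartile responds differently.
model = smf.ols("final_exam ~ online * low_quartile + incoming", data=df).fit()
print(model.params[["online", "online:low_quartile"]])
```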

