Small Changes: Using Assessment to Direct Instructional Practices in Large-Enrollment Biochemistry Courses

2017
Vol 16 (1)
pp. ar7
Author(s):  
Xiaoying Xu ◽  
Jennifer E. Lewis ◽  
Jennifer Loertscher ◽  
Vicky Minderhout ◽  
Heather L. Tienson

Multiple-choice assessments provide a straightforward way for instructors of large classes to collect data on student understanding of key concepts at the beginning and end of a course. By tracking student performance over time, instructors receive formative feedback about their teaching and can assess the impact of instructional changes. Evidence of instructional effectiveness can in turn inform future instruction, which in turn generates new evidence. In this study, we analyzed student responses on an optimized pretest and posttest administered during four different quarters in a large-enrollment biochemistry course. We examined student performance and the effect of instructional interventions related to three fundamental concepts: hydrogen bonding, bond energy, and pKa. After instructional interventions, a larger proportion of students demonstrated knowledge of these concepts than before the interventions. Student responses trended from inconsistent to consistent and from incorrect to correct. The instructional effect was particularly pronounced for hydrogen bonding and bond energy in the three later quarters. This study supports the use of multiple-choice instruments to assess the effectiveness of instructional interventions, especially in large classes, because they give instructors quick and reliable feedback on student knowledge of each specific fundamental concept.
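As a rough illustration of the kind of pretest/posttest analysis described above, the sketch below tallies, per concept, how many students move from incorrect to correct answers across the two administrations. The concept names echo the abstract, but the answer key, response layout, and all data are hypothetical; a real analysis would read matched responses from the course's answer files.

```python
from collections import Counter

# Hypothetical paired (pretest, posttest) answer choices per student, keyed by
# concept; the answer key below is likewise invented for illustration.
responses = {
    "hydrogen bonding": [("B", "A"), ("A", "A"), ("C", "A"), ("B", "D")],
    "bond energy": [("C", "B"), ("B", "B"), ("D", "B"), ("B", "B")],
    "pKa": [("A", "C"), ("C", "C"), ("B", "A"), ("C", "C")],
}
answer_key = {"hydrogen bonding": "A", "bond energy": "B", "pKa": "C"}

for concept, pairs in responses.items():
    key = answer_key[concept]
    # Classify each student's trajectory across the two administrations,
    # e.g. ("incorrect", "correct") for a student who improved.
    trajectories = Counter(
        ("correct" if pre == key else "incorrect",
         "correct" if post == key else "incorrect")
        for pre, post in pairs
    )
    n = len(pairs)
    pre_rate = sum(pre == key for pre, _ in pairs) / n
    post_rate = sum(post == key for _, post in pairs) / n
    print(f"{concept}: pre {pre_rate:.0%} -> post {post_rate:.0%} {dict(trajectories)}")
```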

2007
Vol 6 (2)
pp. 163-171
Author(s):  
Norris Armstrong ◽  
Shu-Mei Chang ◽  
Marguerite Brickman

This study examined the impact of cooperative learning activities on student achievement and attitudes in large-enrollment (>250 students) introductory biology classes. We found that students taught using a cooperative learning approach showed greater improvement in their knowledge of course material than students taught using a traditional lecture format. In addition, students viewed the cooperative learning activities highly favorably. These findings suggest that encouraging students to work in small groups and improving feedback between the instructor and the students can help to improve student outcomes even in very large classes. These results should be viewed cautiously, however, until the experiment can be replicated with additional faculty. Strategies for potentially improving the impact of cooperative learning on student achievement in large courses are discussed.


Author(s):  
Jacqueline A. Carnegie

Summative evaluation in large classes of first- and second-year undergraduate courses often relies on multiple choice question (MCQ) exams in order to provide timely feedback. Several versions of those exams are often prepared via computer-based question scrambling in an effort to deter cheating. An important parameter to consider when preparing multiple exam versions is that they must be equivalent in their assessment of student knowledge. This project investigated a possible influence of correct-answer organization on student answer selection when writing multiple versions of MCQ exams. The specific question asked was whether a series of four to five consecutive MCQs in which the same letter represented the correct answer had a detrimental influence on a student's ability to continue selecting the correct answer as he or she moved through that series. Student outcomes from such exams were compared with results from exams with identical questions that did not contain such series. These findings were supplemented by survey data in which students self-assessed the extent to which they paid attention to the distribution of correct-answer choices when writing summative exams, both during their initial answer selection and when transferring their answer letters to the Scantron sheet for grading. Although more than half of survey respondents indicated that they do make note of answer patterning during exams, and that a series of four to five questions with the same correct-answer letter would encourage many of them to take a second look at their answer choice, the student outcome results suggest that MCQ randomization, even when it produces short serial arrays of letter-specific correct answers, does not constitute a distraction capable of adversely influencing student performance.
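The run-of-identical-letters scenario studied above is straightforward to reason about in code. The sketch below is a hypothetical version generator: it shuffles question order and answer options, then reports the longest run of consecutive questions whose correct answer lands on the same letter (the four-to-five-question series the study examined). The question data, seed, and helper names are invented for illustration, not the authors' actual scrambling software.

```python
import random

def scramble_version(questions, rng):
    """Shuffle question order and answer options for one exam version.

    Each question is (stem, options, correct_option); returns a list of
    (stem, shuffled_options, correct_letter) for the new version.
    """
    letters = "ABCDE"
    version = []
    for stem, options, correct in rng.sample(questions, k=len(questions)):
        shuffled = rng.sample(options, k=len(options))
        version.append((stem, shuffled, letters[shuffled.index(correct)]))
    return version

def longest_same_letter_run(version):
    """Length of the longest run of consecutive questions sharing a correct letter."""
    longest = run = 1
    for (_, _, prev), (_, _, cur) in zip(version, version[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

# Forty placeholder questions whose first option is always the correct one
# before scrambling; scrambling spreads the correct letters around.
questions = [(f"Q{i}", ["opt1", "opt2", "opt3", "opt4"], "opt1") for i in range(40)]
rng = random.Random(42)
version = scramble_version(questions, rng)
run = longest_same_letter_run(version)
print(f"longest run of identical correct letters: {run}")
print(f"contains a 4+ series like the study's: {run >= 4}")
```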


2016
Vol 24 (3)
pp. 341-367
Author(s):  
T. van Oordt ◽  
Ingrid Mulder

Purpose: Educators in the accounting discipline are faced with the challenge of finding innovative ways to accommodate the flexible learning styles of Millennial students, using "in classroom/contact time" effectively and decreasing transactional distance between students and educators in large classes. In an attempt to address these challenges, this paper aims to describe the implementation of basic e-learning tools (podcasts, vodcasts and voice-over-PowerPoint) as supplementary and substitutional tools in an undergraduate taxation curriculum. The tools were implemented as part of a student-centred approach to the facilitation of learning, embedded in Blended Learning Theory. The paper reports on students' use and experience of various basic e-learning tools, as well as the impact of these tools on student performance.

Design/methodology/approach: An action research methodology was followed, and data were collected by way of a voluntary, descriptive student survey and student class lists. A total of 387 students completed the survey.

Findings: Students appear to have access to the devices and data needed to use e-learning tools. They perceive these tools as helpful study aids and prefer synchronous, substitutional tools. Use of the tools does not have a significant impact on performance; however, it does appear to have a positive impact on the learning environment and student engagement.

Originality/value: The results of the study may be of benefit to educators and curriculum designers who are responsible for reviewing and updating the content delivery methods of undergraduate taxation curricula in large classes with diverse student populations. These results add to the limited body of knowledge on the implementation of basic e-learning tools in a South African accounting education setting.


2019
Vol 35 (1)
pp. 15-25
Author(s):  
Kathleen M. Randolph ◽  
Mary Lou Duffy ◽  
Michael P. Brady ◽  
Cynthia L. Wilson ◽  
Mary Catherine Scheeler

Coaching in the school setting typically follows the traditional format of preconference, observation, and postconference, in which feedback on teaching performance is shared but often delayed. Professional development (PD) provides teachers with skills to enhance their teaching practice but typically offers little to no follow-up or support. The most effective way to produce change in the school setting is to show the connection between PD and student performance, and iCoaching can help to bridge that gap. In this study, four teachers participated in a focused PD session and subsequent iCoaching sessions in which the researcher used iPods and Bluetooth earbuds as a bug-in-ear (BIE) device. A coach served as a live, remote observer, providing coaching prompts to increase teacher-delivered opportunities to respond (OTR). The results indicated that iCoaching was effective in increasing teacher-delivered OTR, student responses, and academic performance.


2020
Vol 7
pp. 238212052097718
Author(s):  
Katherine Gruenberg ◽  
Tina Brock ◽  
Joshua Garcia ◽  
Conan MacDougall

Background and Purpose: Therapeutic reasoning, the mental process of making judgments and decisions about treatment, is developed through acquisition of knowledge and application in actual or simulated experiences. Health professions education frequently uses collaborative small-group work to practice therapeutic reasoning. This pilot study compared the impact of a web-based/mobile tool for collaborative case work and discussion against usual practice on student perceptions and on performance on questions designed to test therapeutic knowledge and reasoning.

Methods: In a therapeutics course that includes case-based workshops, student teams of 3 to 4 were randomly assigned to usual workshop preparation (group SOAP sheet) or preparation using the Practice Improvement using Virtual Online Training (PIVOT) platform. PIVOT was also used in the workshop to review the case and student responses. The next week, groups crossed over to the other condition. Students rated favorability with the preparatory and in-workshop experiences and provided comments about the PIVOT platform via a survey. Student performance on examination items related to the two workshop topics was compared.

Results: One hundred and eleven students (94%) completed post-workshop surveys after both workshops. The majority of students (57%) preferred using the PIVOT platform for workshop collaboration. Favorability ratings for the in-workshop experience did not change significantly from the first to the second study week, regardless of sequence of exposure. There was no relationship between examination item scores and the workshop platform students were exposed to for that content (P = .29). Student responses highlighted the efficiency of working independently before collaborating as a group and the ability to see other students' thought processes as valuable aspects of PIVOT. Students expressed frustration with the PIVOT user interface and the lack of anonymity when discussing their answers in the workshop.

Conclusion: A web-based/mobile platform for student team collaboration on therapeutic reasoning cases discussed in small-group settings yielded favorable ratings and examination performance comparable to standard approaches, and was preferred by a majority of students. During the rapid shift to substantially online learning in the COVID-19 pandemic, virtual collaboration tools like PIVOT may help health professions teachers better support groups working virtually on scaffolded therapeutic reasoning tasks.
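For readers unfamiliar with the AB/BA crossover used in the Methods, the sketch below shows one way to randomize teams to a PIVOT-first or SOAP-first sequence so every team experiences both conditions across the two weeks. The team names, seed, and function are hypothetical illustrations, not the authors' actual assignment procedure.

```python
import random

def crossover_assignment(teams, rng):
    """Randomly split teams into two sequences (PIVOT-first or SOAP-first),
    with each team switching to the other condition in week two."""
    shuffled = rng.sample(teams, k=len(teams))
    half = len(shuffled) // 2
    schedule = {}
    for team in shuffled[:half]:
        schedule[team] = ("PIVOT", "SOAP")   # AB sequence
    for team in shuffled[half:]:
        schedule[team] = ("SOAP", "PIVOT")   # BA sequence
    return schedule

rng = random.Random(7)
teams = [f"team-{i:02d}" for i in range(1, 31)]
for team, (week1, week2) in sorted(crossover_assignment(teams, rng).items()):
    print(f"{team}: week 1 = {week1}, week 2 = {week2}")
```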


2016
Vol 8
pp. 2
Author(s):  
Stephen Lippi

The testing effect is a phenomenon whereby retention of material increases when individuals are tested on soon-to-be-recalled information (McDaniel, Anderson, Derbish, & Morrisette, 2007). Although this effect is well documented in numerous studies, no study has examined the impact that computer-based quizzes or online companion tools in a course can have on test performance. In addition to the use of online programs, it is important to understand whether the presentation of different question types can increase or decrease student test performance. Although other pedagogical studies have examined the effect of question order on student performance (Norman, 1954; Balch, 1989), none has examined whether exposing students to questions in short-answer format (testing free recall) before a multiple-choice test (testing recognition memory) can lead to increased exam scores. The present study sought to understand how use of an online learning system (MindTap, Cengage) and test format order affect final test scores. Five exams, each worth 150 points and consisting of separate short-answer and multiple-choice sections, were given to each set of Physiological Psychology students at George Mason University. Results indicate that testing order (whether the short-answer or the multiple-choice section was taken first) affects student test performance, and this effect may be mediated by whether or not an online computer program is required. This research has implications for course organization and selection of test format, which may improve student performance.
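To make the 2x2 structure of this design concrete (section order crossed with whether the online companion was required), here is a small sketch that compares mean exam totals across the four cells. All scores are invented placeholders rather than the study's data; the arithmetic merely illustrates how an order effect could differ between tool conditions.

```python
from statistics import mean

# Hypothetical per-student exam totals (out of 150), grouped by the 2x2 design:
# (section order) x (online companion required or not). Values are invented.
scores = {
    ("short-answer first", "MindTap required"): [128, 131, 124, 135, 127],
    ("short-answer first", "no online tool"): [118, 121, 115, 123, 120],
    ("multiple-choice first", "MindTap required"): [119, 117, 122, 114, 120],
    ("multiple-choice first", "no online tool"): [116, 113, 118, 111, 115],
}

for (order, tool), values in scores.items():
    print(f"{order:22s} | {tool:18s} | mean = {mean(values):6.1f}")

# Comparing the order effect within each tool condition hints at the
# interaction the abstract describes.
sa_req = mean(scores[("short-answer first", "MindTap required")])
mc_req = mean(scores[("multiple-choice first", "MindTap required")])
sa_no = mean(scores[("short-answer first", "no online tool")])
mc_no = mean(scores[("multiple-choice first", "no online tool")])
print(f"order effect with tool:    {sa_req - mc_req:.1f} points")
print(f"order effect without tool: {sa_no - mc_no:.1f} points")
```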


2011
Vol 15 (1)
Author(s):  
Nanette P. Napier ◽  
Sonal Dekhane ◽  
Stella Smith

This paper describes the conversion of an introductory computing course to the blended learning model at a small, public liberal arts college. Blended learning significantly reduces face-to-face instruction by incorporating rich, online learning experiences. To assess the impact of blended learning on students, survey data were collected at the midpoint and end of the semester, and student performance on the final exam was compared between traditional and blended learning sections. To capture faculty perspectives on teaching blended learning courses, written reflections and discussions from faculty teaching blended learning sections were analyzed. Results indicate that student performance in the traditional and blended learning sections of the course was comparable and that students reported high levels of interaction with their instructor. Faculty teaching the course share insights on transitioning to the blended learning format.
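A minimal sketch of the final-exam comparison described above, assuming per-student scores for each section format are available as simple lists. The numbers are invented, and Welch's t-test is one reasonable choice for such a comparison rather than the authors' stated method.

```python
from scipy import stats

# Hypothetical final exam scores (percent) for each section format;
# real data would come from the course gradebook.
traditional = [78, 85, 92, 70, 88, 81, 76, 90, 84, 79]
blended = [80, 83, 89, 72, 86, 84, 75, 91, 82, 77]

# Welch's t-test does not assume equal variances between the two sections.
t, p = stats.ttest_ind(traditional, blended, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # a large p would be consistent with "comparable"
```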

