Achieving Testing Effects in an Authentic College Classroom

2021 ◽  
Article 009862832110156
Author(s):  
Elizabeth Shobe

Background: Findings from the testing effect literature suggest several ways to achieve testing effects in an authentic classroom, but few consider the instructor workload, equity, and resource constraints that determine whether testing-effect methods are feasible and sustainable in practice. Objective: To identify elements and procedures from the testing effect literature suitable for practical application, devise a method for implementing testing-effect methods feasibly and sustainably, and determine whether a simple way of incorporating retrieval practice into an existing Introduction to Psychology course was sufficient to observe testing effects. Method: Quiz scores of Introductory Psychology sections with and without retrieval practice were compared. Within the retrieval-practice sections, the effects of repeated versus new questions on quiz performance were also compared. Results: Students who received retrieval practice performed significantly better on quizzes than those who did not. Repeated and new retrieval-practice questions were equally effective. Conclusion: Retrieval practice can be implemented feasibly and sustainably in an authentic classroom environment. Retrieval-practice questions can be related to the delayed quiz questions, rather than exact repeats of them, and still achieve a testing effect. Teaching Implications: Distributing low-stakes multiple-choice questions throughout lectures is effective for increasing test performance, and the method did not burden instructor workload, course content, or resources.

2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Alice Latimier ◽  
Arnaud Riegert ◽  
Hugo Peyre ◽  
Son Thierry Ly ◽  
Roberto Casati ◽  
...  

Abstract Compared with other learning strategies, retrieval practice seems to promote superior long-term retention. This has been found mostly in conditions where learners take tests after being exposed to learning content. However, a pre-testing effect has also been demonstrated, with promising results. This raises the question of whether, for a given amount of time dedicated to retrieval practice, learners should be tested before or after an initial exposure to learning content. Our experiment directly compared the benefits of post-testing and pre-testing relative to an extended reading condition, on a retention test 7 days later. We replicated both the post-testing (d = 0.74) and pre-testing (d = 0.35) effects, with significantly better retention in the former condition. Post-testing also promoted knowledge transfer to previously untested questions, whereas pre-testing did not. Our results thus suggest that it may be more fruitful to test students after, rather than before, exposure to learning content.
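For readers less familiar with the effect sizes reported above, Cohen's d is the standardized mean difference between two conditions. A minimal sketch of the usual between-subjects formula is (the exact variant the authors computed, e.g., Hedges' g or a within-subject correction, may differ):

d = \frac{\bar{M}_{\mathrm{test}} - \bar{M}_{\mathrm{control}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

By this convention, the reported post-testing benefit (d = 0.74) is a medium-to-large effect relative to extended reading, and the pre-testing benefit (d = 0.35) a small-to-medium one.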


Author(s):  
Sarah K. (Uma) Tauber ◽  
John Dunlosky ◽  
Katherine A. Rawson

Abstract. The positive effect of delayed retrieval practice on subsequent test performance is robust; by contrast, making delayed judgments of learning (JOLs) encourages covert retrieval but has a minor influence on final test performance. In three experiments, we experimentally established and explored this memory-metamemory paradox. After initial study of paired associates (e.g., husky – ram), participants either were explicitly tested (husky – ?) or made a JOL. In Experiment 1, we adopted the standard JOL method, using a short retention interval, whereas in Experiments 2 and 3, we used a common testing-effect method involving a longer retention interval. Delayed JOLs did not boost test performance, but explicit delayed tests boosted memory after a longer retention interval. Just as important, participants spent less time making JOLs than retrieving responses. These data indicate that differences in the dynamics of retrieval for practice tests versus delayed JOLs are responsible for the paradox.


2018 ◽  
Author(s):  
Steven C. Pan

Attempting recall of information from memory, as occurs when taking a practice test, is one of the most potent training techniques known to learning science. However, does testing yield learning that transfers to different contexts? In the present article, we report the findings of the first comprehensive meta-analytic review of that question. Our review encompassed 192 transfer effect sizes extracted from 122 experiments and 67 published and unpublished articles (N = 10,382) spanning over 40 years of research. A random-effects model revealed that testing can yield transferable learning as measured relative to a non-testing reexposure control condition (d = 0.40, 95% CI [0.31, 0.50]). That transfer of learning is greatest across test formats, to application and inference questions, to problems involving medical diagnoses, and to mediator and related word cues; it is weakest to rearranged stimulus-response items, to untested materials seen during initial study, and to problems involving worked examples. Moderator analyses further indicated that response congruency and elaborated retrieval practice, as well as initial test performance, strongly influence the likelihood of positive transfer. In two assessments for publication bias (using PET-PEESE and various selection methods), the moderator effect sizes were minimally affected. However, the intercept predictions were substantially reduced, often indicating no positive transfer when none of the aforementioned moderators are present. Overall, our results motivate a three-factor framework for transfer of test-enhanced learning and have practical implications for the effective use of practice testing in educational and other training contexts.
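To illustrate the kind of pooling a random-effects model performs, the following is a minimal, self-contained Python sketch of DerSimonian-Laird random-effects aggregation. The effect sizes, variances, and function name below are hypothetical and for illustration only; the review itself used dedicated meta-analytic procedures plus moderator and publication-bias analyses (PET-PEESE, selection methods) not shown here.

```python
import numpy as np

def random_effects_pool(d, var):
    """DerSimonian-Laird random-effects pooling of effect sizes.

    d   : per-study effect sizes (e.g., Cohen's d)
    var : their sampling variances
    Returns the pooled effect, its standard error, and a 95% CI.
    """
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                          # inverse-variance (fixed-effect) weights
    d_fixed = np.sum(w * d) / np.sum(w)    # fixed-effect pooled estimate
    Q = np.sum(w * (d - d_fixed) ** 2)     # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)  # between-study variance
    w_star = 1.0 / (var + tau2)            # random-effects weights
    d_re = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_re, se, (d_re - 1.96 * se, d_re + 1.96 * se)

# Hypothetical study-level values, for illustration only
d = [0.55, 0.20, 0.48, 0.35, 0.62]
var = [0.02, 0.04, 0.03, 0.05, 0.02]
pooled, se, ci = random_effects_pool(d, var)
print(f"pooled d = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Unlike a fixed-effect model, the random-effects weights incorporate the between-study variance (tau squared), which widens the confidence interval when studies disagree more than sampling error alone would predict.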


Author(s):  
Ademir Garcia Reberti ◽  
Nayme Hechem Monfredini ◽  
Olavo Franco Ferreira Filho ◽  
Dalton Francisco de Andrade ◽  
Carlos Eduardo Andrade Pinheiro ◽  
...  

Abstract: The Progress Test is an objective assessment, consisting of 60 to 150 multiple-choice questions, designed to assess the cognitive skills expected at the end of undergraduate medical education. The test is administered to all students on the same day, making it possible to compare results across class years and to track the development of knowledge throughout the course. This study carried out a systematic literature review on the Progress Test in medical schools in Brazil and around the world, examining the benefits of its implementation for the learning of students, teachers, and institutions. The review was conducted from July 2018 to April 2019 and covered articles published from January 2002 to March 2019. The keywords used were "Progress Test in Medical Schools" and "Item Response Theory in Medicine", searched in the PubMed, SciELO, and Lilacs databases. There was no language restriction on article selection, but the searches were conducted in English. A total of 192,026 articles were identified; after applying advanced search filters, 11 articles were included in the study. The Progress Test (PTMed) has been applied in medical schools, either individually or in groups of partner schools, since the late 1990s. The test results are used to build students' performance curves, which make it possible to identify students' weaknesses and strengths in the several areas of knowledge related to the course. The Progress Test is not exclusively an instrument for assessing student performance; it is also an important tool for academic management, and it is therefore crucial that institutions take an active role in preparing and analyzing these assessment data. Assessments designed to test clinical competence in medical students need to be valid and reliable, and for the method to remain valid it must be extensively reviewed and studied, with a view to improving and adjusting the test.
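The search keywords above mention Item Response Theory (IRT), which is commonly used to score Progress Tests and to build the performance curves described in the abstract. As a point of reference only, and assuming the common two-parameter logistic (2PL) model (the specific IRT model used by any given institution may differ), the probability that a student of ability \theta answers item i correctly is

P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}

where a_i is the item's discrimination and b_i its difficulty; plotting expected scores against course year yields the longitudinal performance curves.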


2019 ◽  
Vol 33 (5) ◽  
pp. 759-770
Author(s):  
Jack M. I. Leggett ◽  
Jennifer S. Burt ◽  
Annemaree Carroll

1989 ◽  
Vol 16 (2) ◽  
pp. 77-78 ◽  
Author(s):  
Paul W. Foos

Effects of student-written test questions on student test performance were examined in an Introductory Psychology class. Before each of three tests, randomly assigned students wrote essay questions, multiple-choice questions, or no questions. All tests contained essay and multiple-choice items but no questions written by students. Question writers performed significantly better than nonwriters on the first two tests; the difference on the third test was marginally significant. No differences were found between students who wrote essay and those who wrote multiple-choice questions. Question writing appears to be an effective study technique.


2010 ◽  
Vol 102 (3) ◽  
pp. 729-740 ◽  
Author(s):  
Lisa A. Fast ◽  
James L. Lewis ◽  
Michael J. Bryant ◽  
Kathleen A. Bocian ◽  
Richard A. Cardullo ◽  
...  

Author(s):  
Amitabha Ghosh

This paper highlights some important obstacles to student test performance arising from different forms of testing procedures in Statics and Dynamics. A group approach dictates the core pedagogy in these classes, which are components of the Engineering Sciences Core Curriculum (ESCC) at the Rochester Institute of Technology (RIT). Our observations indicate that the difficulties begin before the engineering science courses, owing to an incomplete understanding of mathematics and physics. While the human aspects of this assessment may not be revealed on tests, the results of long hours of counseling sessions between students, faculty, and academic advisors have now been embedded in the design of our program. Yet despite our streamlined processes of improved delivery and testing, many good students earn superior scores on essay-type questions while showing poor understanding of concepts, as revealed by analysis of their multiple-choice (MC) responses. This performance gap has been traced to a narrow focus and a lack of retention of prior concepts in active memory. The paper discusses these topics using a selected set of multiple-choice questions administered on Statics and Dynamics examinations and offers remedial actions, including a proposal for a new course.

