Immediate Feedback Assessment Technique: Multiple-Choice Test That “Behaves” like an Essay Examination

2002 ◽  
Vol 90 (1) ◽  
pp. 226-226 ◽  
Author(s):  
Michael L. Epstein ◽  
Gary M. Brosvic

A multiple-choice testing system that provides immediate affirming or corrective feedback and permits allocation of partial credit for proximate knowledge is suggested as an alternative to essay examinations.
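For illustration, a minimal sketch of how such answer-until-correct scoring with partial credit might be computed. The function name and the credit schedule are hypothetical; the point values awarded for second and later attempts are set by the instructor and are not specified in the abstract.

```python
# Sketch of answer-until-correct scoring with decreasing partial credit.
# The credit schedule is illustrative only, not taken from any published form.

def score_item(correct: str, attempts: list[str],
               credit_schedule: tuple[float, ...] = (1.0, 0.5, 0.25, 0.0)) -> float:
    """Credit earned on one item.

    correct: the keyed option, e.g. "B"
    attempts: options selected in order until the key is uncovered, e.g. ["C", "B"]
    credit_schedule: credit if the key is found on the 1st, 2nd, 3rd, ... attempt
        (hypothetical values).
    """
    for attempt_number, choice in enumerate(attempts):
        if choice == correct:
            if attempt_number < len(credit_schedule):
                return credit_schedule[attempt_number]
            return 0.0
    return 0.0  # the keyed option was never selected


if __name__ == "__main__":
    print(score_item("B", ["B"]))             # first try -> 1.0 (full credit)
    print(score_item("B", ["C", "B"]))        # second try -> 0.5 (partial credit)
    print(score_item("B", ["C", "D", "A"]))   # never correct -> 0.0
```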

Author(s):  
Michael Williams ◽  
Eileen Wood ◽  
Fatma Arslantas ◽  
Steve MacNeil

Multiple-choice testing with dichotomous scoring is one of the most common assessment methods in undergraduate education. Determining students’ perceptions of different multiple-choice testing formats is important for effective assessment. The present study compared two alternative multiple-choice testing formats used in a second-year required chemistry course: (1) the Immediate Feedback Assessment Technique (IFAT®) and (2) Personal Point Allocation (PPA). Both testing methods allow for partial credit, but only the IFAT® provides immediate feedback on students’ responses. Both survey and interview data indicated that, overall, most students preferred the IFAT® to the PPA testing method. These positive ratings were related to a potential increase in reward, ease of use, and confidence. The IFAT® was also perceived to be less stress-inducing and anxiety-provoking than PPA. Interview data supported these findings but also indicated individual differences in preference between the two methods. Additionally, students’ feedback on strategies used with either testing method, along with suggestions for improving both methods, is discussed.
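The abstract does not spell out the mechanics of Personal Point Allocation, so the sketch below assumes one plausible rule: the student spreads a fixed point budget across the options and earns the fraction placed on the keyed answer. All names and values are hypothetical.

```python
# Hypothetical Personal Point Allocation (PPA) scoring: the student distributes
# a fixed budget across the options; credit equals the fraction placed on the key.
# This rule is an assumption for illustration, not the published PPA procedure.

def ppa_score(correct: str, allocation: dict[str, float], budget: float = 100.0) -> float:
    """Fraction of credit earned for one item under the assumed PPA rule."""
    assert abs(sum(allocation.values()) - budget) < 1e-6, "allocation must use the full budget"
    return allocation.get(correct, 0.0) / budget


if __name__ == "__main__":
    # A confident student stakes everything on "B"; a hedging student spreads points.
    print(ppa_score("B", {"A": 0, "B": 100, "C": 0, "D": 0}))    # -> 1.0
    print(ppa_score("B", {"A": 20, "B": 60, "C": 10, "D": 10}))  # -> 0.6
```

Unlike the answer-until-correct rule sketched earlier, this scheme asks students to commit their confidence before any feedback is given.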


2019 ◽  
Vol 2 (1) ◽  
pp. 80-92
Author(s):  
Dane Christian Joseph

Multiple-choice testing is a staple of the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers rely on a variety of validated and heuristic-driven item-writing guidelines. One guideline that has received recent attention is to randomize the position of the correct answer throughout the entire answer key. Doing so theoretically limits the number of correct guesses test-takers can make and thus reduces the amount of construct-irrelevant variance in test score interpretations. This study empirically tested the answer-key randomization strategy. Specifically, a factorial ANOVA was conducted to examine differences in General Biology classroom multiple-choice test scores as a function of the method used to vary the correct answer’s position and of student ability. Although no statistically significant differences were found, the paper argues that the guideline is nevertheless ethically substantiated.
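A short sketch of the item-writing guideline in question: placing the keyed answer at a uniformly random position for every item, so the answer key carries no exploitable positional pattern. The item structure and function name are assumptions for illustration.

```python
# Randomize the position of the correct answer across an answer key.
# Assumes four-option items; structure and names are illustrative.
import random

def randomize_key_positions(items, rng=random):
    """items: dicts with 'stem', 'correct', and 'distractors' (three strings).
    Returns (shuffled_items, answer_key) with option order randomized per item."""
    shuffled, answer_key = [], []
    for item in items:
        options = [item["correct"], *item["distractors"]]
        rng.shuffle(options)                          # uniform random ordering
        key_index = options.index(item["correct"])
        shuffled.append({"stem": item["stem"], "options": options})
        answer_key.append("ABCD"[key_index])          # letter of the keyed position
    return shuffled, answer_key


if __name__ == "__main__":
    bank = [{"stem": "Which organelle synthesizes most cellular ATP?",
             "correct": "Mitochondrion",
             "distractors": ["Ribosome", "Golgi apparatus", "Lysosome"]}]
    _, key = randomize_key_positions(bank)
    print(key)  # e.g. ['C']; the keyed position varies run to run
```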


2001 ◽  
Vol 88 (3) ◽  
pp. 889-894 ◽  
Author(s):  
Michael L. Epstein ◽  
Beth B. Epstein ◽  
Gary M. Brosvic

Performance on two multiple-choice testing procedures was examined during unit tests and a final examination. The Immediate Feedback Assessment Technique provided immediate response feedback in an answer-until-correct style of responding; the Scantron form served as the point of comparison. Students in introductory psychology courses completed one of the two formats during unit tests, whereas all students used the Scantron form on the final examination. Students tested with Immediate Feedback forms on the unit tests correctly answered more of the final examination questions repeated from earlier unit tests than did students tested with Scantron forms. Students tested with Immediate Feedback forms also correctly answered more final examination questions that they had previously answered incorrectly on the unit tests than did students tested with Scantron forms.


2014 ◽  
Vol 4 (1) ◽  
Author(s):  
Antonios Tsopanoglou ◽  
George S. Ypsilandis ◽  
Anna Mouti

Multiple-choice (MC) tests are frequently used to measure language competence because they are quick, economical, and straightforward to score. While degrees of correctness have been investigated for partially correct responses in combined-response MC tests, degrees of incorrectness in distractors, and the role they play in determining the test-taker’s final score, remain comparatively unexplored. This pilot study examines degrees of incorrectness in MC test items and their potential impact on the overall score.
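One way to picture the scoring question raised here is distractor-weighted (polychotomous) scoring, in which each distractor carries a weight reflecting its degree of incorrectness. The weights and option labels below are invented for illustration, not taken from the study.

```python
# Distractor-weighted scoring: credit depends on how "incorrect" the chosen
# option is. All weights are hypothetical.

ITEM_WEIGHTS = {
    "A": 1.0,   # keyed answer
    "B": 0.5,   # nearly correct distractor
    "C": 0.2,   # loosely related distractor
    "D": 0.0,   # clearly wrong distractor
}

def weighted_score(response: str, weights: dict[str, float]) -> float:
    """Credit for a single response under distractor-weighted scoring."""
    return weights.get(response, 0.0)


if __name__ == "__main__":
    print(weighted_score("B", ITEM_WEIGHTS))  # -> 0.5, partial credit
    print(weighted_score("D", ITEM_WEIGHTS))  # -> 0.0
```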


1968 ◽  
Vol 11 (4) ◽  
pp. 825-832 ◽  
Author(s):  
Marilyn M. Corlew

Two experiments investigated the information conveyed by intonation from speaker to listener. A multiple-choice test was devised to test the ability of 48 adults to recognize and label intonation when it was separated from all other meaning. The nine intonation contours whose labels adults agreed upon most were each matched with two English sentences (one with appropriate and one with inappropriate intonation and semantic content) to create a matching test for children. The matching test was tape-recorded and given to children in the first, third, and fifth grades (32 subjects per grade). The first-grade children matched the intonations with significantly greater agreement than chance but agreed on significantly fewer sentences than either the third or fifth graders. Some intonation contours were matched with significantly greater frequency than others. The girls performed better than the boys on an impatient question and a simple command, indicating a significant interaction between sex and intonation.


1967 ◽  
Vol 10 (3) ◽  
pp. 565-569 ◽  
Author(s):  
Kenneth G. Donnelly ◽  
William J. A. Marshall
