Implications of Multiple-Choice Testing in English for Medical Purposes

Author(s):  
Sofija Mićić Kandijaš ◽  
Danka Sinadinović
Author(s):  
Lisa K. Fazio ◽  
Elizabeth J. Marsh ◽  
Henry L. Roediger

2015 ◽  
Vol 21 (2) ◽  
pp. 167-177
Author(s):  
Raymond S. Nickerson ◽  
Susan F. Butler ◽  
Michael T. Carlin

Author(s):  
Rebecca Hamer ◽  
Erik Jan van Rossum

Understanding means different things to different people, influencing what and how students learn and teachers teach. Mainstream understanding of understanding has not progressed beyond the first level of constructivist learning and thinking, i.e. academic understanding. This study, based on 167 student narratives, presents two hitherto unknown conceptions of understanding that match more complex ways of knowing, understanding-in-relativism and understanding-in-supercomplexity, which require the development of more complex versions of constructive alignment. Students comment that multiple-choice testing encourages learning focused on recall and recognition; academic understanding is not assessed often, and more complex forms of understanding are hardly assessed at all in higher education. However, if study success depends on assessments-of-learning that credit them for meaning-oriented learning and deeper understanding, students will put in the effort to succeed.


1989 ◽  
Vol 69 (3_suppl) ◽  
pp. 1131-1135 ◽  
Author(s):  
Warren A. Weinberg ◽  
Anne McLean ◽  
Robert L. Snider ◽  
Jeanne W. Rintelmann ◽  
Roger A. Brumback

Eight groups of learning disabled children (N = 100), categorized by the clinical Lexical Paradigm as good readers or poor readers, were individually administered the Gilmore Oral Reading Test, Form D, by one of four input/retrieval methods: (1) the standardized method of administration, in which the child reads each paragraph aloud and then answers five questions relating to the paragraph [read/recall method]; (2) the child reads each paragraph aloud and then, for each question, selects the correct answer from among three choices read by the examiner [read/choice method]; (3) the examiner reads each paragraph aloud and reads each of the five questions to the child to answer [listen/recall method]; and (4) the examiner reads each paragraph aloud and then, for each question, reads three multiple-choice answers from which the child selects the correct answer [listen/choice method]. The major difference in scores was between the groups tested by the recall versus the orally read multiple-choice methods. This study indicated that poor readers who listened to the material and were tested by an orally read multiple-choice format could perform as well as good readers. The performance of good readers was not affected by listening or by the method of testing. The multiple-choice testing improved the performance of poor readers independent of the input method. This supports the arguments made previously that a “bypass approach” to the education of poor readers, in which testing is accomplished using an orally read multiple-choice format, can enhance the child's school performance on reading-related tasks. Using a listening-while-reading input method may further enhance performance.


2019 ◽  
Vol 8 (1) ◽  
pp. 1
Author(s):  
Sherry Fukuzawa ◽  
Michael DeBraga

Graded Response Method (GRM) is an alternative to multiple-choice testing in which students rank options according to their relevance to the question. GRM requires discrimination and inference between statements and is a cost-effective critical thinking assessment in large courses where open-ended answers are not feasible. This study examined critical thinking assessment in GRM versus open-ended and multiple-choice questions composed from Bloom's taxonomy in an introductory undergraduate course in anthropology and archaeology (N = 53 students). Critical thinking was operationalized as the ability to assess a question with evidence to support or evaluate arguments (Ennis, 1993). We predicted that students who performed well on multiple-choice questions from Bloom's taxonomy levels 4-6 and on open-ended questions would also perform well on GRM items involving similar concepts. High-performing students on GRM were predicted to have higher course grades. The null hypothesis was that question type would have no effect on critical thinking assessment. In two quizzes there was a weak correlation between GRM and open-ended questions (R² = 0.15); however, there was a strong correlation in the exam (R² = 0.56). Correlations were consistently higher between GRM and multiple-choice questions from Bloom's taxonomy levels 4-6 (R² = 0.23, 0.31, 0.21) than from levels 1-3 (R² = 0.13, 0.29, 0.18). GRM is a viable alternative to multiple-choice testing for critical thinking assessment without added resources or grading effort.

