What faculty write versus what students see? Perspectives on multiple-choice questions using Bloom’s taxonomy

2021, pp. 1-12
Author(s): Seetha U. Monrad, Nikki L. Bibler Zaidi, Karri L. Grob, Joshua B. Kurtz, Andrew W. Tai, et al.

Author(s): J. K. Stringer, Sally A. Santen, Eun Lee, Meagan Rawls, Jean Bailey, et al.

Abstract
Background: Analytic thinking skills are important to the development of physicians. Therefore, educators and licensing boards use multiple-choice questions (MCQs) to assess this knowledge and these skills. MCQs are written under two assumptions: that they can be written as higher or lower order according to Bloom's taxonomy, and that students will perceive questions to be at the same taxonomic level as intended. This study seeks to understand students' approach to questions by analyzing differences in students' perception of the Bloom's level of MCQs in relation to their knowledge and confidence.
Methods: A total of 137 students responded to practice endocrine MCQs. Participants indicated their answer to each question, their interpretation of it as higher or lower order, and their degree of confidence in their response.
Results: Although there was no significant association between students' average performance on the content and their question classification (higher or lower), individual students who were less confident in their answer were more than five times as likely (OR = 5.49) to identify a question as higher order than their more confident peers. Students who responded incorrectly to an MCQ were four times as likely to identify it as higher order than their peers who responded correctly.
Conclusions: The results suggest that higher-performing, more confident students rely on identifying patterns (even if the question was intended to be higher order). In contrast, less confident students engage in higher-order, analytic thinking even if the question is intended to be lower order. A better understanding of the processes through which students interpret MCQs will help us better understand the development of clinical reasoning skills.
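The abstract does not specify the statistical model behind these estimates, but odds ratios of this kind are commonly obtained from logistic regression. The sketch below is a minimal illustration under that assumption; the simulated data and the column names `classified_higher`, `low_confidence`, and `answered_correctly` are invented for the example and are not from the study.

```python
# Hypothetical sketch: estimating odds ratios for classifying a question as
# higher order from binary confidence and correctness indicators.
# All data and column names are illustrative, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000  # hypothetical number of question responses

low_confidence = rng.integers(0, 2, n)
answered_correctly = rng.integers(0, 2, n)
# Simulate a response process in which low confidence and incorrect answers
# raise the probability of classifying the question as higher order.
logit = -1.0 + 1.7 * low_confidence - 1.4 * answered_correctly
p = 1 / (1 + np.exp(-logit))
classified_higher = rng.binomial(1, p)

df = pd.DataFrame({
    "classified_higher": classified_higher,
    "low_confidence": low_confidence,
    "answered_correctly": answered_correctly,
})

# Fit a logistic regression; exponentiated coefficients are odds ratios.
model = smf.logit("classified_higher ~ low_confidence + answered_correctly", data=df).fit()
print(np.exp(model.params))
```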


Author(s): J. Robert Loftis

Multiple-choice questions have an undeserved reputation for only being able to test student recall of basic facts. In fact, well-crafted mechanically gradable questions can measure very sophisticated cognitive skills, including those engaged at the highest level of Benjamin Bloom's taxonomy of outcomes. In this article, I argue that multiple-choice questions should be part of a diversified assessment portfolio for most philosophy courses. I present three arguments broadly related to fairness. First, multiple-choice questions allow one to consolidate subjective decision making in a way that makes it easier to manage. Second, multiple-choice questions contribute to the diversity of an evaluation portfolio by balancing out problems with writing-based assessments. Third, by increasing the diversity of evaluations, multiple-choice questions increase the inclusiveness of the course. In the course of this argument, I provide examples of multiple-choice questions that measure sophisticated learning and offer advice on how to write good multiple-choice questions.


Author(s): Chris Adams

I report the implementation of an activity in which students are asked to write multiple-choice questions (MCQs) on the topic of 'orbitals' in order to consolidate their learning. This was facilitated using the online system PeerWise, which allows students to upload MCQs that they have written and then answer those authored by their peers. The process of writing questions engages the upper levels of Bloom's taxonomy, and the discussions incorporated within the activity allow for socially constructed learning as part of the pedagogy of constructive evaluation.


2015, Vol 8 (2), pp. 125-144
Author(s): Godson Ayertei Tetteh, Frederick Asafo-Adjei Sarpong

Purpose – The purpose of this paper is to explore the influence of constructivism on the assessment approach, where the type of question (true or false, multiple-choice, calculation or essay) is used productively. Although the student's approach to learning and the teacher's approach to teaching are concepts that have been widely researched, few studies have explored how the type of assessment (true or false, multiple-choice, calculation or essay questions) and stress manifest themselves or influence students' learning outcomes in fulfilling Bloom's taxonomy. Multiple-choice questions have been used for efficient assessment; however, this method has been criticized for encouraging surface learning, and some students report excelling at essay questions while failing multiple-choice questions. A concern has arisen that changes may be necessary in the type of assessment that is perceived to fulfill Bloom's taxonomy.
Design/methodology/approach – Students' learning outcomes were measured using true or false, multiple-choice, calculation or essay questions aligned with Bloom's taxonomy, together with the students' reactions to the test questionnaire. To assess the influence of the factors of interest (assessment type and stress level), MANOVA was used to identify whether any differences exist and to assess the extent to which these differences are significant, both individually and collectively. Second, to assess whether the feedback given to respondents after the mid-semester assessment was effective, a one-way ANOVA was used to test the equality of means of the mid-semester and final assessment scores.
Findings – Results revealed that the type of question (true or false, multiple-choice, calculation or essay) does not significantly affect the learning outcome for each subgroup. The ANOVA results comparing the mid-semester and final assessments indicated that there is sufficient evidence that the means are not equal; thus, the feedback given to respondents after the mid-semester assessment had a positive impact on the final assessment and actively improved student learning.
Research limitations/implications – This study is restricted to students in a particular university in Ghana and may not necessarily be applicable universally.
Practical implications – Assessment for learning, and the impact of assessment, matter not only for students but also for teachers and the literature.
Originality/value – This study contributes to the literature by examining how the combination of the type of assessment (true or false, multiple-choice, calculation or essay) and stress contributes to the learning outcome.
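The abstract names the two analyses but not how they were set up. The sketch below shows one possible arrangement: a MANOVA treating the four question-type scores as joint dependent variables with a stress grouping factor, and a one-way ANOVA comparing mid-semester and final scores. All data, column names, and the two-level stress factor are assumptions for illustration.

```python
# Hypothetical sketch of the two analyses described in the abstract.
# All data and column names are illustrative, not from the study.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 120  # hypothetical number of students

df = pd.DataFrame({
    "stress": rng.choice(["low", "high"], size=n),   # assumed grouping factor
    "true_false": rng.normal(70, 10, n),             # score on true/false items
    "multiple_choice": rng.normal(68, 12, n),        # score on MCQ items
    "calculation": rng.normal(65, 15, n),            # score on calculation items
    "essay": rng.normal(72, 11, n),                  # score on essay items
    "mid_semester": rng.normal(62, 12, n),           # mid-semester assessment score
    "final": rng.normal(70, 10, n),                  # final assessment score
})

# MANOVA: do the four question-type scores differ jointly by stress level?
manova = MANOVA.from_formula(
    "true_false + multiple_choice + calculation + essay ~ stress", data=df
)
print(manova.mv_test())

# One-way ANOVA: are the mid-semester and final assessment means equal?
f_stat, p_value = stats.f_oneway(df["mid_semester"], df["final"])
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```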


2011, Vol 2 (2)
Author(s): Amy M. Tiemeier, Zachary A. Stacy, John M. Burke

Objective: To evaluate the results of a prospectively developed plan for using multiple-choice questions (MCQs) written at defined Bloom's levels to assess student performance across a Therapeutics sequence.
Methods: Faculty were prospectively instructed to prepare a specific number of MCQs for exams in a Therapeutics sequence. Questions were distributed into one of three cognitive levels based on a modified Bloom's taxonomy: recall, application, and analysis. Student performance on MCQs was compared between and within each Bloom's level throughout the Therapeutics sequence. In addition, correlations between MCQ performance and case performance were assessed.
Results: A total of 168 pharmacy students were prospectively followed in a Therapeutics sequence over two years. The overall average MCQ score on 10 exams was 68.8%. A significant difference in student performance was observed between the recall, application, and analysis domain averages (73.1%, 70.2% and 60.1%, respectively; p …).
Conclusions: As students progress through the curriculum, faculty may need to find ways to promote recall knowledge for more advanced topics while continuing to develop students' ability to apply and analyze information. Exams with well-designed MCQs that prospectively target various cognitive levels can facilitate assessment of student performance.
Type: Original Research
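The abstract does not describe how the per-level averages or the MCQ-case correlations were computed. The sketch below shows one way such a tabulation could look; the item-level data, column names, and the use of a Pearson correlation are assumptions for illustration only.

```python
# Hypothetical sketch: average MCQ performance per Bloom's level and the
# correlation between each student's MCQ and case scores.
# All data and column names are illustrative, not from the study.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
n_students, n_items = 168, 60  # hypothetical cohort and item counts

levels = rng.choice(["recall", "application", "analysis"], size=n_items)
# 1 = correct, 0 = incorrect, for each student on each item
responses = rng.binomial(1, 0.7, size=(n_students, n_items))

item_scores = pd.DataFrame(responses, columns=[f"item_{i}" for i in range(n_items)])
level_of_item = pd.Series(levels, index=item_scores.columns)

# Average percent correct per Bloom's level, across all students and items.
per_level = item_scores.mean(axis=0).groupby(level_of_item).mean() * 100
print(per_level.round(1))

# Correlation between each student's overall MCQ score and case score.
mcq_score = item_scores.mean(axis=1) * 100
case_score = mcq_score * 0.6 + rng.normal(30, 8, n_students)  # simulated case scores
r, p = stats.pearsonr(mcq_score, case_score)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```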


2012, Vol 76 (6), pp. 114
Author(s): Myo-Kyoung Kim, Rajul A. Patel, James A. Uchizono, Lynn Beck

2019
Author(s): Febby Gunawan Siswanto

A high level of cognition, an important part of medical education, can be trained through appropriately designed higher-order thinking exams. Bloom's Taxonomy is well suited as a standard for creating tests for medical students and for analyzing the cognitive level of medical exams. The purpose of this study is to analyze the cognitive levels of the first-year physiology pretests at Sebelas Maret University for 2018/2019. This is a qualitative descriptive study. The data were obtained from documentation of six exams and analyzed using the Revised Bloom's Taxonomy. There were 106 multiple-choice questions in the pretests (51% remember, 15% understand, 12% apply, 17% analyze, 3% evaluate, 2% create). Half of the questions were at the lowest level of lower-order thinking skills, while the highest-level higher-order thinking questions came from packages D and E. Therefore, the physiology pretests of the Sebelas Maret University medical program for 2018/2019 need to be reconstructed so that the questions match the cognitive demands placed on medical students.
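As a simple illustration of how such a distribution can be tabulated, the sketch below tallies Bloom's-level labels for 106 items into percentages. The item counts are invented placeholders chosen only to be consistent with the reported percentages; they are not the study's item-level classifications.

```python
# Hypothetical sketch: tallying the Bloom's-level distribution of MCQ items.
# The counts are placeholders consistent with the reported percentages,
# not the study's actual item-level data.
from collections import Counter

# One Revised Bloom's Taxonomy label per item (106 items in total).
item_labels = (
    ["remember"] * 54 + ["understand"] * 16 + ["apply"] * 13
    + ["analyze"] * 18 + ["evaluate"] * 3 + ["create"] * 2
)

counts = Counter(item_labels)
total = sum(counts.values())
for level in ["remember", "understand", "apply", "analyze", "evaluate", "create"]:
    share = 100 * counts[level] / total
    print(f"{level:<10} {counts[level]:>3} items  {share:4.1f}%")
```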


2018, Vol 6 (1)
Author(s): Yulan M. Puluhulawa

The main objective of this research was to determine the cognitive level of the 2010 Senior High School National Examination based on Bloom's Taxonomy. The method used in this research is the descriptive qualitative method. The results showed that multiple-choice test items that evaluate higher levels of reasoning should present a case or situation to the student and then require them to apply theories, processes, or other types of analysis learned in class to arrive at the answer based on the case information. The researcher judged this test to be well constructed. In the listening section, the native speaker is not as hard to follow as in other tests; students sometimes find it difficult to catch what a speaker says, but in this test the recorded conversations are not difficult to grasp. After analyzing the test, the writer found that almost all items are at the Understanding level, except items 4, 5, and 6, in which students are required to classify or match the pictures with the dialogue. In the cognitive domain proposed by Bloom, matching pictures is at the first level.

