Multiple-Choice Questions with an Option to Comment: Student Attitudes and Use

1986 ◽  
Vol 13 (4) ◽  
pp. 196-199 ◽  
Author(s):  
Anthony F. Nield ◽  
Maxine Gallander Wintre

Introductory Psychology students were graded on four tests using multiple-choice questions with an explicit option to explain their answers (E-option) and were later asked to compare this format with short-answer, essay, true/false, fill-in-the-blank, and regular multiple-choice formats. Students rated the E-option and short-answer formats as most preferred, and as less frustrating and anxiety-producing than the other formats (p < .05). Of 416 students, 173 used the E-option, averaging less than one explanation per test over the four tests. During the course, only 30 points were gained and 5 points lost through E-option use. The E-option appears to be an efficient and humane technique for testing large classes.

1983 ◽  
Vol 52 (1) ◽  
pp. 203-208 ◽  
Author(s):  
John P. Houston

To estimate the self-evidence of basic principles of psychology, 50 UCLA introductory psychology students answered 21 multiple-choice questions, each embodying one learning or memory phenomenon. Of the items, 71% were answered correctly more often than chance. The probability of an item being answered correctly was unrelated to the subjects' familiarity with the names of the phenomena and unrelated to professional psychologists' ratings of the importance of the phenomena. The possibility that we may spend an inordinate amount of time dealing with self-evident principles, because we do not seek outside evaluation of our work, is discussed.
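For concreteness, "answered correctly more often than chance" can be checked per item with a one-sided binomial test against the guessing rate. A minimal sketch, assuming 4-option items and the study's n of 50; the paper does not specify its exact procedure, and the count below is made up:

```python
from scipy.stats import binomtest

# One 4-option multiple-choice item: blind guessing yields p = 0.25.
n_students = 50   # sample size reported in the study
n_correct = 21    # hypothetical number of correct answers on this item
chance = 1 / 4    # assumed 4 response options per item

# One-sided test: is the correct-response rate above the guessing rate?
result = binomtest(n_correct, n_students, chance, alternative="greater")
print(f"observed rate = {n_correct / n_students:.2f}, p = {result.pvalue:.4f}")
```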


1994 ◽  
Vol 21 (3) ◽  
pp. 157-159 ◽  
Author(s):  
Timothy J. Lawson

In this study, I explore whether a media assignment, similar to that used by Rider (1992), increased introductory psychology students’ ability to apply their knowledge of psychological concepts to examples of real-world events. Students collected examples from the popular media that illustrated either operant- or classical-conditioning concepts. Afterward, they took a quiz that contained factual and applied multiple-choice questions on these concepts. Students who collected examples of operant-conditioning concepts performed better than other students on quiz questions designed to assess their ability to apply their knowledge of operant conditioning. However, students who collected examples of classical-conditioning concepts did not outperform other students on applied classical-conditioning questions. Media assignments may enhance students’ learning and their ability to apply course knowledge to real-world events.


2020 ◽  
Author(s):  
THOMAS PUTHIAPARAMPIL ◽  
Md Mizanur Rahman

Abstract Background Multiple-choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and give no information about candidates' strengths and weaknesses. Directly asked and answered questions, such as Very Short Answer Questions (VSAQ), are considered a better alternative with several advantages. Objectives This study aims to substantiate the superiority of VSAQ through actual tests and feedback from the stakeholders. Methods We conducted multiple true-false, one-best-answer, and VSAQ tests in two batches of medical students, compared their scores and the psychometric indexes of the tests, and sought opinions from students and academics regarding these assessment methods. Results Multiple true-false and best-answer test scores showed skewed results and low psychometric performance, compared with better psychometrics and more balanced student performance in the VSAQ tests. The stakeholders' opinions were significantly in favour of VSAQ. Conclusion and recommendation This study concludes that VSAQ is a viable alternative to multiple-choice question tests and that it is widely accepted by medical students and academics in the medical faculty.
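The "psychometric indexes" compared in studies like this are usually classical item statistics, such as item difficulty (proportion correct) and item discrimination. A minimal sketch on made-up 0/1 scoring data; the study does not list which indexes it computed:

```python
import numpy as np

def item_analysis(scores: np.ndarray):
    """scores: students x items matrix of 0/1 marks.
    Returns per-item difficulty and corrected item-total discrimination."""
    difficulty = scores.mean(axis=0)  # proportion correct per item
    totals = scores.sum(axis=1)
    disc = []
    for j in range(scores.shape[1]):
        rest = totals - scores[:, j]  # total score excluding the item itself
        disc.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return difficulty, np.array(disc)

# Toy data: 6 students, 4 items (illustrative only)
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])
diff, disc = item_analysis(scores)
print("difficulty:", diff.round(2))
print("discrimination:", disc.round(2))
```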


2020 ◽  
Author(s):  
Zainal Abidin

The National Examination and the Cambridge Checkpoint are instruments for evaluating the standard competence of students at the secondary level. National Examination questions are based on the national curriculum of Indonesia, whereas Cambridge Checkpoint questions are based on the Cambridge curriculum. The aim of this research is to analyze the type of each question and the distribution of strands in the 2015 National Mathematics Examination and the 2015 Mathematics Cambridge Checkpoint for the secondary level. This research is a descriptive study with a qualitative approach. The 2015 National Mathematics Examination has one paper only, but the 2015 Mathematics Cambridge Checkpoint for the secondary level has two papers. It can be concluded that all questions on the 2015 National Mathematics Examination for the secondary level are multiple-choice questions. The 2015 Mathematics Cambridge Checkpoint contains various question types: 11.43% short-answer, 68.57% analysis, 8.57% completing, and 11.43% matching questions on paper 1; and 22.22% short-answer, 58.33% analysis, 11.11% completing, 2.78% matching, 2.78% multiple-choice, and 2.78% yes/no questions on paper 2. By strand, the National Mathematics Examination contains 22.25% number, 27.5% algebra, 40% geometry and measurement, and 10% statistics and probability. The Mathematics Cambridge Checkpoint contains 45.72% number, 20% algebra, 17.14% geometry and measurement, and 17.14% statistics and probability on paper 1, and 33.33% number, 19.45% algebra, 25% geometry and measurement, and 22.22% statistics and probability on paper 2.
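The per-paper percentages are simple tallies of question types. A minimal sketch of the arithmetic, with counts chosen to reproduce the paper 1 figures (4, 24, 3, and 4 of 35 questions); the tags are illustrative labels, not the actual item classifications:

```python
from collections import Counter

# Hypothetical type tags for one 35-question paper (not the real 2015 items)
questions = (["short-answer"] * 4 + ["analysis"] * 24 +
             ["completing"] * 3 + ["matching"] * 4)

counts = Counter(questions)
total = len(questions)
for qtype, n in counts.items():
    print(f"{qtype}: {n / total:.2%}")   # e.g. short-answer: 11.43%
```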


2012 ◽  
Vol 11 (1) ◽  
pp. 47-57 ◽  
Author(s):  
Joyce M. Parker ◽  
Charles W. Anderson ◽  
Merle Heidemann ◽  
John Merrill ◽  
Brett Merritt ◽  
...  

We present a diagnostic question cluster (DQC) that assesses undergraduates' thinking about photosynthesis. This assessment tool is not designed to identify individual misconceptions. Rather, it is focused on students' abilities to apply basic concepts about photosynthesis by reasoning with a coordinated set of practices based on a few scientific principles: conservation of matter, conservation of energy, and the hierarchical nature of biological systems. Data on students' responses to the cluster items and uses of some of the questions in multiple-choice, multiple-true/false, and essay formats are compared. A cross-over study indicates that the multiple-true/false format shows promise as a machine-gradable format that identifies students who have a mixture of accurate and inaccurate ideas. In addition, interviews with students about their choices on three multiple-choice questions reveal the fragility of students' understanding. Collectively, the data show that many undergraduates lack both a basic understanding of the role of photosynthesis in plant metabolism and the ability to reason with scientific principles when learning new content. Implications for instruction are discussed.
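A multiple-true/false item pairs one stem with several statements judged true or false independently, which is what makes it machine gradable while still surfacing a mixture of accurate and inaccurate ideas. A minimal grading sketch with a hypothetical item, not one drawn from the actual DQC:

```python
# One multiple-true/false item: a stem plus independently keyed statements.
key = {  # hypothetical answer key, not from the actual DQC
    "Plants get most of their mass from soil minerals": False,
    "Photosynthesis converts light energy to chemical energy": True,
    "Plant cells also perform cellular respiration": True,
}

def grade_mtf(responses: dict[str, bool]) -> float:
    """Score each statement separately; partial credit flags students
    holding a mixture of accurate and inaccurate ideas."""
    correct = sum(responses[s] == truth for s, truth in key.items())
    return correct / len(key)

student = {
    "Plants get most of their mass from soil minerals": True,   # misconception
    "Photosynthesis converts light energy to chemical energy": True,
    "Plant cells also perform cellular respiration": True,
}
print(f"score: {grade_mtf(student):.2f}")  # 0.67: partly right, partly wrong
```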


2015 ◽  
Vol 39 (4) ◽  
pp. 327-334 ◽  
Author(s):  
Brandon M. Franklin ◽  
Lin Xiang ◽  
Jason A. Collett ◽  
Megan K. Rhoads ◽  
Jeffrey L. Osborn

Student populations are diverse, such that different types of learners struggle with traditional didactic instruction. Problem-based learning has existed for several decades, but there is still controversy regarding the optimal mode of instruction to ensure success for students at all levels of past achievement. The present study addressed this problem by dividing students into three instructional groups for an upper-level course in animal physiology: traditional lecture-style instruction (LI), guided problem-based instruction (GPBI), and open problem-based instruction (OPBI). Student performance was measured by three summative assessments consisting of 50% multiple-choice and 50% short-answer questions, as well as a final overall course assessment. The study also examined how students with different academic achievement histories performed under each instructional method. When student achievement levels were not considered, the effects of instructional method on student outcomes were modest; OPBI students performed moderately better on short-answer exam questions than both the LI and GPBI groups. High-achieving students showed no difference in performance across instructional methods on any metric examined. Among students with low-achieving academic histories, OPBI students largely outperformed LI students on all metrics (short-answer exam: P < 0.05, d = 1.865; multiple-choice exam: P < 0.05, d = 1.166; final score: P < 0.05, d = 1.265). They also outperformed GPBI students on short-answer exam questions (P < 0.05, d = 1.109) but not on multiple-choice exam questions (P = 0.071, d = 0.716) or final course outcome (P = 0.328, d = 0.513). These findings strongly suggest that typically low-achieving students perform at a higher level under OPBI, as long as proper support systems (formative assessment and scaffolding) are provided to encourage student success.
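The d values reported here are effect sizes in the form of Cohen's d, the difference between two group means divided by their pooled standard deviation. A minimal sketch of the standard pooled-SD formula with illustrative scores, not the study's data:

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Toy short-answer exam scores for two groups (illustrative only)
opbi = np.array([82, 88, 75, 91, 84, 79])
li = np.array([70, 65, 72, 68, 74, 60])
print(f"d = {cohens_d(opbi, li):.3f}")
```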


2011 ◽  
Vol 39 (1) ◽  
pp. 34-37 ◽  
Author(s):  
William R. Balch

On their first class day, introductory psychology students took a 14-question multiple-choice pretest on several principles of memory, including primacy, recency, storage, retrieval, counterbalancing, and the free-recall method. I randomly preassigned students to come at one of two times to the second class, 2 days later, when they either participated in a free-recall demonstration/debriefing or heard a lecture on comparable material. In the third class, 5 days later, they took a posttest identical to the pretest. On the posttest, but not the pretest, students who participated in the demonstration/debriefing significantly outperformed those who heard only the lecture.

