Experience Versus Potential of Multiple-Choice Tests in Marketing Courses

1979 ◽  
Vol 1 (2) ◽  
pp. 24-33 ◽  
Author(s):  
James R. McMillan

Most educators agree that classroom evaluation practices need improvement. One way to improve testing is to use high-quality objective multiple-choice exams. Almost any understanding or ability which can be tested by another test form can also be tested by means of multiple-choice items. Based on a survey of 173 respondents, it appears that marketing teachers are disenchanted with multiple-choice questions and use them sparingly. Further, their limited use is largely in the introductory marketing course even though there are emerging pressures for universities to take a closer look at the quality of classroom evaluation at all levels.

Author(s):  
David DiBattista ◽  
Laura Kurzawa

Because multiple-choice testing is so widespread in higher education, we assessed the quality of items used on classroom tests by carrying out a statistical item analysis. We examined undergraduates’ responses to 1198 multiple-choice items on sixteen classroom tests in various disciplines. The mean item discrimination coefficient was +0.25, with more than 30% of items having unsatisfactory coefficients less than +0.20. Of the 3819 distractors, 45% were flawed either because less than 5% of examinees selected them or because their selection was positively rather than negatively correlated with test scores. In three tests, more than 40% of the items had an unsatisfactory discrimination coefficient, and in six tests, more than half of the distractors were flawed. Discriminatory power suffered dramatically when the selection of one or more distractors was positively correlated with test scores, but it was only minimally affected by the presence of distractors that were selected by less than 5% of examinees. Our findings indicate that there is considerable room for improvement in the quality of many multiple-choice tests. We suggest that instructors consider improving the quality of their multiple-choice tests by conducting an item analysis and by modifying distractors that impair the discriminatory power of items.
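
The flaw criteria used here (a discrimination coefficient below +0.20, distractors chosen by fewer than 5% of examinees, and distractors whose selection correlates positively with overall performance) are simple to compute from a response matrix. A minimal Python sketch of such an item analysis, assuming responses are recorded as the option letter each examinee selected; the names and the corrected-total choice are illustrative, not taken from the paper:

```python
import numpy as np

def item_analysis(responses, keys):
    """Flag weak items and flawed distractors on a multiple-choice test.

    responses: (n_examinees, n_items) array of selected options, e.g. 'A'..'E'
    keys:      length-n_items array of correct options
    """
    responses, keys = np.asarray(responses), np.asarray(keys)
    correct = (responses == keys).astype(float)        # 0/1 scoring matrix
    total = correct.sum(axis=1)
    report = []
    for j in range(responses.shape[1]):
        rest = total - correct[:, j]                   # total score excluding item j
        disc = (np.corrcoef(correct[:, j], rest)[0, 1]
                if correct[:, j].std() > 0 else 0.0)   # discrimination coefficient
        flawed = []
        for opt in set(responses[:, j]) - {keys[j]}:   # each distractor
            chosen = (responses[:, j] == opt).astype(float)
            frac = chosen.mean()
            r = np.corrcoef(chosen, rest)[0, 1] if chosen.std() > 0 else 0.0
            # Flawed if <5% select it, or if selecting it goes with HIGHER scores.
            if frac < 0.05 or r > 0:
                flawed.append(opt)
        report.append({"item": j, "disc": round(disc, 2),
                       "weak": disc < 0.20, "flawed_distractors": sorted(flawed)})
    return report
```

Correlating each item against the total score with that item excluded avoids inflating the discrimination coefficient of the item being analyzed.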


1978 ◽  
Vol 5 ◽  
pp. 59-74
Author(s):  
H.W.M. van den Nieuwenhof

For ten years now, multiple-choice tests have been used in the Dutch school system to measure listening comprehension in English, French, and German. The tests were developed in a research program conducted at the Institute of Applied Linguistics by Dr. ? Groot. Now that the tests have been in use for ten years, we are confronted with the following questions. Are the tests still as reliable as they were ten years ago? To what extent does the multiple-choice technique give a true picture of students' listening comprehension? Does the multiple-choice technique help students cope with language material that they could not have coped with otherwise; in other words, to what extent does the language material used in the tests suggest a higher level of listening comprehension than the students actually have? An experiment was carried out at C.I.T.O. (Central Institute for Test Development) in which students answered both multiple-choice questions and open-ended questions about the same language material. The results suggested that the language material used in the tests was very difficult for students to handle in an open-ended test form. They also suggested that the varying levels of difficulty of the language material within a single test were reflected in the open-ended results, but not in the multiple-choice results: the multiple-choice technique seems to obscure the relative difficulty of the various test components. It was further found that an appropriate use of the multiple-choice technique can cover only a restricted range of language material, yet the measuring technique must not restrict the choice of language material and thereby impair content validity. A possible solution would be the development of a new kind of test, in which a great variety of language material is tested with a great variety of testing techniques: varied language material to improve the content validity of the test, and varied testing techniques to reduce, as much as possible, the disadvantages of each single testing technique.


2020 ◽  
Vol 3 (2) ◽  
pp. 35
Author(s):  
Ni Wayan Vina Krisna Yanti ◽  
Anak Agung Gede Yudha Paramartha ◽  
Luh Gede Eka Wahyuni

This study arose from the importance of constructing high-quality multiple-choice tests (MCTs) that follow the norms for writing good MCT items. The norms are considered important because they keep tests relevant to the learning objectives and make the tests easier for test-takers to work through. The study aimed to investigate the quality of teacher-made MCTs used as summative assessments for the English subject at SMP Laboratorium Universitas Pendidikan Ganesha. Three teacher-made MCTs of 40 items each were taken as samples, one representing each grade. Data were collected through document study, comparing each teacher-made MCT item against the norms for writing a good MCT, and the comparisons were then clarified through interviews and classified to determine quality. The results show that the quality of the teacher-made MCTs is very good, with 106 items (88%) qualifying as very good and 14 items (12%) as good. However, some norms need more attention, as they are rarely fulfilled.


2019 ◽  
Vol 14 (26) ◽  
pp. 51-65
Author(s):  
Lotte Dyhrberg O'Neill ◽  
Sara Mathilde Radl Mortensen ◽  
Cita Nørgård ◽  
Anne Lindebo Holm Øvrehus ◽  
Ulla Glenert Friis

Construction errors in multiple-choice items are quite prevalent and constitute threats to the validity of multiple-choice tests. Currently, very little research seems to exist on the usefulness of systematic item screening by local review committees before test administration. The aim of this study was therefore to examine validity and feasibility aspects of review-committee screening for item flaws. We examined the reliability of item reviewers' independent judgments of the presence or absence of item flaws with a generalizability study design and found only moderate reliability using five reviewers. Statistical analyses of actual exam scores could be a more efficient way of identifying flaws and improving the average item discrimination of tests in local contexts. The question of the validity of human judgments of item flaws is important, not just for sufficiently sound quality-assurance procedures in local test contexts, but also for the global research on item flaws.
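
The reliability question can be made concrete. A minimal sketch of one way to quantify agreement among a committee's present/absent flaw judgments, using Fleiss' kappa; this is an illustrative alternative to the generalizability analysis the authors actually ran, and the ratings below are invented:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for counts of shape (n_items, n_categories), where
    counts[i, k] = number of reviewers placing item i in category k.
    Every row must sum to the same number of reviewers."""
    counts = np.asarray(counts, dtype=float)
    n_items = counts.shape[0]
    n_raters = counts[0].sum()
    p_cat = counts.sum(axis=0) / (n_items * n_raters)   # category base rates
    # Per-item agreement: fraction of reviewer pairs that agree on the item.
    p_item = (counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))
    p_bar, p_exp = p_item.mean(), (p_cat ** 2).sum()
    return (p_bar - p_exp) / (1 - p_exp)

# Invented data: 5 reviewers judge 6 items; columns = ["no flaw", "flaw"] votes.
ratings = [[5, 0], [4, 1], [2, 3], [0, 5], [3, 2], [4, 1]]
print(f"kappa = {fleiss_kappa(ratings):.2f}")   # kappa = 0.31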


1971 ◽  
Vol 29 (3_suppl) ◽  
pp. 1229-1230
Author(s):  
Carrie Wherry Waters ◽  
L. K. Waters

Reactions of examinees to two scoring instructions were evaluated for 2-, 3-, and 5-alternative multiple-choice items. Examinees were more favorable toward the “reward for omitted items” instructions than the “penalty for wrongs” instructions across all numbers of item alternatives.
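
The abstract does not state the scoring rules, but these instruction types conventionally correspond to formula scoring (an assumption here, not a detail from the study): a penalty for wrongs scores R - W/(k-1), while a reward for omits scores R + O/k, with R, W, and O the counts of right, wrong, and omitted answers on k-alternative items. A tiny sketch:

```python
def penalty_for_wrongs(right, wrong, k):
    """Formula score R - W/(k-1): blind guessing gains nothing in expectation."""
    return right - wrong / (k - 1)

def reward_for_omits(right, omitted, k):
    """Score R + O/k: each omitted item is credited at chance level 1/k."""
    return right + omitted / k

# 50 five-alternative items: 30 right, 15 wrong, 5 omitted.
print(penalty_for_wrongs(30, 15, 5))   # 26.25
print(reward_for_omits(30, 5, 5))      # 31.0
```

On a fixed-length test the two rules are linear transforms of each other and rank examinees identically, which suggests the difference that mattered to examinees was the framing rather than the arithmetic.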


2019 ◽  
Vol 94 (5) ◽  
pp. 740
Author(s):  
Valérie Dory ◽  
Kate Allan ◽  
Leora Birnbaum ◽  
Stuart Lubarsky ◽  
Joyce Pickering ◽  
...  


2018 ◽  
Vol 8 (9) ◽  
pp. 1152
Author(s):  
Qingsong Gu ◽  
Michael W. Schwartz

In traditional multiple-choice tests, random guessing is unavoidable and its effect is non-negligible. To quantify the “unfairness” caused by random guessing, this paper presents a Microsoft Excel template that uses standard spreadsheet functions to automatically compute the probability of answering correctly at random, and from it derives the minimum score a test-taker must reach to pass a traditional multiple-choice test under different guessing probabilities, along with the “luckiness” required to pass. The paper concludes that, although random guessing is non-negligible, it is unnecessary to remove traditional multiple-choice items from all testing activities, because guessing can be controlled by changing the passing score, changing the number of options, or reducing the proportion of multiple-choice items in a test.
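
The underlying computation is binomial: on n items with k options each, a blind guesser answers each item correctly with probability 1/k. A minimal Python sketch (not the paper's Excel template) that reproduces the idea of finding the lowest passing score that keeps the probability of passing by pure guessing under a chosen tolerance:

```python
from math import comb

def prob_pass_by_guessing(n_items, n_options, pass_score):
    """P(at least pass_score correct) when every answer is a blind guess."""
    p = 1 / n_options
    return sum(comb(n_items, r) * p**r * (1 - p)**(n_items - r)
               for r in range(pass_score, n_items + 1))

def min_safe_pass_score(n_items, n_options, tolerance=0.01):
    """Lowest passing score at which a pure guesser passes with
    probability below `tolerance`."""
    for score in range(n_items + 1):
        if prob_pass_by_guessing(n_items, n_options, score) < tolerance:
            return score

# 50 four-option items: chance performance is 12.5 correct.
print(prob_pass_by_guessing(50, 4, 30))   # ~1.7e-07: a 60% cutoff defeats guessing
print(min_safe_pass_score(50, 4))         # 21: lowest cutoff a guesser passes <1% of the time
```

Raising the number of options or the passing score shrinks the guesser's chances, which is exactly the control the paper recommends.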


2020 ◽  
Vol 1 (1) ◽  
pp. 30-49
Author(s):  
Darryl J Chamberlain ◽  
Russell Jeter

This paper proposes a new method for generating multiple-choice items that makes creating quality assessments faster and more efficient, addressing a practical issue many instructors face. There are currently no systematic, efficient methods for generating quality distractors (plausible but incorrect options), which are necessary for multiple-choice assessments that accurately measure students' knowledge. We propose two ways to use technology to generate quality multiple-choice assessments: (1) manipulating the mathematical problem to emulate common student misconceptions or errors, and (2) disguising options to protect the integrity of multiple-choice tests. By linking options to common student misconceptions and errors, instructors can use assessments as personalized diagnostic tools that target and remediate underlying misconceptions. Moreover, using technology to generate these quality distractors allows assessments to be developed efficiently in terms of both time and resources. Disguising the generated options has the added benefit of preventing students from working backwards from the options to the solution, thus protecting the integrity of the assessment.
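
As an illustration of method (1), distractors for a linear-equation item can be generated by applying a catalogue of common error patterns to the solution procedure. A minimal sketch; the error catalogue and names are invented for illustration and are not taken from the paper:

```python
import random
from fractions import Fraction

def make_item(a, b, c):
    """Item: solve a*x + b = c (assumes a != 0 and c != b). Returns the stem,
    shuffled options, and key; each distractor encodes a specific error."""
    correct = Fraction(c - b, a)
    distractors = [
        Fraction(c + b, a),   # sign error: adds b instead of subtracting
        Fraction(c - b),      # stops at a*x = c - b, forgets to divide
        Fraction(a, c - b),   # divides the wrong way around
    ]
    options = [correct] + [d for d in distractors if d != correct]
    random.shuffle(options)
    return f"Solve {a}x + {b} = {c}", options, correct

stem, options, answer = make_item(3, 5, 20)
print(stem, options, "answer:", answer)   # correct x = 5; distractors 25/3, 15, 1/5
```

Because each distractor maps to a known misconception, a student's chosen option doubles as diagnostic information; method (2) could then disguise the options, for example by presenting each as a numeric range, so that substituting options back into the equation no longer reveals the answer.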


Seminar.net ◽  
2010 ◽  
Vol 6 (3) ◽  
Author(s):  
Bjørn Klefstad ◽  
Geir Maribu ◽  
Svend Andreas Horgen ◽  
Thorleif Hjeltnes

The use of digital multiple-choice tests in formative and summative assessment has many advantages. Such tests are effective, objective, and flexible. However, it is still challenging to create tests that are valid and reliable. Bloom's taxonomy is used as a framework for assessment in higher education and therefore has a great deal of influence on how learning outcomes are formulated. Using digital tools to create tests has been common for some time, yet the tests are still mostly answered on paper. Our hypothesis has two parts: first, it is possible to create summative tests that match different levels and learning outcomes within a chosen subject; second, a test tool of some kind is necessary to enable teachers and examiners to take a more proactive attitude towards different levels and learning outcomes in a subject and thereby ensure the quality of digital test design. Based on an analysis of several digital tests, we examine to what degree learning outcomes and levels are reflected in the different test questions. We also suggest functionality for a future test tool to support an improved design process.
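
The analysis described, checking how test questions distribute over levels and learning outcomes, needs only light tooling. A minimal sketch with invented item data, tagging questions with the Bloom level they target and reporting which intended levels a test leaves untested:

```python
from collections import Counter

# Bloom's taxonomy levels, lowest to highest.
BLOOM = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def coverage(items, intended_levels):
    """Count items per Bloom level and list intended levels with no items."""
    seen = Counter(item["level"] for item in items)
    missing = [lvl for lvl in intended_levels if seen[lvl] == 0]
    return {lvl: seen[lvl] for lvl in BLOOM}, missing

# Invented example: four tagged items from a summative test.
items = [
    {"id": 1, "stem": "Define the term ...",   "level": "remember"},
    {"id": 2, "stem": "Explain why ...",       "level": "understand"},
    {"id": 3, "stem": "Use the method to ...", "level": "apply"},
    {"id": 4, "stem": "Compare the two ...",   "level": "analyze"},
]

counts, missing = coverage(items, intended_levels=["understand", "apply", "evaluate"])
print(counts)    # items per level
print(missing)   # ['evaluate']: an intended outcome level with no question
```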

