The Development of On-line Tests Based on Multiple Choice Questions

2003 ◽  
pp. 121-143 ◽  
Author(s):  
Geoffrey G. Roy ◽  
Jocelyn Armarego

This chapter is concerned with the use of Web-based technologies to deliver and manage on-line multiple choice tests for university teaching. The data defining the tests and the results of each student’s attempts are maintained in a server-side database. Each test is delivered in a Web page that can be displayed by a standard Web browser, so students can access the required tests from the same location as their course content, which is now largely on the Web. Multiple choice tests have been shown to be an effective way of both supporting the learning experience and providing an objective assessment process. The basic elements of the required technology are described, including some implementation issues that must be addressed to achieve a viable and robust system. Key issues include the use of server-side tools for database access and client-side components to deliver and manage the user interface.
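The architecture the chapter describes pairs a server-side store of test data and attempts with browser-delivered questions. A minimal sketch of that split, assuming illustrative names (`Question`, `TestStore`, `score_attempt`) that are not from the chapter itself:

```python
# Minimal sketch (not the authors' implementation): a server-side question
# store plus a scoring routine that records each student's attempt.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    options: list          # answer choices shown in the browser
    correct: int           # index of the correct option

@dataclass
class TestStore:
    """Stands in for the server-side database of tests and attempts."""
    questions: dict = field(default_factory=dict)
    attempts: list = field(default_factory=list)

    def add(self, qid, question):
        self.questions[qid] = question

    def score_attempt(self, student, answers):
        """Record a student's attempt and return the number correct."""
        correct = sum(
            1 for qid, choice in answers.items()
            if self.questions[qid].correct == choice
        )
        self.attempts.append({"student": student,
                              "answers": answers,
                              "score": correct})
        return correct

store = TestStore()
store.add("q1", Question("2 + 2 = ?", ["3", "4", "5"], correct=1))
store.add("q2", Question("Capital of France?", ["Paris", "Rome", "Bonn"], correct=0))
print(store.score_attempt("s001", {"q1": 1, "q2": 2}))  # prints 1: one of two correct
```

In a real deployment the `TestStore` would be backed by the server-side database, with the client-side component rendering `options` and posting the chosen indices back.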

‘Multiple Choice Questions in Musculoskeletal, Sport & Exercise Medicine’ is a compilation of 400 multiple choice questions (MCQs) in the single-best-answer format, with a choice of five options. The book closely follows the curriculum of the Membership of the Faculty of Sport & Exercise Medicine (MFSEM) examination, with some questions being clinically oriented and others knowledge based. The book is not intended as a substitute for extensive clinical reading but rather to complement the learning process. Its questions have been carefully curated by 92 reputable subject matter experts across ten countries and are intended to provide a structured learning experience. The book comprises 46 chapters: the first 23 pose questions and the next 23 provide answers. Each answer has a short explanation with a reference, intended to stimulate discussion, research and further learning. In total the book contains 33 high-quality images (MRI scans, plain radiographs, ECGs, ultrasound scans and photographs), 18 tables and 5 diagrams.


Author(s):  
Le Thai Hung ◽  
Nguyen Thi Quynh Giang ◽  
Tang Thi Thuy ◽  
Tran Lan Anh ◽  
Nguyen Tien Dung ◽  
...  

Computerized Adaptive Testing (CAT) is a form of assessment that requires fewer test questions to arrive at a precise measurement of an examinee's ability. One of the core technical components in building a CAT system is the set of mathematical algorithms that estimate the examinee's ability and select the most appropriate test question for each estimate. These algorithms serve as the engine of a system of adaptive multiple-choice questions on computers. Our research aims to develop the essential mathematical algorithms for a computerised system of adaptive multiple-choice tests. We also build a bank of 500 multiple-choice questions, standardised by IRT theory, whose difficulty levels follow a normal distribution (verified by the Kolmogorov-Smirnov test), to measure the mathematical ability of students in grade 10. Initial experiments with the question bank show that it satisfies the requirements of a psychometric model and that the constructed mathematical algorithms meet the criteria for use in computerised adaptive testing.
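The two core algorithms the abstract names, ability estimation and item selection, can be sketched with the two-parameter logistic IRT model. This is an illustrative sketch, not the authors' exact method: the parameter names (`a` for discrimination, `b` for difficulty) and the grid-search maximum-likelihood estimate are assumptions.

```python
# 2PL IRT sketch of a CAT engine: estimate ability, then pick the unseen
# item with maximum Fisher information at the current estimate.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information an item provides at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses, items, grid=None):
    """Maximum-likelihood ability estimate by grid search over theta."""
    grid = grid or [t / 10 for t in range(-40, 41)]   # -4.0 .. 4.0
    def loglik(theta):
        ll = 0.0
        for idx, correct in responses:
            a, b = items[idx]
            p = p_correct(theta, a, b)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(grid, key=loglik)

def next_item(theta, items, administered):
    """Pick the unseen item that is most informative at theta."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))

items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.5)]          # (a, b) per item
theta = estimate_theta([(0, True), (1, False)], items)  # one right, one wrong
print(next_item(theta, items, administered={0, 1}))     # prints 2: only item left
```

In an operational CAT, the loop alternates between `estimate_theta` and `next_item` until a stopping rule (e.g. a target standard error) is met.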


Author(s):  
Netravathi B. Angadi ◽  
Amitha Nagabhushana ◽  
Nayana K. Hashilkar

Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of an MCQ is determined by three parameters: difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test their validity parameters.
Methods: 50 items comprising 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analysed for the three parameters DIF I, DI and DE.
Results: A total of 50 items with 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%) and 30 items were ‘good to excellent’ (DI >0.25). 10 (20%) items were too easy (DIF I >70%) and 9 (18%) items were too difficult (DIF I <30%). There were 4 items with 6 non-functional distractors (NFDs), while the remaining 46 items had no NFDs.
Conclusions: Item analysis is a valuable tool, as it helps us retain valuable MCQs and discard or modify items that are not useful. It also helps to improve skills in test construction and identifies specific areas of course content that need greater emphasis or clarity.
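The three indices the study reports are simple to compute from response data. A hedged sketch follows; the 27% upper/lower grouping for DI and the 5% threshold for a non-functional distractor (NFD) are common conventions, assumed here rather than taken from the paper.

```python
# Item-analysis indices: difficulty index (DIF I), discrimination index
# (DI), and a count of non-functional distractors (for DE).
def difficulty_index(correct_flags):
    """DIF I: percentage of examinees answering the item correctly."""
    return 100.0 * sum(correct_flags) / len(correct_flags)

def discrimination_index(scores_and_flags, fraction=0.27):
    """DI: (upper-group correct - lower-group correct) / group size,
    using the top and bottom `fraction` of examinees by total score."""
    ranked = sorted(scores_and_flags, key=lambda x: x[0], reverse=True)
    n = max(1, int(len(ranked) * fraction))
    upper = sum(flag for _, flag in ranked[:n])
    lower = sum(flag for _, flag in ranked[-n:])
    return (upper - lower) / n

def nonfunctional_distractors(option_counts, correct_option, threshold=0.05):
    """Count distractors chosen by fewer than `threshold` of examinees."""
    total = sum(option_counts.values())
    return sum(1 for opt, cnt in option_counts.items()
               if opt != correct_option and cnt / total < threshold)

# Hypothetical data: 10 examinees as (total test score, item correct?)
data = [(90, 1), (85, 1), (80, 1), (70, 1), (65, 0),
        (60, 1), (55, 0), (50, 0), (45, 0), (40, 0)]
print(difficulty_index([f for _, f in data]))                    # prints 50.0
print(discrimination_index(data))                                # prints 1.0
print(nonfunctional_distractors(
    {"A": 50, "B": 30, "C": 17, "D": 3}, correct_option="A"))    # prints 1 (D is an NFD)
```

An item with DIF I in 30-70%, DI above 0.25 and no NFDs would be retained under the criteria the study uses.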


Author(s):  
Lauren Maloney ◽  
James Dilger ◽  
Paul Werfel ◽  
Linda Cimino

Purpose: As Emergency Medical Technician (EMT) educators develop curricula to meet new national educational standards, effective teaching strategies validated for course content and unique student demographics are warranted. Three methods for answering multiple choice questions presented during lectures were compared: a) an Audience Response System (ARS, clickers), b) hand-raising with eyes closed (a no-cost option), and c) passive response (a no-cost option). The purpose was to determine whether using the ARS resulted in improved exam scores.
Method: 113 EMT students participated in this cross-over, block-randomized, controlled trial, which was incorporated into their Cardiac Emergencies and Pulmonary Emergencies course lectures. Students took pretests, immediate post-tests, and delayed post-tests composed of multiple choice questions that targeted either lower- or higher-order thinking.
Results: For both lectures, there were significant improvements on all immediate post-test scores compared to all pretest scores (p
Conclusions: In this cohort, incorporation of no-cost question-driven teaching strategies into lectures was as effective as an ARS at encouraging significant, immediate and sustained improvements in answering multiple choice questions.


1979 ◽  
Vol 1 (2) ◽  
pp. 24-33 ◽  
Author(s):  
James R. McMillan

Most educators agree that classroom evaluation practices need improvement. One way to improve testing is to use high-quality objective multiple-choice exams. Almost any understanding or ability that can be tested by another test form can also be tested by means of multiple-choice items. Based on a survey of 173 respondents, it appears that marketing teachers are disenchanted with multiple-choice questions and use them sparingly. Further, their limited use occurs largely in the introductory marketing course, even though there are emerging pressures for universities to take a closer look at the quality of classroom evaluation at all levels.


2008 ◽  
Vol 90 (2) ◽  
pp. 120-122
Author(s):  
J John ◽  
JH Kuiper ◽  
CP Kelly

INTRODUCTION Surgical skills courses are an important part of learning during surgical training. The assessments at these courses tend to be subjective and anecdotal. Objective assessment using multiple choice questions (MCQs) quantifies the learning experience for both the organisers and the participants.
MATERIALS AND METHODS Participants in the open shoulder surgical skills course conducted at The Royal College of Surgeons of England in 2005 and 2006 were assessed using MCQs before and after the course.
RESULTS The participants were grouped as non-consultants (14) and consultant orthopaedic surgeons (8). All participants improved after attending the course; the average improvement was 17% (range, 4–43%). We compared the two groups while adjusting for the association between pre-course score and score gain. We found a strong correlation between pre-course score and score gain (r = 0.734; P = 0.001). Adjusted for pre-course score, the score gain (learning) for the non-consultants was slightly larger than for the consultants, but this did not reach statistical significance (P = 0.247).
CONCLUSIONS All participants had a positive learning experience, which did not correlate significantly with the grade of surgeon.
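The correlation the study reports between pre-course score and score gain is a plain Pearson r. A minimal sketch with hypothetical data (not the study's dataset), constructed so that gain rises with pre-course score as the study observed:

```python
# Pearson correlation between pre-course MCQ scores and score gains.
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pre  = [40, 45, 50, 55, 60, 65, 70, 75]   # hypothetical pre-course scores (%)
gain = [5, 8, 12, 14, 18, 20, 25, 30]     # hypothetical post - pre gains (%)
print(pearson_r(pre, gain))                # strongly positive, as in the study
```

Comparing the two surgeon groups "adjusted for pre-course score", as the authors did, would additionally regress gain on pre-course score and compare the groups' residuals.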


1978 ◽  
Vol 5 ◽  
pp. 59-74
Author(s):  
H.W.M. van den Nieuwenhof

For ten years now, multiple choice tests have been used in the Dutch school system to measure listening comprehension of English, French and German. The tests were developed in a research program conducted at the Institute of Applied Linguistics by Dr. ? Groot. Now that the tests have been in use for ten years, we are confronted with the following questions. Are the tests still as reliable as they were ten years ago? To what extent does the multiple choice technique give a true picture of the listening comprehension of students? Does the multiple choice technique help students to cope with language material that they could not have coped with otherwise; in other words, to what extent does the language material used in the tests suggest a higher level of listening comprehension than the students actually have? An experiment was carried out at C.I.T.O. (Central Institute for Test Development). Students had to answer both multiple choice questions and open-ended questions about the same language material. The results suggested that the language material used in the tests was very difficult for students to handle in an open-ended test form. The results also suggested that the various levels of difficulty of the language material used within a single test were reflected in the open-ended results, but not in the results of the multiple choice tests. The multiple choice technique thus seems to obscure the relative difficulty of the various test components. It was also found that an appropriate use of the multiple choice technique can cover only a restricted range of language material. The measuring technique must not restrict the choice of language material, and thereby influence content validity. A possible solution would be the development of a new kind of test.
In this test a great variety of language material should be assessed with a great variety of testing techniques: a variety of language material in order to improve the content validity of the test, and a variety of testing techniques in order to reduce, as much as possible, the disadvantages of each single technique.


2012 ◽  
Vol 3 (2) ◽  
pp. 65-74
Author(s):  
Isabel Novo-Corti ◽  
Laura Varela-Candamio ◽  
María Ramil-Díaz

Through their experience teaching microeconomics, the authors have found that the precision of the concepts and graphic tools used, together with the broad mathematical and analytical foundation of the discipline, creates a setting in which some students may feel lost, particularly when they face multiple choice questions. It is therefore not unusual to find students who cannot obtain good marks even when they have a fairly good grasp of microeconomics. This work describes an on-line training programme, based on the Moodle platform, that provides students with tools to achieve the best possible results. The authors used a database of multiple choice questions on microeconomics to train students in solving this kind of question. Three different types of questions were presented, based on graphics, on mathematics, and on the understanding and internalization of microeconomic concepts. Results show that this is a practical way to succeed in examinations. Some interesting differences were also found in behaviour patterns between women, who seem to need less time to review the lessons, and men.


1999 ◽  
Vol 276 (6) ◽  
pp. S93 ◽  
Author(s):  
A A Rovick ◽  
J A Michael ◽  
H I Modell ◽  
D S Bruce ◽  
B Horwitz ◽  
...  

Teachers establish prerequisites that students must meet before they are permitted to enter their courses. It is expected that meeting these prerequisites will provide students with the knowledge and skills they need to learn the course content successfully; material that students are expected to have previously learned then need not be included in a course. We wanted to determine how accurate instructors' understanding of their students' background knowledge actually was. To do this, we wrote a set of multiple-choice questions that could be used to test students' knowledge of concepts deemed essential for learning respiratory physiology. Instructors then selected 10 of these questions to be used as a prerequisite knowledge test, and predicted the performance they expected from the students on each of the questions they had selected. The resulting tests were administered in the first week of each of seven courses. The results of this study demonstrate that instructors are poor judges of what beginning students know. Instructors tended both to underestimate and to overestimate students' knowledge by large margins on individual questions. Although on average they tended to underestimate students' factual knowledge, they overestimated students' ability to apply that knowledge. Hence, the validity of decisions that instructors make on the assumption that their students have the expected prerequisite knowledge is open to question.
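The comparison the study performed, instructor predictions against observed per-question performance, reduces to signed errors per question. A small sketch with hypothetical numbers (not the study's data):

```python
# Signed prediction error per question: positive means the instructor
# overestimated students' performance, negative means underestimated.
def prediction_errors(predicted, actual):
    """Per-question signed errors between predicted and actual % correct."""
    return [p - a for p, a in zip(predicted, actual)]

predicted = [80, 60, 90, 50, 70]   # instructor's expected % correct
actual    = [65, 75, 55, 70, 40]   # observed % correct
errors = prediction_errors(predicted, actual)
over  = sum(1 for e in errors if e > 0)
under = sum(1 for e in errors if e < 0)
print(over, under)                 # prints "3 2": 3 over-, 2 underestimates
```

Splitting the questions into factual-recall and application items before summarising the errors would reproduce the study's finding that the direction of misjudgment differs between the two kinds.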

