Enhancing Electronic Examinations through Advanced Multiple-Choice Questionnaires

Author(s):  
Dimos Triantis ◽  
Errikos Ventouras

The present chapter deals with the variants of grading schemes applied in current Multiple-Choice Questions (MCQs) tests. MCQs are ideally suited for electronic examinations; as assessment items, they are typically developed within Learning Content Management Systems (LCMSs) and handled, across the cycle of educational and training activities, by Learning Management Systems (LMSs). Special focus is placed on novel grading methodologies that overcome the limitations and drawbacks of the most commonly used grading schemes for MCQs in electronic examinations. The paired MCQs grading method, in which a set of pairs of MCQs is composed, is presented. The two MCQs in each pair address the same topic, but their similarity is not evident to an examinee who lacks adequate knowledge of that topic. The adoption of the paired MCQs grading method might expand the use of electronic examinations, provided that the new method proves its equivalence to traditional methods that may be considered standard, such as constructed response (CR) tests. Research efforts in that direction are presented.
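
As one concrete illustration of how such a paired scheme might be scored, the sketch below implements a plausible grading rule in Python: full credit only when both questions of a pair are answered correctly, with inconsistent pairs treated as likely guesses. The specific weights are illustrative assumptions, not the chapter's published parameters.

```python
# Illustrative sketch of a paired-MCQ grading rule. The pairing idea is
# from the chapter; the weights below are assumptions for illustration.

def grade_paired_mcqs(pairs):
    """pairs: list of (q1_correct, q2_correct) booleans,
    one tuple per pair of similar questions on the same topic."""
    score = 0.0
    for q1_correct, q2_correct in pairs:
        if q1_correct and q2_correct:
            score += 1.0    # consistent knowledge: full credit
        elif q1_correct or q2_correct:
            score += 0.0    # inconsistent pair: likely a guess, no credit
        else:
            score -= 0.25   # both wrong: optional penalty (assumed)
    return max(score, 0.0)

# Example: three pairs, only the first answered consistently correctly
print(grade_paired_mcqs([(True, True), (True, False), (False, False)]))  # 0.75
```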


Author(s):  
Pilar Gandía Herrero ◽  
Agustín Romero Medina

The quality of academic performance and learning outcomes depend on various factors, both psychological and contextual. The academic context includes the training activities and the type of evaluation or examination, which in turn influences cognitive and motivational factors such as learning and study approaches and self-regulation. In our university context, the predominant type of exam is the multiple-choice test, and the cognitive requirement of its questions may vary. Following Bloom's taxonomy, questions range, from lower to higher cognitive demand, over factual knowledge, conceptual knowledge, application, and beyond. Teachers, however, rarely take these classifications into account when preparing this type of exam. We propose an adapted classification of multiple-choice questions by cognitive requirement (associative memorization, comprehension, application) and put it to the test by analyzing an exam from a Psychology degree course, relating the results to measures of learning approaches (the ASSIST and R-SPQ-2F questionnaires) and self-regulation in a sample of 87 participants. The results show differential academic performance across the "cognitive" types of questions, as well as differences in approaches to learning and self-regulation. The value of taking these cognitive-requirement factors into account when writing multiple-choice questions is underlined.
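
A minimal sketch of how an exam keyed to this three-level classification might be scored per cognitive level is given below; the question records and field names are hypothetical, not the study's instrument.

```python
# Score an exam separately for each cognitive level of its MCQs,
# mirroring the three-level classification (associative memorization,
# comprehension, application). Question data is hypothetical.
from collections import defaultdict

questions = [
    {"id": 1, "level": "memorization",  "correct": True},
    {"id": 2, "level": "comprehension", "correct": False},
    {"id": 3, "level": "application",   "correct": True},
]

def score_by_level(questions):
    hits, totals = defaultdict(int), defaultdict(int)
    for q in questions:
        totals[q["level"]] += 1
        hits[q["level"]] += int(q["correct"])
    return {level: hits[level] / totals[level] for level in totals}

print(score_by_level(questions))
# {'memorization': 1.0, 'comprehension': 0.0, 'application': 1.0}
```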


2019 ◽  
Vol 16 (1) ◽  
pp. 59-73 ◽  
Author(s):  
Peter McKenna

Purpose: This paper aims to examine whether multiple choice questions (MCQs) can be answered correctly without knowing the answer, and whether constructed response questions (CRQs) offer more reliable assessment.
Design/methodology/approach: The paper presents a critical review of existing research on MCQs, then reports on an experimental study in which two objective tests (using MCQs and CRQs) were set for an introductory undergraduate course. To maximise completion, tests were kept short; consequently, differences between individuals' scores across both tests are examined rather than overall averages and pass rates.
Findings: Most students who excelled in the MCQ test did not do so in the CRQ test. Students could do well without necessarily understanding the principles being tested.
Research limitations/implications: Conclusions are limited by the small number of questions in each test and by delivery of the tests at different times. This meant that statistical average data would be too coarse to use, and that some students took one test but not the other. Conclusions concerning CRQs are limited to disciplines where numerical answers or short, constrained text answers are appropriate.
Practical implications: MCQs, while useful in formative assessment, are best avoided for summative assessments. Where appropriate, CRQs should be used instead.
Social implications: MCQs are commonplace as summative assessments in education and training. Increasing the use of CRQs in place of MCQs should increase the reliability of tests, including those administered in safety-critical areas.
Originality/value: While others have recommended that MCQs should not be used (Hinchliffe, 2014; Srivastava et al., 2004) because they are vulnerable to guessing, this paper presents an experimental study designed to test whether this hypothesis is correct.
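
The guessing vulnerability at the heart of this study can be made concrete with a short binomial calculation: the probability of reaching a pass mark on an MCQ test by pure guessing. The parameters below (10 four-option questions, a 40% pass mark) are illustrative, not the paper's test design.

```python
# Back-of-envelope check of the guessing concern: probability of
# reaching a pass mark by guessing alone, modelled as a binomial.
from math import comb, ceil

def p_pass_by_guessing(n_questions, n_options, pass_fraction):
    p = 1.0 / n_options                       # chance of guessing one item right
    k_min = ceil(n_questions * pass_fraction) # minimum correct answers to pass
    return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
               for k in range(k_min, n_questions + 1))

# 10 four-option questions, 40% pass mark: a non-trivial chance by luck alone
print(round(p_pass_by_guessing(10, 4, 0.4), 3))  # ~0.224
```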


Author(s):  
Maitri Maulik Jhaveri ◽  
Jyoti Pareek

Online learning repositories are the heart of learning content management systems. This article proposes a model that utilizes the educational content of learning repositories to create and display multiple learning paths to students. When a student specifies a topic to study, the model creates the learning paths in the form of a tree, with the student-specified learning concept as the root node and its co-existing concepts as the child nodes. The model also proposes to automatically extract three types of co-existing concepts: prerequisites, subsequent topics, and features. Pattern-based mining and a rule-based classification approach are proposed for the extraction of co-existing concepts. Automatically extracted results are checked for meaningfulness and usefulness against expert-generated results. Evaluation of the authors' model on various learning materials shows the appropriate generation of learning paths depicting the co-existing concepts. The average F1 scores obtained are 78% for automatic prerequisite extraction, 83% for automatic subsequent-topic extraction, and 88% for automatic feature extraction.
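
A minimal sketch of the tree structure described above, with the queried topic as the root and co-existing concepts as typed child nodes, follows; the concept data and node layout are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a learning-path tree: the queried concept is the root and
# its co-existing concepts are child nodes, grouped by relation type.
# The example concept data is hypothetical.

class ConceptNode:
    def __init__(self, name, relation=None):
        self.name = name
        self.relation = relation   # 'prerequisite' | 'subsequent' | 'feature'
        self.children = []

def build_learning_tree(topic, coexisting):
    root = ConceptNode(topic)
    for relation, names in coexisting.items():
        root.children.extend(ConceptNode(n, relation) for n in names)
    return root

tree = build_learning_tree("binary search tree", {
    "prerequisite": ["binary tree", "recursion"],
    "subsequent":   ["AVL tree"],
    "feature":      ["O(log n) average lookup"],
})
for child in tree.children:
    print(f"{tree.name} -> {child.name} ({child.relation})")
```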


10.28945/4479 ◽  
2019 ◽  
Vol 18 ◽  
pp. 153-170
Author(s):  
Yolanda Belo ◽  
Sérgio Moro ◽  
António Martins ◽  
Pedro Ramos ◽  
Joana Martinho Costa ◽  
...  

Aim/Purpose: This paper presents a data mining approach for analyzing responses to advanced declarative programming questions. The goal of this research is to find a model that can explain the results students obtain when they take exams with constructed response (CR) questions and with equivalent multiple-choice questions (MCQs).
Background: The assessment of acquired knowledge plays a fundamental role in the teaching-learning process. It helps to identify the factors that can help the teacher develop pedagogical methods and evaluation tools, and it also contributes to the self-regulation of learning. However, the better format of questions for assessing declarative programming knowledge is still a subject of ongoing debate. While some research advocates the use of constructed responses, other work emphasizes the potential of multiple-choice questions.
Methodology: A sensitivity analysis was applied to extract useful knowledge from the relevance of the characteristics (i.e., the input variables) used in the data mining process to compute the score.
Contribution: Such knowledge helps teachers decide which format to adopt with respect to their objectives and expected student results.
Findings: The results show a set of factors that influence the discrepancy between answers in the two formats.
Recommendations for Practitioners: Teachers can make an informed decision about whether to choose multiple-choice or constructed-response questions, taking the results of this study into account.
Recommendation for Researchers: In this study, a block of exams with CR questions was verified to complement the area of learning, yielding greater performance in the evaluation of students and improving the teaching-learning process.
Impact on Society: The results of this research confirm the findings of several other researchers that the use of ICT and the application of MCQs add value to the evaluation process. In most cases the student is more likely to succeed with MCQs; however, if the teacher prefers to evaluate with CR questions, other research approaches are needed.
Future Research: Future research should include other question formats.
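
As a hedged sketch of the kind of sensitivity analysis described, one could fit a model of the CR/MCQ score discrepancy and rank the input variables by permutation importance; the features, data, and model below are synthetic stand-ins, since the abstract does not specify the paper's actual variables or algorithm.

```python
# Sketch: rank input variables of a fitted model by permutation
# importance, one common form of sensitivity analysis. All data and
# feature names here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 200
X = rng.random((n, 3))   # hypothetical inputs: prior grade, study hours, attendance
y = 0.6 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.standard_normal(n)  # synthetic discrepancy

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["prior_grade", "study_hours", "attendance"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")   # higher value = score is more sensitive to it
```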

