E-Assessment with Multiple-Choice Questions: A 5-Year Study of Students’ Opinions and Experience

10.28945/4491 ◽  
2020 ◽  
Vol 19 ◽  
pp. 001-029
Author(s):  
Rosalina Babo ◽  
Lurdes V. Babo ◽  
Jarkko T Suhonen ◽  
Markku Tukiainen

Aim/Purpose: The aim of this study is to understand students’ opinions and perceptions of e-assessment when the assessment process was changed from the traditional computer-assisted method to a multiple-choice Moodle-based method.
Background: In order to implement continuous assessment for a large number of students, several shifts are necessary, which implies as many different tests as the number of shifts required. Consequently, it is difficult to ensure homogeneity across the different tests, and a huge amount of grading time is needed. These problems with the traditional assessment based on computer-assisted tests led to a re-design of the assessment, resulting in the use of multiple-choice Moodle tests.
Methodology: A longitudinal, concurrent, mixed-methods study was implemented over a five-year period. A survey was developed and completed by 815 undergraduate students who experienced the electronic multiple-choice question (eMCQ) assessment in the courses of the IS department. Qualitative analyses included open-ended survey responses and interviews with repeating students in the first year.
Contribution: This study provides a reflection tool on how to incorporate frequent assessment moments in courses with a high number of students without overloading teachers with a huge workload. The research analysed the efficiency of assessing non-theoretical topics using eMCQ while ensuring the homogeneity of assessment tests, which needs to be complemented with other assessment methods in order to ensure that students develop and acquire the expected skills and competencies.
Findings: The students involved in the study appreciate the online multiple-choice quiz assessment method and perceive it as fair, but their preference for one assessment method or the other varied over the years. These changes in perception may be related to the improvement of the question bank and the categorisation of questions by difficulty level, which led to the elimination of the ‘luck factor’. Another major finding is that, although online multiple-choice quizzes are used successfully to assess theoretical topics, the same is not evident for practical topics. Therefore, this assessment needs to be complemented with other methods in order to achieve the expected learning outcomes.
Recommendations for Practitioners: In order to evaluate the same expected learning outcomes in practical topics, particularly in technology and information systems subjects, the evaluator should complement online multiple-choice quiz assessment with other approaches, such as a PBL method, homework assignments, and/or other tasks performed during the semester.
Recommendation for Researchers: This study explores e-assessment with online multiple-choice quizzes in higher education. It provides a survey that can be applied in other institutions that also use online multiple-choice quizzes to assess non-theoretical topics. In order to better understand students’ opinions on the development of skills and competencies with online multiple-choice quizzes, on the one hand, and with classical computer-assisted assessment, on the other, it would be necessary to add questions concerning these aspects. It would then be interesting to compare the findings of this study with results from other institutions.
Impact on Society: The increasing number of students in higher education has led to increased use of e-assessment activities, since these can provide a fast and efficient way to assess a high number of students. Therefore, this research provides meaningful insight into stakeholders’ perceptions of online multiple-choice quizzes for practical topics.
Future Research: An interesting future study would be to obtain the opinions of a particular set of students on two tests, one using online multiple-choice quizzes and the other a classical computer-assisted assessment method. A natural extension of the present study is a comparative analysis of the grades obtained by students who performed one or the other type of assessment (online multiple-choice quizzes vs. classical computer-assisted assessment).

Author(s):  
Dania Sabbahi

Introduction: A well-constructed blueprint, also known as a table of specifications or test specifications, makes assessments defensible indicators of students’ attainment of the course learning outcomes. Furthermore, it ensures the content validity of a test, which is a requirement for any evaluation that measures academic achievement.
Aim: This paper describes a template that was developed for building exam blueprints and provides step-by-step guidance on how to use it.
Developing the template: The template was designed in Excel with preset formulae and linked cells to enable academicians to construct an exam blueprint in an easy and simple way. It is composed of two main sheets: the “mother sheet”, which serves as the database for the course specifications and feeds all other sheets, and the “exam blueprint sheet”, which is specific to each test.
Using the exam blueprint template: Following simple steps to fill in specific cells in the mother sheet and the exam blueprint sheet enables users to produce a well-constructed plan for the exam.
Summary: The aim of blueprinting is to reduce threats to validity, yet preparing a high-quality blueprint can be a huge task for most faculty. This template provides a solid foundation for developing any multiple-choice question test in an easy and simple way.
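
As a rough illustration of what such a blueprint template computes, the short sketch below allocates a fixed number of MCQs across course topics and cognitive levels in proportion to teaching hours and preset level weights. The topics, hours, weights, and rounding rule are hypothetical examples, not the Excel template described in the paper.

```python
# A minimal sketch of a "table of specifications" calculation, assuming
# hypothetical topics, teaching hours, and cognitive-level weights.
# Real templates (including the Excel one described above) typically add
# rounding adjustments so the row and column totals match exactly.
TOTAL_ITEMS = 40
topics = {"Cell biology": 6, "Genetics": 4, "Physiology": 10}   # teaching hours
levels = {"Recall": 0.3, "Application": 0.5, "Analysis": 0.2}   # weight per level

total_hours = sum(topics.values())
print(f"{'Topic':<14}" + "".join(f"{lvl:>14}" for lvl in levels) + f"{'Total':>8}")
for topic, hours in topics.items():
    # Items per topic are proportional to teaching time (the "mother sheet" role).
    topic_items = round(TOTAL_ITEMS * hours / total_hours)
    # Items per cognitive level follow the preset weights (the "blueprint sheet" role).
    row = {lvl: round(topic_items * w) for lvl, w in levels.items()}
    print(f"{topic:<14}" + "".join(f"{row[lvl]:>14}" for lvl in levels)
          + f"{sum(row.values()):>8}")
```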


Author(s):  
Le Thai Hung ◽  
Nguyen Thi Quynh Giang ◽  
Tang Thi Thuy ◽  
Tran Lan Anh ◽  
Nguyen Tien Dung ◽  
...  

Computerized Adaptive Testing (CAT) is a form of assessment that requires fewer test questions to arrive at precise measurements of examinees' ability. One of the core technical components in building a CAT system is the set of mathematical algorithms that estimate an examinee's ability and select the most appropriate test questions for that estimate. These algorithms drive the operation of a system of adaptive multiple-choice questions on computers. Our research aims to develop the essential mathematical algorithms for a computerised system of adaptive multiple-choice tests. We also build a question bank of 500 multiple-choice questions standardised with Item Response Theory (IRT), whose difficulty levels follow a normal distribution (verified with the Kolmogorov-Smirnov test), to measure the mathematical ability of students in grade 10. The initial outcomes of our experiments with the question bank show that the bank satisfies the requirements of a psychometric model and that the constructed mathematical algorithms meet the criteria for application in computerised adaptive testing.
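
To make the two core algorithms mentioned above concrete, the sketch below pairs a simple maximum-likelihood ability estimate with maximum-information item selection under the Rasch (1PL) model. The model choice, the grid-search estimator, and the simulated 500-item bank are all assumptions for illustration, not the authors' actual implementation.

```python
# A minimal, illustrative CAT loop: estimate ability, pick the most informative
# remaining item, administer it, and repeat. Rasch (1PL) model assumed.
import math
import random

def prob_correct(theta, b):
    """Rasch model: probability of a correct response for ability theta and difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = prob_correct(theta, b)
    return p * (1.0 - p)

def estimate_ability(responses):
    """Maximum-likelihood estimate over a coarse grid of theta values.
    `responses` is a list of (difficulty, score) pairs with score in {0, 1}."""
    grid = [x / 10.0 for x in range(-40, 41)]          # theta in [-4, 4]
    def log_likelihood(theta):
        ll = 0.0
        for b, score in responses:
            p = prob_correct(theta, b)
            ll += math.log(p) if score else math.log(1.0 - p)
        return ll
    return max(grid, key=log_likelihood)

def select_next_item(theta, remaining):
    """Pick the unused item with maximum information at the current ability estimate."""
    return max(remaining, key=lambda b: item_information(theta, b))

# Simulated administration against a hypothetical bank of 500 items whose
# difficulties follow a normal distribution, mirroring the bank described above.
random.seed(0)
bank = [random.gauss(0.0, 1.0) for _ in range(500)]
true_theta, theta_hat, responses = 0.8, 0.0, []
for _ in range(20):                                    # fixed-length test for simplicity
    administered = {b for b, _ in responses}
    b = select_next_item(theta_hat, [x for x in bank if x not in administered])
    score = 1 if random.random() < prob_correct(true_theta, b) else 0
    responses.append((b, score))
    theta_hat = estimate_ability(responses)
print(f"estimated ability after 20 items: {theta_hat:.2f}")
```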


Author(s):  
Eva Ratihwulan

This research aimed to improve students' motivation and learning outcomes. The action used to improve both was a learning process based on the STAD (Student Teams Achievement Divisions) technique. The study used three stages: Pre Cycle, Cycle I, and Cycle II, with each cycle consisting of two meetings. Research data were obtained from a student questionnaire and a test with multiple-choice questions: the questionnaire tracked the development of learning motivation, while the multiple-choice test tracked the development of learning outcomes. The research data were analysed descriptively and qualitatively. The results show that implementing learning with the STAD technique can improve students' motivation and achievement. Learning motivation in the Pre Cycle averaged 23.47 (medium category), while in Cycle I the average increased to 26.57 (medium category), with 4 students (13%) achieving scores in the high category. In Cycle II the average learning motivation reached 33.87 (high category), with 22 students (73.3%) obtaining scores in the high and very high categories. Thus, by the end of Cycle II, learning motivation had increased. Learning outcomes in the Pre Cycle averaged 61.93 (sufficient category), increased to 69.53 (sufficient category) in Cycle I, and reached an average of 78.77 (good category) in Cycle II. Furthermore, the percentage of learning mastery was 3% in the Pre Cycle and 31% in Cycle I, and increased to 86.66% in Cycle II. Thus, by the end of Cycle II, student learning outcomes had increased. Based on these data, it can be concluded that implementing the STAD technique can improve students' learning motivation and learning outcomes.


Author(s):  
José Azevedo ◽  
Ema Patrícia Oliveira ◽  
Patrícia Damas Beites

The use of information and communication technologies (ICT) in the assessment process is becoming an asset, giving rise to so-called computer-based assessment or e-assessment. Nowadays, its use is increasingly common in higher education institutions. Closed question formats, namely multiple choice, are the most commonly used. This chapter presents a literature review of the main aspects related to this topic, including the main modalities of assessment (summative assessment and continuous assessment). Issues related to multiple-choice questions (MCQ) are discussed in more detail, covering the various formats of MCQ and their advantages and limitations, with a particular focus on their use in mathematics tests. Also, some guidelines for the quality assurance of MCQ are included.


2002 ◽  
Vol 39 (2) ◽  
pp. 91-99 ◽  
Author(s):  
Joanna Bull ◽  
Carol Collins

This paper presents a snapshot of the findings from the National Survey (1999) into CAA activity in higher education and gives an overview of the usage of CAA in the engineering sector. It offers an insight into the ways in which technology and objective tests can be used to assess a range of learning outcomes.


2019 ◽  
Vol 1 ◽  
pp. 35-43
Author(s):  
Ghulam Abbas ◽  
Sadruddin Bahadur Qutoshi ◽  
Dil Angaiz ◽  
...  

This study aims to explore teachers’ perceptions and practices regarding the use of rubrics in assessing students’ learning in the context of higher education institutions in Gilgit-Baltistan. A case study method of inquiry within a qualitative paradigm was adopted to collect the relevant data through semi-structured interviews with three purposefully selected teacher-educators (instructors) and six student-teachers (prospective teachers) of semesters III and IV from one of the colleges of education. The data were analysed through thematic analysis, and the following themes emerged: (1) the importance of assessment rubrics in teaching and learning processes, (2) the effectiveness of rubrics in assessing teaching and learning, (3) the co-construction of assessment rubrics by student-teachers and teacher-educators, and (4) the challenges for student-teachers and teacher-educators in developing and using assessment rubrics. From the discussion of the emerging themes, it is concluded that (a) the use of assessment rubrics makes the assessment process more meaningful to both teacher-educators and student-teachers, and (b) the use of rubrics makes student-teachers and teacher-educators more focused on their purposes of teaching and on learning outcomes. It is recommended that teacher-educators in teacher training institutions use rubrics to assess prospective teachers so that they, after completing their degree programmes, will use similar techniques in their respective schools to assess their students’ learning outcomes effectively. Keywords: Assessment, Assessment Rubrics, Rubric Design, Teaching and Learning.


2019 ◽  
Vol 9 (2) ◽  
pp. 141-150
Author(s):  
Ery Novita Sari ◽  
Zamroni Zamroni

Students' learning independence has been discussed in several articles in recent years. Through the development of an independent attitude towards learning, students can diagnose learning difficulties and find the right solutions to them. This study aimed to determine the influence of learning independence on students' accounting learning outcomes. The type of research used is ex-post facto quantitative research. The research population consisted of all 156 class XI students of a public middle school in the city of Yogyakarta. The instruments used were questionnaires and multiple-choice questions (MCQs). The validity and reliability of the questionnaire were assessed using Confirmatory Factor Analysis (CFA) in the Lisrel 8.80 application, while the validity and reliability of the MCQs were assessed using the Rasch approach in the Quest application. Data were collected through questionnaires and documentation. The learning independence instrument comprised 19 statements; the closed statements used a Likert scale with five alternative answers. The MCQ test comprised 18 questions. After calculating validity, reliability, item difficulty, and discriminating power, 18 valid statements remained. Simple regression was used as the data analysis technique. The results show that the learning independence variable has a significant and positive influence: it has a value of 2.187 and a significance value smaller than 0.05 (0.030 < 0.05).
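
For readers who want to see how such a regression result is produced, the sketch below fits a simple (one-predictor) ordinary least squares model and runs the t-test on the slope using synthetic data. The variable names, sample values, and coefficients are hypothetical; only the sample size of 156 is taken from the abstract.

```python
# A minimal simple-regression sketch on synthetic data (not the study's data):
# regress an outcome score on a learning-independence score and test the slope.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 156                                                    # sample size from the abstract
independence = rng.normal(70, 10, n)                       # hypothetical questionnaire scores
outcomes = 30 + 0.4 * independence + rng.normal(0, 12, n)  # hypothetical MCQ scores

# Ordinary least squares for y = a + b * x
x_mean, y_mean = independence.mean(), outcomes.mean()
sxx = np.sum((independence - x_mean) ** 2)
b = np.sum((independence - x_mean) * (outcomes - y_mean)) / sxx
a = y_mean - b * x_mean

# Standard error of the slope and the corresponding two-sided t-test
residuals = outcomes - (a + b * independence)
se_b = np.sqrt(np.sum(residuals ** 2) / (n - 2) / sxx)
t_stat = b / se_b
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)

print(f"slope = {b:.3f}, t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```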


Author(s):  
Sheila Meilina ◽  
Tarmizi Ninoersy ◽  
Salma Hayati

So far, the evaluation questions for Arabic language lessons at MAS Ruhul Islam Anak Bangsa Aceh Besar have not undergone in-depth qualitative analysis against competency standards, especially regarding item construction, before being tested on students. The teacher designs the items based only on the difficulty level of the questions, without considering other aspects. Therefore, the aim of this research is to describe the accuracy of the construction of multiple-choice items in Arabic lessons in class XI MAS 2019/2020, using descriptive statistical analysis methods. This is qualitative research; the population is the 50 multiple-choice questions in Arabic designed by the teacher for class XI MAS Ruhul Islam Anak Bangsa 2019/2020. The sample consists of 25 questions drawn from the population using a simple random sampling technique. Data were collected through documentation. Qualitative analysis was carried out using a format (table) designed by the researcher. Based on the sample, 16 questions conformed to the construction-accuracy criteria for multiple-choice items, while the other 9 questions did not. Viewed across the whole sample, 8 construction aspects were dominantly present in all sampled items, while 4 other aspects were absent. It was also found that the 50 questions designed by the teacher are of the distractor (multiple-choice) type.


2021 ◽  
Vol 5 (2) ◽  
pp. 231
Author(s):  
Ni Putu Sri Diah Anggraeni ◽  
Gede Wira Bayu ◽  
I Gde Wawan Sudatha

This study aimed to develop an instrument for assessing science learning outcomes based on HOTS (Higher Order Thinking Skills). The research was development research using a 4D (four-D) model, with stages consisting of define, design, develop, and disseminate; this development research was carried out only up to the develop stage. The subject of the research was the HOTS-based science learning outcome assessment instrument, in the form of grids and multiple-choice test sheets. Data were obtained using interview, observation, and test methods. Validation of the assessment instrument was carried out by two subject-matter experts using a validation sheet, and a limited trial was conducted with 78 students using the multiple-choice objective test instrument. The results were analysed for validity, reliability, discriminating power, difficulty level, and distractor quality. The analysis showed that the HOTS-based science learning outcome assessment instrument had a validity of 0.90 (very high category) and a reliability of 0.81 (very high category). The discrimination analysis yielded 2 items with very good criteria, 14 items with good criteria, and 9 items with sufficient criteria. In the difficulty level test, 12 questions fell into the easy category and 13 questions into the medium category. The distractor quality analysis found 63 distractors at the >5% level, meaning those distractors function well, and 12 distractors below the 5% level, meaning they do not function properly. These results indicate that the developed HOTS-based science learning outcome assessment instrument is valid and reliable and is suitable for use as an assessment instrument across a variety of material.
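
To make the reported item-analysis quantities concrete, the sketch below computes the classical difficulty index, the upper-lower discrimination index, and distractor functioning using the common rule that a distractor "works" when chosen by at least 5% of examinees. The answer key and responses are invented for illustration; they are not the instrument or trial data described above.

```python
# Classical item analysis on hypothetical data: difficulty, discrimination,
# and distractor functioning (>= 5% selection rate). Not the authors' data.
from collections import Counter

# Each row is one student's chosen options; `key` is the correct option per item.
responses = [
    ["A", "C", "B"], ["A", "C", "D"], ["B", "C", "B"], ["A", "D", "B"],
    ["A", "C", "B"], ["C", "C", "A"], ["A", "B", "B"], ["A", "C", "B"],
]
key = ["A", "C", "B"]

scores = [[1 if ans == k else 0 for ans, k in zip(row, key)] for row in responses]
totals = [sum(row) for row in scores]
n_students, n_items = len(responses), len(key)

# Upper and lower groups by total score (27% groups are common; halves used here).
order = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
upper, lower = order[: n_students // 2], order[n_students // 2:]

for j in range(n_items):
    difficulty = sum(scores[i][j] for i in range(n_students)) / n_students
    discrimination = (sum(scores[i][j] for i in upper) / len(upper)
                      - sum(scores[i][j] for i in lower) / len(lower))
    picks = Counter(responses[i][j] for i in range(n_students))
    functioning = [opt for opt in "ABCD"
                   if opt != key[j] and picks.get(opt, 0) / n_students >= 0.05]
    print(f"item {j + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={discrimination:.2f}, functioning distractors={functioning}")
```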


Author(s):  
Claire Gwinnett ◽  
John Cassella ◽  
Mike Allen

Multiple-choice questions (MCQs) are a well-known, traditional, and accepted method of assessment. The use of MCQs for testing students has produced numerous debates amongst academics concerning their effectiveness: they are viewed as practical and efficient, but also perceived as possibly 'too easy' and potentially unable to appropriately test the higher-order cognitive skills that essay questions can assess. The use of MCQs in a forensic science context is currently being investigated, not only for use within forensic science education, but also for testing the competency of qualified forensic practitioners. This paper describes a Higher Education Academy funded project that is investigating the design and implementation of MCQs for testing forensic practitioners, and the lessons learnt so far, which will assist academics in developing robust MCQ assessments within forensic science degrees that promote and assess deep learning.

