An analysis of multiple choice questions (MCQs): Item and test statistics from mathematics assessments in senior high school

2018 ◽  
Vol 4 (1) ◽  
pp. 70-78
Author(s):  
Mutiara Kusumawati ◽  
Samsul Hadi

The multiple-choice test is a common test format in education. One of its purposes is to evaluate the success of the learning process in a particular subject; the usefulness of such an evaluation therefore depends on the quality of the test items. This research was conducted to reveal, statistically, the quality of the final mathematics examination items. It was descriptive quantitative research employing the two-parameter logistic (2PL) model of Item Response Theory (IRT). The data were obtained from a sample of 353 students selected using a purposive sampling technique. The findings show that 40% of the 35 items tested are very difficult, 60% are of medium difficulty, and none are easy. The most difficult material is trigonometric calculation. The item discrimination indices are distributed as follows: 8.57% of the items are categorized as very low, 51.43% as low, 31.43% as medium, 5.71% as high, and 2.86% as very high. Moreover, the research found that all distractors functioned well. The test provides its highest information at ability θ = 0.4, with an information function value of 5.38 and SEM = 0.6, and is suitable for students with ability in the range -1.42 < θ < 2.65.
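The link between the 2PL item parameters, the test information function, and the SEM reported above can be sketched as follows. This is a minimal illustration with hypothetical discrimination (a) and difficulty (b) values, not the study's estimated parameters:

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def test_information(theta, a, b):
    """Test information: sum of item informations a_i^2 * P_i * (1 - P_i)."""
    p = p_2pl(theta, a, b)
    return np.sum(a**2 * p * (1.0 - p))

# Hypothetical discrimination (a) and difficulty (b) parameters for a few items
a = np.array([1.2, 0.8, 1.5, 0.6])
b = np.array([0.5, 1.0, -0.3, 1.8])

theta = 0.4                              # ability level of interest
info = test_information(theta, a, b)     # test information at theta
sem = 1.0 / np.sqrt(info)                # standard error of measurement
print(f"I({theta}) = {info:.2f}, SEM = {sem:.2f}")
```

The ability interval over which the test is "suitable" is typically read off as the range of θ where the information function stays above a chosen threshold.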

2021 ◽  
Vol 1 (11) ◽  
pp. 735-748
Author(s):  
Hermansyah Hermansyah ◽  
Nurhendi Nurhendi

The purpose of this research was to see how flash cards influence students' English vocabulary mastery. The researchers used a quantitative method with a posttest-only control group design, a form of true experimental design, and random sampling was employed to select the sample. The data were collected through a multiple-choice test. The experimental and control groups received different treatments: the experimental group learned English with flash card media, while the control group learned English conventionally. After the treatments, each group was given a posttest consisting of 20 multiple-choice questions with 4 answer options that had been piloted beforehand and validated empirically. The data were analyzed descriptively and inferentially; the inferential analysis used a t-test, preceded by tests of the assumptions of data normality and homogeneity of variance. The results showed that flash cards had an influence on students' English vocabulary mastery. Therefore, this research provides evidence of a significant effect of using flash cards on children's English vocabulary mastery.
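The inferential procedure described above (a t-test preceded by normality and homogeneity-of-variance checks) could be reproduced along these lines. The score arrays are placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder posttest scores (number correct out of 20) for the two groups
experimental = np.array([16, 18, 15, 17, 19, 14, 16, 18, 17, 15])
control      = np.array([12, 14, 11, 13, 15, 12, 10, 13, 14, 12])

# Normality of each group (Shapiro-Wilk)
for name, scores in [("experimental", experimental), ("control", control)]:
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance (Levene's test)
lev_stat, lev_p = stats.levene(experimental, control)
print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")

# Independent-samples t-test (equal variances assumed if Levene's p > .05)
t_stat, t_p = stats.ttest_ind(experimental, control, equal_var=lev_p > 0.05)
print(f"t = {t_stat:.3f}, p = {t_p:.3f}")
```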


2016 ◽  
Vol 7 (2) ◽  
pp. 44
Author(s):  
Kurnia Ningsih

This research aims to describe MIPA (mathematics and natural sciences) teachers' ability to design knowledge assessments through an analysis of the cognitive aspects covered by their test items. It used a descriptive method with a population of SMP MIPA teachers in Pontianak City who have taught for more than 5 years and hold an undergraduate degree. The sample, selected using a purposive sampling technique, consisted of 12 teachers who submitted MIPA test items. The research instrument was the document of multiple-choice test items designed by the teachers. The data were analyzed descriptively through data reduction, systematic data display, and conclusion drawing. The results showed that, across the 12 test instruments made by the teachers with 380 questions in total, the items were distributed as follows: 17.37% knowledge, 67.90% understanding, 8.68% application, and 6.05% analysis. No questions addressed the evaluation or creation aspects. Keywords: teachers' ability, designing knowledge assessment.
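The percentage per cognitive level can be obtained with a simple tally such as the following sketch; the item labels are hypothetical, not the study's data:

```python
from collections import Counter

# Hypothetical cognitive-level labels assigned to each multiple-choice item
levels = ["knowledge", "understanding", "understanding", "application",
          "understanding", "knowledge", "analysis", "understanding"]

counts = Counter(levels)
total = len(levels)
for level, n in counts.most_common():
    print(f"{level}: {n} items ({100 * n / total:.2f}%)")
```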


Author(s):  
Manju K. Nair ◽  
Dawnji S. R.

Background: Carefully constructed, high-quality multiple choice questions can serve as effective tools to improve the standard of teaching. This item analysis was performed to find the difficulty index, discrimination index, and number of non-functional distractors in single best response type questions. Methods: 40 single best response type questions with four options, each carrying one mark for the correct response, were taken for item analysis. There was no negative marking, and the maximum mark was 40. Based on the scores, the evaluated answer scripts were arranged from the highest score to the lowest, and only the upper third and lower third were included. The response to each item was entered in Microsoft Excel 2010, and the difficulty index, discrimination index, and number of non-functional distractors per item were calculated. Results: 40 multiple choice questions and 120 distractors were analysed in this study. 72.5% of the items were good, with a difficulty index between 30% and 70%; 25% of the items were difficult and 2.5% were easy. 27.5% of the items showed excellent discrimination between high-scoring and low-scoring students. One item had a negative discrimination index (-0.1). There were 9 items with non-functional distractors. Conclusions: This study emphasises the need to improve the quality of multiple choice questions. Repeated evaluation by item analysis and modification of non-functional distractors may be performed to enhance the standard of teaching in Pharmacology.
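The indices reported here follow the standard upper-group/lower-group procedure. A minimal sketch, assuming an illustrative 0/1 response matrix rather than the study's data, is:

```python
import numpy as np

# Rows = students sorted by total score (descending), columns = items; 1 = correct
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
])

n = len(responses)
k = n // 3                      # size of the upper and lower thirds
upper, lower = responses[:k], responses[-k:]

p_upper = upper.mean(axis=0)    # proportion correct in the upper group
p_lower = lower.mean(axis=0)    # proportion correct in the lower group

difficulty = (p_upper + p_lower) / 2      # difficulty index (often reported as a %)
discrimination = p_upper - p_lower        # discrimination index

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i}: P = {p:.2f}, D = {d:.2f}")
```

A negative D, as for the one item noted above, means the lower-scoring group answered the item correctly more often than the higher-scoring group.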


2019 ◽  
Vol 1 (1) ◽  
pp. 13-19
Author(s):  
Dewi Manalu ◽  
Kammer Tuahman Sipayung ◽  
Febrika Dwi Lestari

The purpose of this study was to determine the quality of the reading final examination for grade eleven at SMA N8 Medan in terms of reliability, difficulty level, discrimination power, and distractor quality. This is qualitative-quantitative research whose subjects were grade XI students of SMA N8 Medan. The data were analyzed with the ANATES program version 4.0.9. The analysis shows that: (1) 14 multiple-choice items (56%) can be considered valid, while 11 items (44%) are invalid; (2) the multiple-choice items can be considered reliable, with a reliability coefficient of 0.90; (3) by difficulty level, 3 items (12%) are categorized as easy, 7 (28%) as satisfactory, 2 (8%) as difficult, 3 (12%) as very easy, and 3 (12%) as very difficult; (4) by discrimination power, 12 items (48%) are categorized as poor, 2 (8%) as average, 1 (4%) as good, and 8 (32%) as excellent.
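Item validity of the kind reported in point (1) is commonly estimated from the item-total correlation. The following is a rough sketch of that idea with an illustrative response matrix, not a reimplementation of ANATES:

```python
import numpy as np

# Illustrative 0/1 response matrix (rows = students, columns = items)
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

totals = responses.sum(axis=1)

# Item-total (point-biserial) correlation as a rough per-item validity indicator
for i in range(responses.shape[1]):
    r = np.corrcoef(responses[:, i], totals)[0, 1]
    print(f"item {i + 1}: r_pbis = {r:.2f}")
```

Items whose correlation falls below a chosen critical value would be flagged as invalid, which is the kind of classification summarized in point (1).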


Author(s):  
Abhijeet S. Ingale ◽  
Purushottam A. Giri ◽  
Mohan K. Doibale

Background: Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. However, it is said that MCQs emphasize recall of factual information rather than conceptual understanding and interpretation of concepts, and there is more to writing good MCQs than writing good questions. The objectives of the study were to assess the item and test quality of multiple choice questions, to address the learning difficulties of students, and to identify the low achievers in the test. Methods: One hundred MBBS students from a Government medical college were examined with a test comprising thirty MCQs. All items were analysed for difficulty index, discrimination index, and distractor efficiency. Data were entered in MS Excel 2007 and analysed in SPSS 21 with statistical tests of significance. Results: The difficulty index of the majority (80%) of items was within the acceptable range, 63% of items showed an excellent discrimination index, and distractor efficiency was overall satisfactory. Conclusions: Multiple choice questions of average difficulty that also have high discriminating power and good distractor efficiency should be incorporated into students' examinations.
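Distractor efficiency is usually derived from how many of an item's distractors attract a meaningful share of examinees (a 5% cut-off is common). A hedged sketch with hypothetical option counts, not the study's data:

```python
# Hypothetical counts of how many examinees chose each option of one item
# (the key is the correct answer; the rest are distractors)
option_counts = {"A": 52, "B": 30, "C": 4, "D": 14}
key = "A"

n_examinees = sum(option_counts.values())
distractors = {opt: c for opt, c in option_counts.items() if opt != key}

# A distractor is usually called functional if >= 5% of examinees select it
functional = [opt for opt, c in distractors.items() if c / n_examinees >= 0.05]
nfd = len(distractors) - len(functional)

# Distractor efficiency: share of distractors that are functional
de = 100 * len(functional) / len(distractors)
print(f"functional distractors: {functional}, NFDs: {nfd}, DE = {de:.0f}%")
```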


2020 ◽  
Vol 78 (4) ◽  
pp. 576-594
Author(s):  
Bing Jia ◽  
Dan He ◽  
Zhemin Zhu

The quality of multiple-choice questions (MCQs), as well as students' solving behavior on MCQs, are educational concerns. MCQs cover wide educational content and can be scored immediately and accurately. However, many studies have found flawed items in this exam type, possibly yielding misleading insights into students' performance and affecting important decisions. This research sought to determine the characteristics of MCQs and the factors that may affect their quality by evaluating data with item response theory (IRT). Four samples of different sizes, from the US and China and from secondary and higher education, were chosen. Item difficulty and discrimination were estimated using IRT statistical item analysis models. The results were as follows. First, MCQ exams involve only a little guessing behavior, because all data fit the two-parameter logistic model better than the three-parameter logistic model. Second, the quality of MCQs depended more on the degree of training of the examiners than on whether the level was secondary or higher education. Lastly, MCQs must be evaluated to ensure that only high-quality items are used as bases of inference in secondary and higher education. Keywords: higher education, item evaluation, item response theory, multiple-choice test, secondary education
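The 2PL-versus-3PL finding can be framed as a model-fit comparison: the 3PL adds a lower asymptote (pseudo-guessing parameter) per item, and its extra parameters must earn their keep. A minimal sketch using information criteria on hypothetical log-likelihoods, not the study's fitted models:

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    # 3PL adds a lower asymptote c (pseudo-guessing) to the 2PL curve
    return c + (1.0 - c) * p_2pl(theta, a, b)

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# Hypothetical maximized log-likelihoods for one calibrated 30-item test
ll_2pl, ll_3pl = -4210.7, -4205.3
n_items = 30
aic_2pl = aic(ll_2pl, 2 * n_items)   # a, b per item
aic_3pl = aic(ll_3pl, 3 * n_items)   # a, b, c per item
print(f"AIC 2PL = {aic_2pl:.1f}, AIC 3PL = {aic_3pl:.1f}")
# A lower AIC for the 2PL model would suggest the extra guessing parameters are not needed
```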


2021 ◽  
Author(s):  
S. Mehran Hosseini ◽  
Reza Rahmati ◽  
Hamid Sepehri ◽  
Vahid Tajari ◽  
Mahdi Habibi-koolaee

Abstract Background: The purpose of this pilot study was to compare multiple-choice test statistics between free-admission and tuition-paying medical and dental students' exams. Methods: This descriptive-analytical study was conducted at Golestan University of Medical Sciences in Iran in 2020. The study population included students of medicine and dentistry. A total of 56 exams in the physiology course were selected across the two admission groups, tuition-free and tuition-paying. The results of the quantitative evaluation of the tests were used as the data of this study. The variables included the difficulty index, discrimination index, degree of difficulty, score variance, and Kuder-Richardson correlation coefficient. Results: There were 32 medical and 24 dentistry exams, with cumulative totals of 437 and 330 multiple choice questions, respectively. The numbers of medical students in the tuition-free and tuition-paying admissions were 1336 and 1076, and for dental students these numbers were 395 and 235, respectively. There were no significant differences in normalized adjusted exam scores between the two admission groups in either the medical or the dentistry tests. The mean discrimination index in the tuition-free group was higher than in the tuition-paying group, and the interaction between the type of admission and the field of study was significant for the discrimination index: the difference was larger for tuition-free dental students than for tuition-free medical students and tuition-paying dental students. Conclusion: The type of student admission has no significant effect on student assessment results in multiple-choice exams under matched educational conditions.
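The Kuder-Richardson coefficient listed among the exam statistics (KR-20 for dichotomously scored items) can be computed from a 0/1 response matrix as in this sketch; the matrix is illustrative, not data from the medical or dental exams:

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson 20 reliability for a 0/1 response matrix (rows = examinees)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)              # proportion correct per item
    totals = responses.sum(axis=1)          # total score per examinee
    return (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / totals.var(ddof=1))

# Small illustrative matrix
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
])
print(f"KR-20 = {kr20(responses):.2f}")
```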


BUANA ILMU ◽  
2020 ◽  
Vol 5 (1) ◽  
pp. 158-165
Author(s):  
Yani Octafia

This research aims to determine and analyze the inflection of vocabulary mastery among vocational high school students in Bogor. The population observed was eleventh grade students of vocational schools in Bogor in the academic year 2019/2020, and the respondents were students at one of those schools. The research uses a qualitative descriptive method; the population consisted of the 336 class XI students of Sirajul Falah Vocational School in Bogor. The sampling used a probability sampling technique: 20% of the population was taken, so the total sample was 68 students. The instrument of this research was a multiple-choice vocabulary test, which the researchers asked the students to answer. The indicators used to determine vocabulary mastery were: 1) determining words with related meanings (synonyms); 2) determining words with contradictory meanings (antonyms); 3) determining words related to spelling; and 4) determining words related to grammar. The test, consisting of 20 multiple-choice questions on these aspects of vocabulary mastery, was given directly to the subjects. Based on the analysis of the 68 respondents' answers to the 20 questions across the 4 indicators, 84% of the questions were answered correctly and the remaining 16% could not be answered correctly by the students. It can therefore be concluded that there is inflection in vocabulary mastery on the vocabulary test. Keywords: Vocabulary Mastery; Multiple Choice Test


2021 ◽  
Vol 5 (1) ◽  
pp. 1
Author(s):  
MIFTACHUL ULUM

At the end of each semester, an education unit always carries out a learning evaluation; one form of learning evaluation in schools is the final semester examination. This study aims to determine the quality of the online business lesson items for class XII of SMK Sunan Drajat Lamongan in the odd semester of the 2017/2018 academic year in terms of validity, reliability, difficulty level, and item discrimination. This quantitative research is based on data from 21 student answer sheets for 25 multiple choice questions. SPSS and Anates V4 software were used to measure validity, reliability, difficulty level, and item discrimination. The results showed that 15 items (60%) were declared invalid and 10 items (40%) valid. The items were declared reliable, with a Cronbach's alpha of 0.640 > 0.6. By difficulty level, 5 items fell in the very easy category, 10 in the easy category, and 10 in the moderate category. Of the 25 items, 8 (32%) have good discrimination, 7 (28%) sufficient, and 10 (40%) poor.
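The Cronbach's alpha comparison used here (0.640 > 0.6) can be reproduced from a student-by-item score matrix. A hedged sketch with placeholder data, not the 21 actual answer sheets:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an examinee-by-item score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder 0/1 answers (rows = students, columns = items)
scores = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0],
])
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.3f}")  # typically compared with the 0.6 cut-off
```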


Author(s):  
Suryakar Vrushali Prabhunath ◽  
Surekha T. Nemade ◽  
Ganesh D. Ghuge

Introduction: Multiple Choice Questions (MCQs) are one of the most preferred assessment tools in medical education, used in both formative and summative assessment. The performance of MCQs as an assessment tool can be statistically analysed by item analysis. Thus, the aim of this study was to assess the quality of MCQs by item analysis and to identify the valid test items to be included in the question bank for further use. Materials and methods: A formative assessment of first-year MBBS students was carried out with 40 MCQs as part of an internal examination in Biochemistry. Item analysis was done by calculating the difficulty index (P), discrimination index (d), and number of non-functional distractors. Results: The difficulty index (P) of 65% (26) of the items was well within the acceptable range, 7.5% (3) of the items were too difficult, and 27.5% (11) were too easy. The discrimination index (d) of 70% (28) of the items fell in the recommended category, whereas 10% (4) had an acceptable and 20% (8) a poor discrimination index. Of the 120 distractors, 88.33% (106) were functional and 11.66% (14) were non-functional. After considering the difficulty index, discrimination index, and distractor effectiveness, 42.5% (17) of the items were found suitable for inclusion in the question bank. Conclusion: Item analysis remains an essential tool that should be practiced regularly to improve the quality of assessment methods and to obtain feedback for instructors. Key Words: Difficulty index, Discrimination index, Item analysis, Multiple choice questions, Non-functional distractors
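Selecting "ideal" items for a question bank combines the three indices. A hedged filtering sketch with commonly used rules of thumb (the exact thresholds applied in the study may differ) and hypothetical per-item statistics:

```python
# Hypothetical per-item statistics: (difficulty index %, discrimination index, number of NFDs)
items = {
    "Q1": (55.0, 0.35, 0),
    "Q2": (82.0, 0.15, 1),
    "Q3": (40.0, 0.28, 0),
    "Q4": (25.0, 0.10, 2),
}

def is_ideal(p, d, nfd):
    # Common rules of thumb: 30% <= P <= 70%, d >= 0.25, and no non-functional distractors
    return 30.0 <= p <= 70.0 and d >= 0.25 and nfd == 0

question_bank = [q for q, stats in items.items() if is_ideal(*stats)]
print("items recommended for the question bank:", question_bank)
```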

