STANDARISASI INSTRUMEN INTEGRATED ASSESSMENT HASIL BELAJAR BAHASA DENGAN PROGRAM QUEST (Standardization of Integrated Assessment Instruments for Language Learning Outcomes Using the QUEST Program)

LITERA ◽  
2014 ◽  
Vol 13 (2) ◽  
Author(s):  
Pujiati Suyata ◽  
Nur Hidayanto ◽  
Agus Widyantoro

This study aims to produce a model of standardized convergent and divergent integrated-assessment instruments for Indonesian and English learning outcomes at junior high schools using Item Response Theory. It employed a research and development design. In the first year, a learning continuum was developed, followed by the development of integrated assessment instruments based on the learning continuum. Instrument standardization was then conducted using 1-PL Item Response Theory and the QUEST program. The results of the first year were 30 standardized integrated assessment instrument sets, a guidebook for construction of the integrated assessment instruments for language learning outcomes, and the QUEST program guidebook. In the second year, the integrated assessment instruments for language learning outcomes were disseminated in the provinces of Yogyakarta Special Territory, South Kalimantan, and West Nusa Tenggara.
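The standardization step above rests on the 1-PL (Rasch) model implemented by the QUEST program. The following is a minimal sketch, assuming simulated dichotomous responses and a plain scaled-gradient joint calibration rather than QUEST's own estimation routine; all names and data are illustrative.

```python
# Minimal 1-PL (Rasch) calibration sketch -- not the QUEST program itself.
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the 1-PL model: 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def calibrate(responses, n_iter=300, step=0.5):
    """Alternating scaled-gradient updates for person abilities and item difficulties."""
    n_persons, n_items = responses.shape
    theta = np.zeros(n_persons)             # person abilities
    b = np.zeros(n_items)                   # item difficulties
    for _ in range(n_iter):
        resid = responses - rasch_prob(theta[:, None], b[None, :])
        theta += step * resid.mean(axis=1)  # ability moves toward observed minus expected
        b     -= step * resid.mean(axis=0)  # difficulty moves the opposite way
        b     -= b.mean()                   # centre difficulties to fix the scale
    return theta, b

# Illustrative run: 200 simulated examinees answering 30 hypothetical items.
rng = np.random.default_rng(0)
true_b = np.linspace(-2.0, 2.0, 30)
true_theta = rng.normal(size=200)
data = (rng.random((200, 30)) < rasch_prob(true_theta[:, None], true_b[None, :])).astype(float)
theta_hat, b_hat = calibrate(data)
print("difficulty recovery r =", round(np.corrcoef(true_b, b_hat)[0, 1], 2))
```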

2021 ◽  
Vol 23 (2) ◽  
pp. 201-208
Author(s):  
Wahyuni Wahyuni ◽  
Muhammad Fahmi

Item quality analysis is carried out so that a test actually consists of sound items that measure learning outcomes. Currently, many teachers still do not analyze the questions they write because they believe the analysis takes too much time and effort. As a result, many items in use cannot produce accurate data about student learning outcomes. Therefore, a website-based item analysis system was built using the Item Response Theory method to determine the quality of the items presented. In this study, the system was developed with a 3PL model. The system is packaged together with a CBT (computer-based testing) application, so participants' responses used for the item analysis are collected automatically when they take the CBT test.
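A minimal sketch of the 3PL model mentioned above, assuming the standard three-parameter logistic form; the parameter values are illustrative and not taken from the described system.

```python
# Standard 3PL item characteristic curve -- an illustrative sketch, not the system's code.
import numpy as np

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Example item: discrimination a = 1.2, difficulty b = 0.5, pseudo-guessing c = 0.2,
# evaluated across a range of examinee ability levels.
theta = np.linspace(-3.0, 3.0, 7)
print(np.round(p_3pl(theta, a=1.2, b=0.5, c=0.2), 3))
```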


2019 ◽  
Vol 4 (9) ◽  
pp. 157-164
Author(s):  
Ioannis Katsenos ◽  
Spyros Papadakis ◽  
George S. Androulakis

Assessment of an educational program/course, based on quantitative data, is attempted in this study, by using the final deliverables of the trainees and assess them according to a predefined set of items connected to the desired Learning Outcomes and a predefined scale for each item. The statistical analysis of the items’ grades, first using factor analysis and then using an Item Response Theory model, gives an indication of the Learning Outcomes’ degree of achievement and consequently guides the training designers to modify training strategies for a potential next cycle of the training program/course. For this study, a teacher training course on flipped classroom methodology, has been used and the above concept was tested. Our analysis using Item Response Theory, revealed the Learning Outcomes partially or not at all achieved showing very good agreement with trainers’ intuitive observations. For the future, the use of such a quantitative assessment could involve Structural Equation Modelling (SEM) tools to assess the relations among learning outcomes, prior knowledge and teaching practices and temporal analysis during training course execution using not only final data but also data from intermediate phases.
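As a rough illustration of the two-step analysis described above (factor analysis of the item grades followed by a per-Learning-Outcome summary), here is a minimal sketch. The item-to-outcome mapping, the 0-3 grading scale, and the data are assumptions for the example, and the per-outcome index here is a simple proportion-of-maximum stand-in rather than the study's IRT model.

```python
# Illustrative two-step analysis of deliverable grades; data and mapping are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
grades = rng.integers(0, 4, size=(40, 12)).astype(float)  # 40 deliverables x 12 items, graded 0-3
lo_of_item = np.repeat([0, 1, 2], 4)                      # hypothetical mapping: 4 items per Learning Outcome

# Step 1: one-factor check -- loadings from the leading eigenvector of the item correlation matrix.
corr = np.corrcoef(grades, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)                   # eigenvalues in ascending order
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
print("first-factor loadings:", np.round(loadings, 2))

# Step 2: a simple achievement index per Learning Outcome -- the proportion of the maximum
# grade obtained on the items mapped to that outcome (a stand-in for an IRT expected score).
max_grade = 3.0
for lo in np.unique(lo_of_item):
    achieved = grades[:, lo_of_item == lo].mean() / max_grade
    print(f"Learning Outcome {lo}: {achieved:.2f} of maximum")
```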


Author(s):  
Yuemei Liu ◽  
Xuetao Zhao

The popularity of computer technology in English teaching has led to the establishment of many English learning platforms, but the improvement in students' English proficiency is limited by a lack of relevance, self-adaptive test questions, and analytical capability. Project management theory is introduced into English learning to provide students, in a more intelligent and personalized way, with teaching content and test questions better suited to their actual situation. At the same time, static and dynamic database models based on students' own learning behavior are constructed to store students' learning records. Combining the advantages of hierarchical selection, the SH method, and an improved polynomial model, this paper puts forward a new item selection model. The paper introduces the basic theory and related technology, then studies the requirements of the English learning system in depth. Finally, it presents the design of an English learning system based on item response theory and validates the effectiveness of its English item selection from the perspective of application. The system provides teachers and students with convenient learning strategies, item selection strategies, test strategies, and academic performance strategies. The introduction of item response theory enables the system to become truly student-centered and provides a more comprehensive, self-adaptive learning model, which is of great significance for improving the English learning efficiency of college students in China.
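The item selection component can be illustrated with the standard maximum-information rule used in IRT-based adaptive testing. This is a minimal sketch under the 2PL model with a hypothetical item bank; it does not reproduce the paper's hierarchical selection, SH method, or improved polynomial model.

```python
# Maximum-information adaptive item selection under the 2PL model -- illustrative only.
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def pick_next_item(theta_hat, a, b, administered):
    """Choose the unused item with the highest information at the current ability estimate."""
    info = info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf     # exclude items already presented
    return int(np.argmax(info))

a = np.array([0.8, 1.2, 1.5, 0.9, 1.1])    # discriminations (hypothetical item bank)
b = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])   # difficulties
print(pick_next_item(theta_hat=0.3, a=a, b=b, administered={1}))
```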


2020 ◽  
Vol 10 (2) ◽  
pp. 259
Author(s):  
Yusuf F. Zakariya ◽  
Hans Kristian Nilsen ◽  
Simon Goodchild ◽  
Kirsten Bjørkestøl

The importance of students' prior knowledge to their current learning outcomes cannot be overemphasised. Students with adequate prior knowledge are better prepared for the current learning materials than those without it. However, assessment of engineering students' prior mathematics knowledge has been beset by a lack of uniformity in measuring instruments and by inadequate validity studies. This study attempts to provide evidence of validity and reliability of a Norwegian national test of prior mathematics knowledge using an explanatory sequential mixed-methods approach, involving an item response theory model followed by cognitive interviews of some students among the 201 first-year engineering students that constitute the sample of the study. The findings confirm an acceptable construct validity for the test, with reliable items and a high reliability coefficient of .92 for the whole test. Mixed results are found for the discrimination and difficulty indices of the test questions: some questions have unacceptable discrimination and require improvement, some are easy, and some appear too tricky for students. Results from the cognitive interviews reveal the likely reasons for students' difficulty with some questions to be a lack of proper understanding of the questions, misreading of the text, an improper grasp of word-problem tasks, and the unavailability of calculators. The findings underscore the significance of validity and reliability checks of test instruments and their effect on scoring and computing aggregate scores. The methodological approaches to validity and reliability checks in the present study can be applied to other national contexts.
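For illustration, here is a minimal sketch of the kinds of indices reported above: item difficulty as the proportion correct, discrimination as the corrected item-total correlation, and an overall reliability coefficient (Cronbach's alpha). The data are simulated dichotomous scores; the study itself used an item response theory model and cognitive interviews.

```python
# Classical difficulty, discrimination and reliability indices on simulated 0/1 scores.
import numpy as np

def item_indices(x):
    """x: persons x items matrix of 0/1 scores."""
    difficulty = x.mean(axis=0)                         # proportion correct per item
    total = x.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(x[:, j], total - x[:, j])[0, 1]     # corrected item-total correlation
        for j in range(x.shape[1])
    ])
    k = x.shape[1]
    alpha = (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / total.var(ddof=1))
    return difficulty, discrimination, alpha

rng = np.random.default_rng(2)
ability = rng.normal(size=201)                          # 201 examinees, as in the study's sample size
b = np.linspace(-2.0, 2.0, 20)                          # 20 hypothetical items
scores = (rng.random((201, 20)) < 1 / (1 + np.exp(-(ability[:, None] - b)))).astype(float)
diff, disc, alpha = item_indices(scores)
print(np.round(diff, 2), np.round(disc, 2), round(alpha, 2))
```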


2001 ◽  
Vol 46 (6) ◽  
pp. 629-632
Author(s):  
Robert J. Mislevy
