The Behavior of the Information Function in Relation to the Percentages of Missing Responses and Sample Size in Light of Item Response Theory

Author(s): بني عواد ، علي محمد العرسان
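
No abstract is shown for this entry. As background only, the information function named in the title is the standard IRT quantity below (written here for the 2PL model, with the usual notation); its inverse square root gives the standard error of the ability estimate, which is why missing responses and sample size matter for it:

```latex
% Fisher information of a 2PL item j at ability \theta, and the test information
I_j(\theta) = a_j^{2}\, P_j(\theta)\bigl[1 - P_j(\theta)\bigr],
\qquad
I(\theta) = \sum_{j=1}^{n} I_j(\theta),
\qquad
\mathrm{SE}(\hat{\theta}) = \frac{1}{\sqrt{I(\theta)}}
% where P_j(\theta) = 1 / (1 + \exp[-a_j(\theta - b_j)]) is the probability of a
% correct response. Items lost to missing responses drop out of the sum, so the
% test information decreases and the standard error of \hat{\theta} increases.
```
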
2017 ◽ Vol 6 (4) ◽ pp. 113
Author(s): Esin Yilmaz Kogar, Hülya Kelecioglu

The purpose of this research is first to estimate the item and ability parameters, and the standard errors of those parameters, obtained from Unidimensional Item Response Theory (UIRT), bifactor (BIF), and Testlet Response Theory (TRT) models in tests that include testlets, as the number of testlets, the number of independent items, and the sample size change, and then to compare the results. The PISA 2012 mathematics test was employed as the data collection tool, and 36 items were used to constitute six data sets containing different numbers of testlets and independent items. From these data sets, three sample sizes of 250, 500, and 1000 persons were drawn at random. Examination of the findings showed that the lowest mean error values were generally those obtained from UIRT, and that TRT yielded a lower mean estimation error than BIF. Under all conditions, the models that take local dependency into account provided better model-data fit than UIRT; there was generally no meaningful difference between BIF and TRT, and both models can be used for these data sets. When there was a meaningful difference between the two models, BIF generally yielded the better result. In addition, for each sample size and data set, the correlations among the item and ability parameters and among their errors were generally high.
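
The abstract does not name the estimation software. Purely as an illustration, the sketch below shows how a unidimensional 2PL (UIRT) and a bifactor (BIF) model could be fitted and compared in R with the mirt package; the response matrix `resp`, the testlet-to-item assignment, and all object names are hypothetical.

```r
# Minimal sketch, assuming a 0/1 response matrix `resp` with 9 items:
# items 1-6 form two 3-item testlets, items 7-9 are independent.
library(mirt)

# Unidimensional 2PL (UIRT): one general ability factor only
uirt <- mirt(resp, model = 1, itemtype = "2PL", verbose = FALSE)

# Bifactor model (BIF): each testlet item also loads on a specific factor;
# NA marks independent items that load on the general factor only
specific <- c(1, 1, 1, 2, 2, 2, NA, NA, NA)
bif <- bfactor(resp, model = specific, verbose = FALSE)

# Compare model-data fit (information criteria, likelihood-ratio test)
anova(uirt, bif)

# Item parameters and ability estimates with their standard errors
coef(bif, simplify = TRUE)
fscores(bif, full.scores.SE = TRUE)
```

A testlet response model can be viewed as a constrained special case of this bifactor structure (proportional slopes within each testlet), which is consistent with the two models often showing similar fit, as reported above.
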


2011 ◽ Vol 31 (11-12) ◽ pp. 1277-1290
Author(s): Jean-Benoit Hardouin, Sarah Amri, Mohand-Larbi Feddag, Véronique Sébille

Author(s): Riswan Riswan

The Item Response Theory (IRT) model contains one or more parameters, which are unknown and must therefore be estimated. This paper aims (1) to determine the effect of sample size (N) on the stability of the item parameter estimates, (2) to determine the effect of test length (n) on the stability of the examinee parameter estimates, (3) to determine the effect of the model on the stability of the item and examinee parameter estimates, (4) to determine the joint effect of sample size and test length on those estimates, and (5) to determine the joint effect of sample size, test length, and model on those estimates. This is a simulation study in which the latent trait (θ) is drawn from a standard normal population, θ ~ N(0, 1), for specific sample sizes (N) and test lengths (n) under the 1PL, 2PL, and 3PL models, with response data generated in WinGen. Item analysis was carried out using both the classical test theory approach and modern test theory (Item Response Theory), and the data were analyzed in R with the ltm package. The results show that the larger the sample size (N), the more stable the item parameter estimates, and the greater the test length (n), the more stable the ability (θ) estimates.
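
As a rough illustration of the kind of recovery check described above (not the paper's actual script), the sketch below simulates 2PL responses for a chosen N and n, refits the model with the ltm package in R, and correlates the recovered item parameters with the generating values; the chosen N, n, and object names are illustrative only.

```r
# Minimal sketch, assuming a 2PL generating model; N and n are illustrative.
library(ltm)
set.seed(2024)

N <- 1000                      # sample size (examinees)
n <- 30                        # test length (items)
a <- runif(n, 0.8, 2.0)        # true discriminations
b <- rnorm(n)                  # true difficulties
theta <- rnorm(N)              # latent trait ~ N(0, 1)

# P(correct) under the 2PL: logit = a_j * (theta_i - b_j)
p <- plogis(sweep(outer(theta, b, "-"), 2, a, "*"))
resp <- matrix(rbinom(N * n, 1, p), nrow = N, ncol = n)

# Refit the 2PL and compare recovered parameters with the true values
fit <- ltm(resp ~ z1)
est <- coef(fit)               # columns: Dffclt (b), Dscrmn (a)
cor(est[, "Dffclt"], b)        # closer to 1 => more stable difficulty recovery
cor(est[, "Dscrmn"], a)

# Ability estimates for the same respondents
head(factor.scores(fit, resp.patterns = resp)$score.dat)
```

Repeating this over several values of N and n (and over the 1PL and 3PL variants) gives the stability comparison the paper reports.
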


2022 ◽ Vol 12
Author(s): Feifei Huang, Zhe Li, Ying Liu, Jingan Su, Li Yin, ...

Educational assessment tests are often constructed from testlets because of the flexibility they offer in testing various aspects of cognitive activity and in sampling broad content. However, violation of the local item independence assumption is inevitable when tests are built from testlet items. In this study, simulations are conducted to evaluate the performance of item response theory models and testlet response theory models for both dichotomous and polytomous items in the context of equating tests composed of testlets. We also examine the impact of the testlet effect, the length of the testlet items, and the sample size on the estimation of item and person parameters. The results show that testlet response theory models consistently outperformed item response theory models in accuracy across the studies, which supports the benefit of using testlet response theory models when equating tests composed of testlets. The results also indicate that when the sample size is large, item response theory models performed similarly to testlet response theory models across all studies.
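
The abstract does not spell out the models compared. For orientation only, a common dichotomous 2PL testlet response model (in the spirit of Bradlow, Wainer, and Wang's formulation) can be written as below; the person-by-testlet term γ is what captures the local dependence that a plain item response theory model ignores, and its variance governs the size of the testlet effect varied in the simulations:

```latex
% 2PL testlet response model: person i, item j nested in testlet d(j)
P\bigl(Y_{ij}=1 \mid \theta_i\bigr)
  = \frac{1}{1 + \exp\!\bigl[-a_j\bigl(\theta_i - b_j - \gamma_{i\,d(j)}\bigr)\bigr]},
\qquad
\gamma_{i\,d(j)} \sim N\!\bigl(0, \sigma^{2}_{d(j)}\bigr)
% Setting every \gamma_{i d(j)} = 0 (zero testlet variance) recovers the ordinary
% 2PL, i.e., the item response theory model that ignores within-testlet dependence.
```
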


2015 ◽ Vol 9 (1) ◽ pp. 161
Author(s): Eman Rasmi Abed, Mohammad Mustafa Al-Absi, Yousef Abdelqader Abu shindi

The purpose of the present study is to develop a test that measures the numerical ability of education students. The study sample consisted of 504 students from 8 universities in Jordan. The final draft of the test contains 45 items distributed across 5 dimensions.

The results revealed acceptable psychometric properties of the test. Item parameters (difficulty, discrimination) were estimated using item response theory (IRT); the reliability of the test was assessed through Cronbach's alpha, the average inter-item correlation, and the test information function (IRT); and the validity of the test was assessed through arbitrators' views, factor analysis, the RMSR, and the Tanaka index.

The numerical ability test can be used to measure strengths and weaknesses in numerical ability among students of educational faculties, and to classify students into levels of numerical ability.
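
The abstract does not say which software produced these indices. Purely as a sketch, the reliability and IRT pieces could be reproduced in R with the ltm package roughly as follows; the scored response matrix `numab` is hypothetical.

```r
# Minimal sketch, assuming `numab` is a 504 x 45 matrix of scored (0/1) responses.
library(ltm)

# Internal consistency: Cronbach's alpha with a bootstrap confidence interval
cronbach.alpha(numab, CI = TRUE)

# Average inter-item correlation
r <- cor(numab, use = "pairwise.complete.obs")
mean(r[lower.tri(r)])

# 2PL item parameters: difficulty and discrimination for each of the 45 items
fit <- ltm(numab ~ z1)
coef(fit)

# Test information function over the ability range, and its plot
information(fit, range = c(-3, 3))
plot(fit, type = "IIC", items = 0)   # items = 0 draws the test information curve
```
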

