Linguistic Distance and Translation Differential Item Functioning on Trends in International Mathematics and Science Study Mathematics Assessment Items

2021, pp. 073428292110105
Author(s): Semirhan Gökçe, Giray Berberoğlu, Craig S. Wells, Stephen G. Sireci

The 2015 Trends in International Mathematics and Science Study (TIMSS) involved 57 countries and 43 different languages in assessing students' achievement in mathematics and science. The purpose of this study is to evaluate whether items and test scores are affected as the differences between language families and cultures increase. Using differential item functioning (DIF) procedures, we compared the consistency of students' performance across three combinations of languages and countries: (a) same language but different countries, (b) same country but different languages, and (c) different languages and different countries. The analyses covered the number of DIF items detected for all paired comparisons within each condition, the direction of DIF, the magnitude of DIF, and the differences between test characteristic curves. As the countries grew more distant with respect to culture and language family, the presence of DIF increased. The magnitude of DIF was greatest when both language and country differed, and smallest when the languages were the same but the countries differed. Results suggest that when TIMSS results are compared across countries, language- and country-specific differences, which could reflect cultural, curricular, or other differences, should be considered.
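The abstract does not spell out the computations behind these DIF flags, but the Mantel-Haenszel procedure commonly used in cross-country DIF work is standard. Below is a minimal Python sketch, with hypothetical function and variable names, that computes the MH common odds ratio for one dichotomous item across two country groups and converts it to the ETS delta scale often used to grade DIF magnitude.

```python
import numpy as np

def mh_dif(correct, group, total_score):
    """Mantel-Haenszel DIF statistic for one dichotomous item.

    correct     -- 0/1 responses to the studied item
    group       -- 0 = reference country, 1 = focal country
    total_score -- total test score used as the matching criterion
    """
    num = den = 0.0
    for s in np.unique(total_score):
        stratum = total_score == s
        n = stratum.sum()
        a = np.sum(stratum & (group == 0) & (correct == 1))  # reference, right
        b = np.sum(stratum & (group == 0) & (correct == 0))  # reference, wrong
        c = np.sum(stratum & (group == 1) & (correct == 1))  # focal, right
        d = np.sum(stratum & (group == 1) & (correct == 0))  # focal, wrong
        num += a * d / n
        den += b * c / n
    alpha_mh = num / den                 # common odds ratio across strata
    delta_mh = -2.35 * np.log(alpha_mh)  # ETS delta scale
    return alpha_mh, delta_mh
```

Under the usual ETS convention, |delta| < 1 is negligible (A), 1 to 1.5 moderate (B), and above 1.5 large (C) DIF; the sign gives the direction, i.e., which group the item favors.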

2016, Vol 4 (1), pp. 62
Author(s): Jose Q. Pedrajita

This study looked into differentially functioning items in a Chemistry Achievement Test. It also examined the effect of eliminating differentially functioning items on the content and concurrent validity, and the internal consistency reliability, of the test. Test scores of two hundred junior high school students matched on school type were subjected to Differential Item Functioning (DIF) analysis. One hundred students came from a public school; the other 100 were private school examinees. A descriptive-comparative research design was employed, using differential item functioning analysis together with validity and reliability analysis. Chi-Square, Distractor Response Analysis, Logistic Regression, and the Mantel-Haenszel Statistic were the methods used in the DIF analysis. A six-point scale ranging from inadequate to adequate was used to assess the content validity of the test. Pearson's r was used in the concurrent validity analysis. The KR-20 formula was used to estimate the internal consistency reliability of the test. The findings revealed the presence of differentially functioning items between the public and private school examinees. The DIF methods differed in the number of differentially functioning items identified; however, there was a high degree of correspondence between the Logistic Regression and Mantel-Haenszel results. After elimination of the differentially functioning items, the content validity, concurrent validity, and internal consistency reliability differed by the DIF method used. The content validity of the test ranged from slightly adequate to moderately adequate depending on the number of items retained. The concurrent validity of the test also differed, but all coefficients were positive and indicated a moderate relationship between the examinees' test scores and their GPA in Science III. Likewise, the internal consistency reliability of the test differed. The more differentially functioning items were eliminated, the lower the content and concurrent validity and internal consistency reliability of the test became. Eliminating differentially functioning items diminishes content and concurrent validity and internal consistency reliability, but the results could be used as a basis for enhancing all three by replacing the eliminated DIF items.
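The KR-20 coefficient used here has a closed form. As a minimal sketch (variable names ours), it can be computed from a persons-by-items 0/1 response matrix and re-computed after dropping the DIF-flagged items to quantify the reliability loss the study describes.

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson 20 reliability for a persons x items 0/1 matrix."""
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    var_total = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - np.sum(p * q) / var_total)

# Re-estimating after dropping DIF-flagged items (hypothetical indices):
# kr20(np.delete(responses, dif_items, axis=1))
```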


2001, Vol 27 (2)
Author(s): Pieter Schaap

The objective of this article is to present the results of an investigation into the item and test characteristics of two tests of the Potential Index Batteries (PIB) in terms of differential item functioning (DIF) and its effect on the test scores of different race groups. The English Vocabulary (Index 12) and Spelling (Index 22) tests of the PIB were analysed for white, black and coloured South Africans. Item response theory (IRT) methods were used to identify items that function differentially (DIF) for the white, black and coloured race groups.
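The abstract does not name the specific IRT model fitted. As a sketch, under the common two-parameter logistic (2PL) model the probability of a correct response to item i for group g is

```latex
P_{ig}(\theta) = \frac{1}{1 + \exp\!\left[-a_{ig}\,(\theta - b_{ig})\right]},
\qquad
\text{DIF for item } i \iff (a_{i,\mathrm{ref}}, b_{i,\mathrm{ref}}) \neq (a_{i,\mathrm{foc}}, b_{i,\mathrm{foc}})
```

i.e., an item functions differentially when examinees of equal ability theta but different group membership have different success probabilities.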


2020, Vol 5 (1), pp. 51-60
Author(s): Elizar Elizar, Cut Khairunnisak

Mathematics assessments should be designed for all students, regardless of their background or gender. Rasch analysis, developed within Item Response Theory (IRT), is one of the primary tools for analysing the inclusiveness of a mathematics assessment. However, mathematics test development has been dominated by Classical Test Theory (CTT). This preliminary study evaluates a mathematics comprehensive test and aims to demonstrate the use of Rasch analysis by assessing the appropriateness of the test for measuring students' mathematical understanding. Data were collected from one cycle of the mathematics comprehensive test involving 48 undergraduate students of a mathematics education department. Rasch analysis was conducted using ACER ConQuest 4 software to assess item difficulty and differential item functioning (DIF). The findings show that the item related to geometry was the easiest question for students, while the item concerning calculus was the hardest. The test is viable for measuring students' mathematical understanding, as gender DIF was examined for each of the test items and no evidence of DIF was found. The assessment showed that the test was inclusive. Further applications of Rasch analysis should be conducted to create thorough and robust mathematics assessments.
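For context, the dichotomous Rasch model underlying the analysis gives the probability that student j answers item i correctly from the student's ability and the item's difficulty alone:

```latex
P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}
```

A hard item, such as the calculus item here, corresponds to a large b_i and shifts the curve to the right for all students alike; gender DIF would appear as a group-specific b_i, which the analysis did not find.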


2021, Vol 12
Author(s): Linyu Liao, Don Yao

Differential Item Functioning (DIF) analysis is an indispensable methodology for detecting item and test bias in language testing. This study investigated grade-related DIF in the General English Proficiency Test-Kids (GEPT-Kids) listening section. Quantitative data were test scores collected from 791 test takers (Grade 5 = 398; Grade 6 = 393) from eight Chinese-speaking cities, and qualitative data were expert judgments collected from two primary school English teachers in Guangdong province. Two R packages, "difR" and "difNLR", were used to perform five types of DIF analysis (two-parameter item response theory [2PL IRT] based Lord's chi-square and Raju's area tests, and the Mantel-Haenszel [MH], logistic regression [LR], and nonlinear regression [NLR] DIF methods) on the test scores, which together identified 16 DIF items. The ShinyItemAnalysis package was employed to draw item characteristic curves (ICCs) for the 16 items in RStudio, which revealed four different types of DIF effect. In addition, the two experts identified reasons or sources for the DIF effect of four items. The study therefore may shed some light on the sustainable development of test fairness in language testing: methodologically, a mixed-methods sequential explanatory design was adopted that can guide further test fairness research; practically, the results indicate that a DIF flag does not necessarily imply bias. Instead, it serves as an alarm that calls test developers' attention to further examine the appropriateness of the flagged items.
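The study ran these analyses in R with difR and difNLR. As a language-neutral sketch of one of the five methods, the logistic-regression procedure regresses the item response on the matching score, group, and their interaction, and flags DIF when the group terms improve model fit. The helper below (names ours) illustrates this with statsmodels in Python.

```python
import numpy as np
import statsmodels.api as sm

def lr_dif(correct, group, total_score):
    """Logistic-regression DIF test for one item (Swaminathan-Rogers style).

    Compares a matching-score-only model against one that adds group and
    score-by-group terms; a large improvement in fit signals DIF.
    """
    X0 = sm.add_constant(np.column_stack([total_score]))
    X1 = sm.add_constant(np.column_stack([total_score, group,
                                          total_score * group]))
    m0 = sm.Logit(correct, X0).fit(disp=0)
    m1 = sm.Logit(correct, X1).fit(disp=0)
    lr_stat = 2 * (m1.llf - m0.llf)  # ~ chi-square with 2 df under no DIF
    return lr_stat
```

The 2-df likelihood-ratio statistic jointly tests uniform DIF (the group main effect) and nonuniform DIF (the score-by-group interaction).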


2009, Vol 104 (2), pp. 439-446
Author(s): J. Daniel House

Recent mathematics assessment findings indicate that Native American students tend to score below students of the ethnic majority. Findings suggest that students' beliefs about mathematics are significantly related to achievement outcomes. This study examined relations between self-beliefs and mathematics achievement for a national sample of 130 Grade 8 Native American students ( M age = 14.2 yr., SD = 0.5) from the Trends in International Mathematics and Science Study (TIMSS) 2003 United States sample. Multiple regression indicated several significant relations of mathematics beliefs with achievement and accounted for 26.7% of the variance in test scores. Students who earned high test scores tended to hold more positive beliefs about their ability to learn mathematics quickly, while students who earned low scores expressed negative beliefs about their ability to learn new mathematics topics.
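As a minimal sketch of the kind of model reported, with placeholder data standing in for the TIMSS belief ratings (everything below is hypothetical except the 26.7% figure the study reports):

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data: rows = students, columns = self-belief ratings
# (e.g., "I learn things quickly in mathematics"). Purely illustrative.
rng = np.random.default_rng(0)
beliefs = rng.normal(size=(130, 4))
math_score = beliefs @ np.array([4.0, 3.0, 1.0, 0.5]) \
             + rng.normal(scale=8.0, size=130)

X = sm.add_constant(beliefs)          # predictors: belief ratings
fit = sm.OLS(math_score, X).fit()     # outcome: achievement scores
print(fit.rsquared)                   # study reports R^2 = 0.267
```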


Diagnostica, 2021, Vol 67 (1), pp. 13-23
Author(s): Ariana Garrote, Elisabeth Moser Opitz

Abstract. In this study, the MARKO-D test (Mathematik- und Rechenkonzepte im Vorschulalter–Diagnose; mathematics and arithmetic concepts at preschool age–diagnosis) was trialled with a sample of children from German-speaking Switzerland ( N = 555) in the first and second kindergarten year, and it was analysed whether the age norms of the German sample can be transferred to Switzerland. In addition, the test was examined for measurement invariance over time with a subsample ( n = 87). The results of the unidimensional Rasch model show that the instrument is suitable for Switzerland. Test performance, however, depends on kindergarten attendance. For Switzerland, norms per kindergarten half-year would therefore have to be used in addition to age norms. The differential item functioning analysis showed that 17 of 55 items are affected by substantial measurement variance over time. To be usable for longitudinal studies, the instrument would need further development.
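In Rasch terms, measurement invariance over time means each item is equally difficult at both measurement points. The exact flagging criterion is not given in the abstract, but schematically an item shows DIF over time when its difficulty shifts by more than some tolerance epsilon:

```latex
\text{invariance for item } i: \; b_i^{(t_1)} = b_i^{(t_2)},
\qquad
\text{DIF over time} \iff \bigl|\, b_i^{(t_1)} - b_i^{(t_2)} \,\bigr| > \epsilon
```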


2019, Vol 35 (6), pp. 823-833
Author(s): Desiree Thielemann, Felicitas Richter, Bernd Strauss, Elmar Braehler, Uwe Altmann, ...

Abstract. Most instruments for the assessment of disordered eating were developed and validated in young female samples. However, they are often used in heterogeneous general population samples. Therefore, brief instruments of disordered eating should assess the severity of disordered eating equally well between individuals of different gender, age, body mass index (BMI), and socioeconomic status (SES). Differential item functioning (DIF) of two brief instruments of disordered eating (SCOFF, Eating Attitudes Test [EAT-8]) was modeled in a representative sample of the German population ( N = 2,527) using a multigroup item response theory (IRT) and a multiple-indicator multiple-cause (MIMIC) structural equation model (SEM) approach. No DIF by age was found in either questionnaire. Three items of the EAT-8 showed DIF across gender, indicating that females are more likely to agree than males, given the same severity of disordered eating. One item of the EAT-8 revealed slight DIF by BMI. DIF with respect to the SCOFF seemed to be negligible. Both questionnaires are equally fair across people of different age and SES. The DIF by gender that we found with respect to the EAT-8 as a screening instrument may also be reflected in the use of different cutoff values for men and women. In general, both brief instruments assessing disordered eating revealed their strengths and limitations concerning test fairness for different groups.
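Schematically, the MIMIC approach mentioned here lets a covariate x (gender, age, BMI, or SES) affect an item both indirectly through the latent severity eta and directly; a nonzero direct effect on item i's response indicates DIF:

```latex
\eta = \gamma\, x + \zeta,
\qquad
y_i^{*} = \lambda_i\, \eta + \kappa_i\, x + \varepsilon_i,
\qquad
\text{DIF for item } i \iff \kappa_i \neq 0
```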

