International assessments as the comparative desires and the distributions of differences: infrastructures and coloniality

Author(s):  
Thomas S. Popkewitz


2021 ◽  
Vol 9 (1)
Author(s):  
Anastasios Karakolidis ◽  
Alice Duggan ◽  
Gerry Shiel ◽  
Joanne Kiniry

An amendment to this paper has been published and can be accessed via the original article.


2015 ◽  
Vol 5 (2) ◽  
pp. 49-54
Author(s):  
Paul Ichim ◽  
Iuliana Barna ◽  
Mircea Dragu

According to some international assessments, the rate of autism spectrum disorder is 1 in 68 children. There are approximately 67 million autistic people in the world, and 4 out of 5 affected children are boys. The alarmingly increasing rate, the impossibility of preventing this disorder (as its causes are not completely clear), the diversity of its symptoms, the precarious social integration, and the large number of ineffectual therapies are the key elements that led us to pursue this research. The aim of this study is to demonstrate that multisystemic therapy (MST) in water and cognitive therapy play an important role in the multidisciplinary process of recovering and integrating autistic children into society. Keywords: autism, deviant behavior, alternative therapy, psychomotor education.


2015 ◽  
Vol 117 (1) ◽  
pp. 1-28 ◽  
Author(s):  
Kadriye Ercikan ◽  
Wolff-Michael Roth ◽  
Mustafa Asil

Background/Context Two key uses of international assessments of achievement have been (a) comparing country performances for identifying the countries with the best education systems and (b) generating insights about effective policy and practice strategies that are associated with higher learning outcomes. Do country rankings really reflect the quality of education in different countries? What are the fallacies of simply looking to higher performing countries to identify strategies for improving learning in our own countries? Purpose In this article we caution against (a) using country rankings as indicators of better education and (b) using correlates of higher performance in high ranking countries as a way of identifying strategies for improving education in our home countries. We elaborate on these cautions by discussing methodological limitations and by comparing five countries that scored very differently on the reading literacy scale of the 2009 PISA assessment. Population We use PISA 2009 reading assessment for five countries/jurisdictions as examples to elaborate on the problems with interpretation of international assessments: Canada, Shanghai-China, Germany, Turkey, and the US, i.e., countries from three continents that span the spectrum of high, average, and low ranking countries and jurisdictions. Research Design Using the five jurisdiction data in an exemplary fashion, our analyses focus on the interpretation of country rankings and correlates of reading performance within countries. We first examine the profiles of these jurisdictions with respect to high school graduation rates, school climate, student attitudes and disciplinary climate and how these variables are related to reading performance rankings. We then examine the extent to which two predictors of reading performance, reading enjoyment and out of school enrichment activities, may be responsible for higher performance levels. 
Conclusions This article highlights the importance of establishing comparability of test scores and data across jurisdictions as the first step in making international comparisons based on international assessments such as PISA. When it comes to interpreting jurisdiction rankings in international assessments, researchers need to be aware that there is a variegated and complex picture of the relations between reading achievement ranking and rankings on a number of factors that one might think to be related, individually or in combination, to quality of education. This makes it highly questionable to use reading score rankings as a criterion for adopting educational policies and practices of other jurisdictions. Furthermore, reading scores vary greatly for different student sub-populations within a jurisdiction – e.g., gender, language, and cultural groups – that are all part of the same education system in a given jurisdiction. Identifying effective strategies for improving education using correlates of achievement in high performing countries should also be done with caution. Our analyses present evidence that two factors, reading enjoyment and out-of-school enrichment activities, cannot be considered solely responsible for higher performance levels. The analyses suggest that the PISA 2009 results are variegated with regard to attitudes towards reading and out-of-school learning experience, rather than exhibiting clear differences that might explain the different performances among the five jurisdictions.


Author(s):  
Dave Bartram ◽  
Fons J. R. van de Vijver

Chapter 30 focuses on issues relating to norm-referenced measures, in particular the use of norms in international assessments. This chapter highlights some of the complex issues involved in norming scores. While the initial sections of the chapter review some general issues of norm construction and use, this is not a chapter on the mechanics of how to produce norms. Rather, it focuses on issues of when and how to use norms, what aggregations of samples to base them on, and how norm-referenced scores should be interpreted. In particular, it considers issues relating to the development and use of international norms. Test norms are often essential for stakeholders to understand the meaning of test scores by providing information about the standing of the test taker relative to other members of the population. Finally, the chapter notes that culturally related variance may reflect either measurement bias or effects of cultural style.
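The norm-referencing the chapter discusses can be sketched concretely. The snippet below places a raw score relative to a norm group by computing a z-score and an approximate percentile; the function name, the sample values, and the assumption that the norm distribution is roughly normal are all illustrative, not taken from the chapter.

```python
from statistics import NormalDist, mean, stdev

def norm_referenced_score(raw_score, norm_sample):
    """Locate a raw score relative to a norm group.

    Returns a z-score and an approximate percentile, assuming the
    norm distribution is roughly normal (an illustrative assumption).
    """
    mu, sigma = mean(norm_sample), stdev(norm_sample)
    z = (raw_score - mu) / sigma
    percentile = NormalDist().cdf(z) * 100
    return z, percentile

# A test taker scoring 62 against a small hypothetical norm sample
norms = [45, 50, 52, 55, 58, 60, 61, 63, 65, 70]
z, pct = norm_referenced_score(62, norms)
```

The choice of norm sample is exactly the aggregation question the chapter raises: the same raw score yields a different z-score and percentile against a national norm group than against a pooled international one.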


2019 ◽  
Vol 44 (6) ◽  
pp. 752-781
Author(s):  
Michael O. Martin ◽  
Ina V.S. Mullis

International large-scale assessments of student achievement, such as the International Association for the Evaluation of Educational Achievement's Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study, and the Organization for Economic Cooperation and Development's Program for International Student Assessment, that have come to prominence over the past 25 years owe a great deal in methodological terms to pioneering work by the National Assessment of Educational Progress (NAEP). Using TIMSS as an example, this article describes how a number of core techniques, such as matrix sampling, student population sampling, item response theory scaling with population modeling, and resampling methods for variance estimation, have been adapted and implemented in an international context and are fundamental to the international assessment effort. In addition to the methodological contributions of NAEP, this article illustrates how the large-scale international assessments go beyond measuring student achievement by representing important aspects of community, home, school, and classroom contexts in ways that can be used to address issues of importance to researchers and policymakers.
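One of the resampling methods named above can be illustrated with a minimal sketch. The delete-one jackknife below estimates the variance of a sample mean; it is a simplified stand-in, not the paired jackknife repeated replication over school sampling zones that operational assessments like TIMSS actually use, and the data are hypothetical.

```python
from statistics import mean

def jackknife_variance(values):
    """Delete-one jackknife estimate of the variance of the sample mean."""
    n = len(values)
    theta_hat = mean(values)
    # Recompute the statistic with each observation removed in turn
    replicates = [mean(values[:i] + values[i + 1:]) for i in range(n)]
    return (n - 1) / n * sum((t - theta_hat) ** 2 for t in replicates)

# For the mean, the delete-one jackknife reproduces the familiar s^2 / n
scores = [1, 2, 3, 4, 5]
var_est = jackknife_variance(scores)  # → 0.5
```

The appeal of resampling here is that the same recipe applies to statistics far more complicated than a mean (e.g., scale scores from a population model), for which no closed-form variance under a complex sampling design is available.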

