Resultados brasileiros no PISA e seus (des)usos

2017 ◽  
Vol 28 (68) ◽  
pp. 344 ◽  
Author(s):  
Maria de Lourdes Haywanon Santos Araújo ◽  
Robinson Moreira Tenório

Brazilian results in PISA and its (mis)uses

The objective of this study was to analyze how the results of the Program for International Student Assessment (PISA) have been used in the Brazilian educational context. The literature review pointed to assessment as a fundamental factor for improving the quality of education, provided an overview of research on PISA in Brazil, and prompted discussion of the need to use the results of large-scale assessments. Based on documentary analysis and semi-structured interviews, it was possible not only to present a study on the use of PISA results in the country but also to establish categories of use, such as Misuse and Non-Use, presenting the possibilities and difficulties of such use and the role of administrators in this process.

Keywords: PISA; Use of Results; Educational Assessment; Public Policies.

2017 ◽  
Vol 28 (68) ◽  
pp. 512
Author(s):  
Lenice Medeiros ◽  
Alexandre Jaloto ◽  
André Vitor Fernandes dos Santos

The science field in international large-scale assessments

This article addresses the pedagogical aspects of international assessments that include the field of Science, focusing especially on the most recent editions of the Program for International Student Assessment (PISA) and the Third Regional Comparative and Explanatory Study (TERCE). The conceptual and procedural foundations of these studies are presented and discussed, along with selected results on the performance of Brazilian students. On this basis, the article examines the limits and possibilities of using these data to formulate educational policies that affect science teaching, such as the Common National Curricular Base (BNCC) and the assessments provided for in the National Education Plan (PNE).

Keywords: Large-Scale Assessment; Science Education; PISA; TERCE.


2017 ◽  
Vol 28 (68) ◽  
pp. 478
Author(s):  
Andrea Mara Vieira

Chords and dissonances of the scientific literacy proposed by PISA 2015

Our purpose is to investigate whether the academic concept of scientific literacy is in tune with the concept stated in the documents of the Programme for International Student Assessment (PISA) and in educational standards. Despite the complexity and conceptual polysemy surrounding the concept of scientific literacy, we develop a theoretical-comparative analysis of the concept as conceived by specialists against the concept of scientific literacy underlying the PISA 2015 assessment framework, also considering the provisions of public educational policies. In the end, we identify fewer chords and, for various reasons, more dissonances, which may contribute to a reflection on the validity and relevance of PISA as an assessment instrument, as well as on the type of learning to be ensured by our educational system.

Keywords: Scientific Literacy; PISA; Public Policies; Large-Scale Assessment.


Author(s):  
Dani Gamerman ◽  
Tufi M. Soares ◽  
Flávio Gonçalves

This article discusses the use of a Bayesian model that incorporates differential item functioning (DIF) in analysing whether cultural differences may affect the performance of students from different countries on the various test items which make up the OECD's Programme for International Student Assessment (PISA) test of mathematics ability. The PISA tests in mathematics and other subjects are used to compare the educational attainment of fifteen-year-old students in different countries. The article first provides background on PISA, DIF, and item response theory (IRT) before describing a hierarchical three-parameter logistic model for the probability of a correct response on an individual item, used to determine the extent of DIF remaining in the 2003 mathematics test. The results of the Bayesian analysis illustrate the importance of appropriately accounting for all sources of heterogeneity present in educational testing and highlight the advantages of the Bayesian paradigm when applied to large-scale educational assessment.
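To make the kind of model at stake concrete, a three-parameter logistic (3PL) item response function extended with a country-specific DIF term can be written as follows. The notation is ours and is only an illustrative sketch, not the authors' exact hierarchical specification:

```latex
% Illustrative 3PL item response function with a country-level DIF shift
% (our notation, not the authors' exact model). theta_i: ability of student i;
% a_j, b_j, c_j: discrimination, difficulty, and guessing parameters of item j;
% d_{jk}: DIF effect of item j in country k (d_{jk} = 0 for all k means the
% item functions identically across countries).
P(Y_{ijk} = 1 \mid \theta_i)
  = c_j + (1 - c_j)\,\frac{1}{1 + \exp\{-a_j(\theta_i - b_j - d_{jk})\}}
```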


Methodology ◽  
2007 ◽  
Vol 3 (4) ◽  
pp. 149-159 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch ◽  
Ulrich Trautwein ◽  
Frauke Kreuter ◽  
Jan Marten Ihme

Abstract. In large-scale educational assessments such as the Third International Mathematics and Science Study (TIMSS) or the Program for International Student Assessment (PISA), sizeable numbers of test administrators (TAs) are needed to conduct the assessment sessions in the participating schools. TA training sessions are run and administration manuals are compiled with the aim of ensuring standardized, comparable assessment situations in all student groups. To date, however, there has been no empirical investigation of the effectiveness of these standardizing efforts. In the present article, we probe for systematic TA effects on mathematics achievement and sample attrition in a student achievement study. Multilevel analyses for cross-classified data using Markov Chain Monte Carlo (MCMC) procedures were performed to separate the variance that can be attributed to differences between schools from the variance associated with TAs. After controlling for school effects, only a very small, nonsignificant proportion of the variance in mathematics scores and response behavior was attributable to the TAs (< 1%). We discuss practical implications of these findings for the deployment of TAs in educational assessments.
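A minimal cross-classified formulation of the kind estimated here can be sketched as follows; the symbols are our own illustrative choices:

```latex
% Minimal cross-classified random-effects model (our notation): achievement
% y of student i, nested simultaneously in school s and test administrator t.
y_{i(s,t)} = \beta_0 + u_s + v_t + e_{i(s,t)}, \qquad
u_s \sim N(0, \sigma_u^2), \quad v_t \sim N(0, \sigma_v^2), \quad
e_{i(s,t)} \sim N(0, \sigma_e^2)

% The TA share of variance examined in the article corresponds to
\rho_{\mathrm{TA}} = \frac{\sigma_v^2}{\sigma_u^2 + \sigma_v^2 + \sigma_e^2}
```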


2021 ◽  
Vol 33 (1) ◽  
pp. 139-167
Author(s):  
Andrés Strello ◽  
Rolf Strietholt ◽  
Isa Steinmann ◽  
Charlotte Siepmann

Abstract. Research to date on the effects of between-school tracking on inequalities in achievement and on performance has been inconclusive. A possible explanation is that different studies used different data, focused on different domains, and employed different measures of inequality. To address this issue, we used all accumulated data collected in the three largest international assessments—PISA (Programme for International Student Assessment), PIRLS (Progress in International Reading Literacy Study), and TIMSS (Trends in International Mathematics and Science Study)—in the past 20 years in 75 countries and regions. Following the seminal paper by Hanushek and Wößmann (2006), we combined data from a total of 21 cycles of primary and secondary school assessments to estimate difference-in-differences models for different outcome measures. We synthesized the effects using a meta-analytical approach and found strong evidence that tracking increased social achievement gaps, that it had smaller but still significant effects on dispersion inequalities, and that it had rather weak effects on educational inadequacies. In contrast, we did not find evidence that tracking increased performance levels. Beyond these substantive findings, our study illustrated that the effect estimates varied considerably across the datasets used, because the small number of countries available as units of analysis is a natural limitation. This finding casts doubt on the reproducibility of findings based on single international datasets and suggests that researchers should use different data sources to replicate analyses.
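In the spirit of the Hanushek and Wößmann (2006) design the abstract cites, the country-level difference-in-differences can be sketched as follows; this is our hedged reconstruction, not the authors' exact estimating equation:

```latex
% Country-level difference-in-differences (our sketch): I_c is an inequality
% or performance measure in country c at primary (prim) and secondary (sec)
% level; T_c = 1 if country c tracks students between the two measurements.
I_c^{\mathrm{sec}} - I_c^{\mathrm{prim}} = \alpha + \beta\, T_c + \varepsilon_c
% beta > 0 indicates that tracking widens the inequality measure; taking the
% primary-secondary difference nets out stable country characteristics.
```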


2011 ◽  
Vol 10 (4) ◽  
pp. 611-622 ◽  
Author(s):  
Radhika Gorur

In this article, the author tells the story of her search for appropriate tools to conceptualise policy work. She had set out to explore the relationship between the Programme for International Student Assessment (PISA) of the Organisation for Economic Co-operation and Development (OECD) and Australia's education policy, but early interview data forced her to reconsider her research question. The plethora of available models of policy did not satisfactorily accommodate her growing understanding of the messiness and complexity of policy work. On the basis of interviews with 18 policy actors, including former OECD officials, PISA analysts and bureaucrats, as well as documentary analysis of government reports and ministerial media releases, she suggests that the concept of ‘assemblage’ provides the tools to better understand the messy processes of policy work. The relationship between PISA and national policy is of interest to many scholars in Europe, making this study widely relevant. An article that argues for the unsettling of tidy accounts of knowledge making in policy can hardly afford to obscure the untidiness of its own assemblage. Accordingly, this article is somewhat unconventional in its presentation, and attempts to take the reader into the messiness of the research world as well as the policy world. Implicit in this presentation is the suggestion that both policy work and research work are ongoing attempts to find order and coherence through the cobbling together of a variety of resources.


2019 ◽  
Vol 44 (6) ◽  
pp. 752-781
Author(s):  
Michael O. Martin ◽  
Ina V.S. Mullis

International large-scale assessments of student achievement such as the International Association for the Evaluation of Educational Achievement's Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS), and the Organization for Economic Cooperation and Development's Program for International Student Assessment (PISA), which have come to prominence over the past 25 years, owe a great deal in methodological terms to pioneering work by the National Assessment of Educational Progress (NAEP). Using TIMSS as an example, this article describes how a number of core techniques, such as matrix sampling, student population sampling, item response theory scaling with population modeling, and resampling methods for variance estimation, have been adapted and implemented in an international context and are fundamental to the international assessment effort. In addition to the methodological contributions of NAEP, this article illustrates how large-scale international assessments go beyond measuring student achievement by representing important aspects of community, home, school, and classroom contexts in ways that can be used to address issues of importance to researchers and policymakers.
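As an illustration of the resampling idea mentioned above, a minimal delete-one jackknife for the standard error of a mean might look like the sketch below. It is a generic illustration, not the zone-based JK2 procedure actually used in TIMSS:

```python
import numpy as np

def jackknife_se(values: np.ndarray) -> float:
    """Delete-one jackknife standard error of the sample mean.

    Operational assessments use zone-based variants (e.g., JK2 with paired
    sampling zones), but the resampling logic is the same: recompute the
    statistic with parts of the sample removed and measure how much it moves.
    """
    n = len(values)
    # Statistic recomputed with each observation left out once.
    replicates = np.array([np.delete(values, i).mean() for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((replicates - replicates.mean()) ** 2))

scores = np.random.default_rng(0).normal(500, 100, size=200)
# For the mean, this reproduces the textbook SE: scores.std(ddof=1) / sqrt(n).
print(jackknife_se(scores))
```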


2020 ◽  
pp. 249-263
Author(s):  
Luisa Araújo ◽  
Patrícia Costa ◽  
Nuno Crato

Abstract. This chapter provides a short description of what the Programme for International Student Assessment (PISA) measures and how it measures it. First, it details the concepts associated with the measurement of student performance and the concepts associated with capturing student and school characteristics and explains how they compare with some other International Large-Scale Assessments (ILSA). Second, it provides information on the assessment of reading, the main domain in PISA 2018. Third, it provides information on the technical aspects of the measurements in PISA. Lastly, it offers specific examples of PISA 2018 cognitive items, corresponding domains (mathematics, science, and reading), and related performance levels.


2021 ◽  
Author(s):  
Alexander Robitzsch ◽  
Oliver Lüdtke

International large-scale assessments (LSAs) such as the Programme for International Student Assessment (PISA) provide important information about the distribution of student proficiencies across a wide range of countries. The repeated assessments of these content domains offer policymakers important information for evaluating educational reforms and receive considerable attention from the media. Furthermore, the analytical strategies employed in LSAs often define methodological standards for applied researchers in the field. Hence, it is vital to critically reflect on the conceptual foundations of analytical choices in LSA studies. This article discusses methodological challenges in selecting and specifying the scaling model used to obtain proficiency estimates from the individual student responses in LSA studies. We distinguish design-based inference from model-based inference. It is argued that for the official reporting of LSA results, design-based inference should be preferred because it allows for a clear definition of the target of inference (e.g., country mean achievement) and is less sensitive to specific modeling assumptions. More specifically, we discuss five analytical choices in the specification of the scaling model: (1) the specification of the functional form of item response functions, (2) the treatment of local dependencies and multidimensionality, (3) the consideration of test-taking behavior in estimating student ability, and the role of country differential item functioning (DIF) for (4) cross-country comparisons and (5) trend estimation. This article's primary goal is to stimulate discussion about recently implemented changes and suggested refinements of the scaling models in LSA studies.
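To make the design-based target of inference concrete, here is a minimal sketch (our own illustration, with hypothetical column names pv1–pv5 and w) of how a country mean is combined across plausible values, together with the between-imputation variance term from Rubin's rules. Operational LSA practice additionally estimates the sampling variance with replicate weights (e.g., BRR), which is omitted here:

```python
import numpy as np
import pandas as pd

def country_mean_from_pvs(df: pd.DataFrame, pv_cols: list[str],
                          weight_col: str = "w") -> tuple[float, float]:
    """Weighted country mean over M plausible values plus the
    between-imputation variance term from Rubin's rules.
    Sampling variance (normally from replicate weights) is omitted."""
    w = df[weight_col].to_numpy()
    means = np.array([np.average(df[c].to_numpy(), weights=w)
                      for c in pv_cols])
    m = len(pv_cols)
    point = means.mean()                 # final point estimate
    b = means.var(ddof=1)                # between-imputation variance
    imput_var = (1 + 1 / m) * b          # Rubin's correction factor
    return point, imput_var

# Hypothetical toy data: five plausible values and a sampling weight.
rng = np.random.default_rng(1)
toy = pd.DataFrame({f"pv{i}": rng.normal(500, 90, 300) for i in range(1, 6)})
toy["w"] = rng.uniform(0.5, 2.0, 300)
print(country_mean_from_pvs(toy, [f"pv{i}" for i in range(1, 6)]))
```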


Methodology ◽  
2021 ◽  
Vol 17 (1) ◽  
pp. 22-38
Author(s):  
Jason C. Immekus

Within large-scale international studies, the utility of survey scores to yield meaningful comparative data hinges on the degree to which their item parameters demonstrate measurement invariance (MI) across compared groups (e.g., cultures). To date, methodological challenges have restricted the ability to test the measurement invariance of these instruments' item parameters in the presence of many groups (e.g., countries). This study compares multigroup confirmatory factor analysis (MGCFA) and the alignment method to investigate the MI of the schoolwork-related anxiety survey across gender groups within the 35 Organisation for Economic Co-operation and Development (OECD) countries (gender × country) of the Programme for International Student Assessment 2015 study. The predictive validity of MGCFA- and alignment-based factor scores for subsequent mathematics achievement is then examined. Considerations related to invariance testing of noncognitive instruments with many groups are discussed.
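For reference, the invariance levels at stake can be written compactly; the notation below is generic, assumed for illustration, and not tied to the PISA anxiety items:

```latex
% Linear factor model for item y of person i in group g (generic notation):
y_{ig} = \nu_g + \lambda_g \eta_{ig} + \varepsilon_{ig}
% Configural MI: the same pattern of (non)zero loadings holds in every group.
% Metric MI:     \lambda_g = \lambda for all g (equal loadings across groups).
% Scalar MI:     \lambda_g = \lambda and \nu_g = \nu for all g.
% The alignment method drops the exact equality constraints and instead
% searches for the solution minimizing total noninvariance in
% (\nu_g, \lambda_g), which scales to many groups such as gender x country.
```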

