Enhancing Scientific Literacy in Thailand

2013 ◽  
Vol 3 (1) ◽  
pp. 86-98 ◽  
Author(s):  
Chokchai Yuenyong

Globally, literacy in science has become a fundamental focus of public education. The term ‘scientific literacy’, however, attracts a diversity of views. A common theme in the literature is that it relates to being able to critique scientific discussions. Students' scientific literacy has been assessed on a large scale by the Programme for International Student Assessment (PISA). Thailand participated in the PISA scheme every three years from 2000 to 2009. The results indicate that Thailand's performance decreased over the period and was below average. This has led to an increased focus on improving performance and scores, and a desire to strengthen science education and science teaching in order to enhance scientific literacy in Thailand. Science educators in universities, the Institute for the Promotion of Teaching Science and Technology (IPST), the Ministry of Education and others have organized various projects and research to improve scientific literacy. Research projects have focused on developing teachers' quality in science teaching and have gone on to consider the nature of science, contexts of science, socio-scientific issues and the relations between science, technology and society. This article discusses the strategies that have been embarked upon to enhance scientific literacy in Thailand.

2017 ◽  
Vol 28 (68) ◽  
pp. 478
Author(s):  
Andrea Mara Vieira

<p><strong>Chords and dissonances of scientific literacy proposed by PISA 2015</strong></p><p>Our purpose is to investigate whether or not the academic concept of scientific literacy is in tune with the one stated in the documents of the Programme for International Student Assessment (PISA) and in educational standards. Despite all the complexity and conceptual polysemy surrounding the concept of scientific literacy, we developed a theoretical-comparative analysis of this concept as conceived by experts, comparing it with the concept of scientific literacy laid down in the assessment framework of PISA 2015 and also considering the provisions of public educational policies. In the end, we identified fewer chords and, for various reasons, more dissonances, which can serve as a contribution to a reflection on the validity and relevance of PISA as an assessment instrument, as well as on the type of learning to be ensured by our educational system.</p><p><strong>Keywords:</strong> Scientific Literacy; PISA; Public Policies; Large-Scale Assessment.</p>


Methodology ◽  
2007 ◽  
Vol 3 (4) ◽  
pp. 149-159 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch ◽  
Ulrich Trautwein ◽  
Frauke Kreuter ◽  
Jan Marten Ihme

Abstract. In large-scale educational assessments such as the Third International Mathematics and Science Study (TIMSS) or the Program for International Student Assessment (PISA), sizeable numbers of test administrators (TAs) are needed to conduct the assessment sessions in the participating schools. TA training sessions are run and administration manuals are compiled with the aim of ensuring standardized, comparable assessment situations in all student groups. To date, however, there has been no empirical investigation of the effectiveness of these standardizing efforts. In the present article, we probe for systematic TA effects on mathematics achievement and sample attrition in a student achievement study. Multilevel analyses for cross-classified data using Markov Chain Monte Carlo (MCMC) procedures were performed to separate the variance that can be attributed to differences between schools from the variance associated with TAs. After controlling for school effects, only a very small, nonsignificant proportion of the variance in mathematics scores and response behavior was attributable to the TAs (< 1%). We discuss practical implications of these findings for the deployment of TAs in educational assessments.
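The key quantity in this abstract, the share of variance attributable to TAs after partitioning, follows directly from the estimated variance components. A minimal sketch (the numbers are illustrative, not the study's estimates):

```python
def ta_variance_share(var_school, var_ta, var_residual):
    """Proportion of total outcome variance attributable to test
    administrators, given variance components from a cross-classified
    multilevel model (schools and TAs as crossed random effects)."""
    total = var_school + var_ta + var_residual
    return var_ta / total

# Illustrative components chosen so the TA share is under 1%,
# the order of magnitude the study reports
share = ta_variance_share(var_school=25.0, var_ta=0.5, var_residual=74.5)  # -> 0.005
```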


2020 ◽  
Vol 12 (2) ◽  
pp. 104
Author(s):  
Siti Hannah Padliyyah

Indonesia is ranked 56th out of 65 participating countries in the Program for International Student Assessment (PISA), based on 2015 data. According to the PISA results, the average science score of Indonesian students is 403, which is categorized as low. This is because students are still in the process of understanding and have not yet fully recognized where their mistakes lie. Students can locate their mistakes through self-diagnosis activities, which require their active role during the learning process. One approach that can increase the active role of students is STEM (Science, Technology, Engineering, Mathematics). However, research applying self-diagnosis activities within the STEM approach is still rare. Therefore, this research aims to determine the improvement in mastery of physics concepts and the self-diagnosis of students under a STEM learning approach applied to Pascal's law in grade XI of senior high school. This study uses a one-group pretest-posttest design with a sample of 30 eleventh-grade students from one school in Bandung. Based on the findings, there is an increase in mastery of concepts [<g> = 0.51] from pre-test to post-test. The self-diagnosis activities identified differences in scores [z = 1.75; p = 0.9599] between the researchers' assessment and the students' self-scoring. Deeper self-diagnosis triggers a series of implicit steps that encourage students to rearrange their cognition by correcting the mistakes they made when solving problems. Thus, learning activities using a STEM approach that involves self-diagnosis activities can improve students' mastery of concepts.
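The bracketed gain [<g> = 0.51] is conventionally Hake's normalized gain: the fraction of the possible pre-to-post improvement actually achieved. A minimal sketch, assuming class-mean scores on a 0–100 scale (the formula is standard; the numbers below are illustrative):

```python
def normalized_gain(pre_mean, post_mean, max_score=100.0):
    """Hake's normalized gain <g>: achieved improvement divided by
    the maximum possible improvement over the pre-test mean."""
    if pre_mean >= max_score:
        raise ValueError("pre-test mean is already at the ceiling")
    return (post_mean - pre_mean) / (max_score - pre_mean)

# A class moving from a mean of 40 to 70 captures half of its possible gain
g = normalized_gain(40.0, 70.0)  # -> 0.5
```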


2021 ◽  
Vol 33 (1) ◽  
pp. 139-167
Author(s):  
Andrés Strello ◽  
Rolf Strietholt ◽  
Isa Steinmann ◽  
Charlotte Siepmann

Abstract. Research to date on the effects of between-school tracking on inequalities in achievement and on performance has been inconclusive. A possible explanation is that different studies used different data, focused on different domains, and employed different measures of inequality. To address this issue, we used all accumulated data collected in the three largest international assessments—PISA (Programme for International Student Assessment), PIRLS (Progress in International Reading Literacy Study), and TIMSS (Trends in International Mathematics and Science Study)—in the past 20 years in 75 countries and regions. Following the seminal paper by Hanushek and Wößmann (2006), we combined data from a total of 21 cycles of primary and secondary school assessments to estimate difference-in-differences models for different outcome measures. We synthesized the effects using a meta-analytical approach and found strong evidence that tracking increased social achievement gaps, that it had smaller but still significant effects on dispersion inequalities, and that it had rather weak effects on educational inadequacies. In contrast, we did not find evidence that tracking increased performance levels. Beyond these substantive findings, our study illustrated that the effect estimates varied considerably across the datasets used, because the small number of countries available as units of analysis is a natural limitation. This finding casts doubt on the reproducibility of findings based on single international datasets and suggests that researchers should use different data sources to replicate analyses.
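The Hanushek and Wößmann (2006) design underlying these estimates is a two-period difference-in-differences: outcomes measured in primary school (before tracking begins) and in secondary school (after tracking) are compared between tracked and untracked systems, so that stable country differences cancel out. A minimal sketch of the two-by-two estimate (illustrative numbers, not the study's results):

```python
def diff_in_diff(tracked_pre, tracked_post, untracked_pre, untracked_post):
    """Two-by-two difference-in-differences: the pre-to-post change in
    tracked systems minus the pre-to-post change in untracked systems."""
    return (tracked_post - tracked_pre) - (untracked_post - untracked_pre)

# e.g., a social achievement gap growing from 30 to 42 points under
# tracking, but only from 30 to 35 without it
tracking_effect = diff_in_diff(30.0, 42.0, 30.0, 35.0)  # -> 7.0
```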


2018 ◽  
Vol 1 (3) ◽  
pp. 69-95 ◽  
Author(s):  
Xiangyi Liao ◽  
Xiaoting Huang

Purpose: In recent years, private tutoring has become increasingly prevalent in China; it is both a dominant way for students to learn after school and a major component of family educational expenditure. This paper aims to analyze the factors that affect Chinese students' participation in private tutoring and the effectiveness of private tutoring. Design/Approach/Methods: We use data from the Programme for International Student Assessment (PISA) 2015 for the Mainland China area and focus specifically on science-related private tutoring. A multilevel logistic model and a hierarchical linear model based on coarsened exact matching (CEM) are used to conduct the investigations. Findings: Empirical results show that individual-level factors, including students' interest in science and educational expectations, and school-level factors, such as school autonomy, science-related learning resources, and school size, have a significant influence on the likelihood of participation in private tutoring. Moreover, science-related private tutoring has not significantly improved students' overall scientific literacy scores. In addition, private tutoring has widened the performance gap among students from different socioeconomic backgrounds, with students from socioeconomically advantaged families experiencing more significant gains from tutoring. Originality/Value: These findings suggest that providing free high-quality tutoring to students from disadvantaged families might be an effective way of promoting educational equity.
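Coarsened exact matching, mentioned in the methods, works by binning covariates into coarse categories, exact-matching tutored and non-tutored students within the resulting strata, and discarding strata that lack one of the two groups. A minimal single-covariate sketch (real applications coarsen several covariates jointly, e.g., an SES index plus prior achievement; the cutpoints here are hypothetical):

```python
from collections import defaultdict

def cem_strata(units, cutpoints):
    """Coarsened exact matching: bin a covariate at the given cutpoints
    and keep only strata containing both treated and control units.
    `units` is a list of (treated_flag, covariate_value) pairs;
    returns {bin_index: {0: control_ids, 1: treated_ids}}."""
    def bin_of(x):
        return sum(x >= c for c in cutpoints)  # index of the coarsened bin

    strata = defaultdict(lambda: {0: [], 1: []})
    for i, (treated, x) in enumerate(units):
        strata[bin_of(x)][treated].append(i)
    # prune strata without common support
    return {b: g for b, g in strata.items() if g[0] and g[1]}

# Student 2 (treated, high covariate value) has no control match and is pruned
students = [(1, 0.2), (0, 0.3), (1, 0.9), (0, 0.1)]
matched = cem_strata(students, cutpoints=[0.5])  # keeps only bin 0
```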


2019 ◽  
Vol 44 (6) ◽  
pp. 752-781
Author(s):  
Michael O. Martin ◽  
Ina V.S. Mullis

International large-scale assessments of student achievement such as the International Association for the Evaluation of Educational Achievement's (IEA's) Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS), and the Organization for Economic Cooperation and Development's (OECD's) Program for International Student Assessment (PISA), which have come to prominence over the past 25 years, owe a great deal in methodological terms to pioneering work by the National Assessment of Educational Progress (NAEP). Using TIMSS as an example, this article describes how a number of core techniques, such as matrix sampling, student population sampling, item response theory scaling with population modeling, and resampling methods for variance estimation, have been adapted and implemented in an international context and are fundamental to the international assessment effort. In addition to the methodological contributions of NAEP, this article illustrates how large-scale international assessments go beyond measuring student achievement by representing important aspects of community, home, school, and classroom contexts in ways that can be used to address issues of importance to researchers and policymakers.
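One of the NAEP-derived techniques named here, resampling-based variance estimation, can be illustrated with a delete-one jackknife for the standard error of a mean. (TIMSS itself uses jackknife repeated replication over paired sampling zones; this is a simplified sketch of the underlying idea.)

```python
def jackknife_se(values):
    """Delete-one jackknife standard error of the sample mean."""
    n = len(values)
    full_mean = sum(values) / n
    # Replicate estimates: the mean with each observation left out once
    replicates = [(sum(values) - v) / (n - 1) for v in values]
    variance = (n - 1) / n * sum((r - full_mean) ** 2 for r in replicates)
    return variance ** 0.5

se = jackknife_se([1.0, 2.0, 3.0, 4.0, 5.0])  # matches the usual s/sqrt(n)
```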


2020 ◽  
pp. 249-263
Author(s):  
Luisa Araújo ◽  
Patrícia Costa ◽  
Nuno Crato

AbstractThis chapter provides a short description of what the Programme for International Student Assessment (PISA) measures and how it measures it. First, it details the concepts associated with the measurement of student performance and the concepts associated with capturing student and school characteristics and explains how they compare with some other International Large-Scale Assessments (ILSA). Second, it provides information on the assessment of reading, the main domain in PISA 2018. Third, it provides information on the technical aspects of the measurements in PISA. Lastly, it offers specific examples of PISA 2018 cognitive items, corresponding domains (mathematics, science, and reading), and related performance levels.


2021 ◽  
Author(s):  
Alexander Robitzsch ◽  
Oliver Lüdtke

International large-scale assessments (LSAs) such as the Programme for International Student Assessment (PISA) provide important information about the distribution of student proficiencies across a wide range of countries. The repeated assessments of these content domains offer policymakers important information for evaluating educational reforms and receive considerable attention from the media. Furthermore, the analytical strategies employed in LSAs often define methodological standards for applied researchers in the field. Hence, it is vital to critically reflect on the conceptual foundations of analytical choices in LSA studies. This article discusses methodological challenges in selecting and specifying the scaling model used to obtain proficiency estimates from individual student responses in LSA studies. We distinguish design-based inference from model-based inference. It is argued that for the official reporting of LSA results, design-based inference should be preferred because it allows for a clear definition of the target of inference (e.g., country mean achievement) and is less sensitive to specific modeling assumptions. More specifically, we discuss five analytical choices in the specification of the scaling model: (1) the specification of the functional form of item response functions, (2) the treatment of local dependencies and multidimensionality, (3) the consideration of test-taking behavior for estimating student ability, and the role of country differential item functioning (DIF) for (4) cross-country comparisons and (5) trend estimation. This article's primary goal is to stimulate discussion about recently implemented changes and suggested refinements of the scaling models in LSA studies.
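Analytical choice (1), the functional form of the item response function, can be made concrete with the two-parameter logistic (2PL) model, in which each item has a discrimination and a difficulty parameter; the Rasch model used in earlier PISA cycles is the special case with equal discriminations. A minimal sketch:

```python
import math

def irf_2pl(theta, a, b):
    """2PL item response function: probability that a student with
    ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is 0.5 regardless of discrimination
p = irf_2pl(0.0, a=1.2, b=0.0)  # -> 0.5
```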


Methodology ◽  
2021 ◽  
Vol 17 (1) ◽  
pp. 22-38
Author(s):  
Jason C. Immekus

Within large-scale international studies, the utility of survey scores for yielding meaningful comparative data hinges on the degree to which their item parameters demonstrate measurement invariance (MI) across compared groups (e.g., cultures). To date, methodological challenges have restricted the ability to test the measurement invariance of these instruments' item parameters in the presence of many groups (e.g., countries). This study compares multigroup confirmatory factor analysis (MGCFA) and the alignment method for investigating the MI of the schoolwork-related anxiety survey across gender groups within the 35 Organisation for Economic Co-operation and Development (OECD) countries (gender × country) of the Programme for International Student Assessment 2015 study. Subsequently, the predictive validity of MGCFA- and alignment-based factor scores for subsequent mathematics achievement is examined. Considerations related to invariance testing of noncognitive instruments with many groups are discussed.


2020 ◽  
Vol 31 (77) ◽  
pp. 393
Author(s):  
Andriele Ferreira Muri Leite ◽  
Alicia Maria Catalano de Bonamino

<p><strong>Age-grade distortion and scientific literacy in PISA</strong></p><p>The article analyzes the scientific preparation of Brazilian students participating in PISA (the Programme for International Student Assessment), taking into account age-grade distortion. An exploratory analysis of the results and a linear regression were carried out to investigate the effect of the grade-repetition variable on Brazilian students' performance in science. The study shows that: Brazilian students are at a disadvantage compared with students from Organisation for Economic Co-operation and Development (OECD) countries; the majority of Brazilian students are not able to perform the simplest tasks defined by PISA; the difference between over-age Brazilian students and OECD students reaches 150 points in some competencies; and only Brazilian students in the final grades of secondary education reach the levels expected by PISA.</p><p><strong>Keywords:</strong> PISA; Brazil; Scientific Literacy; Age-grade Distortion.</p>

