Best practices and performance of municipal elementary schools in Ceará: An analysis of efficiency in school management

2021 · Vol. 29 · p. 47
Author(s): Felipe Furlan Soriano, Alexandre Pereira Salgado Junior, Juliana Chiaretti Novi, Diogo Furlan Soriano, Perla Calil Pongeluppe Wadhy Rebehy

In Brazil, there is concern about the quality of education, especially in light of the results of large-scale evaluations at both the national level, through the Basic Education Development Index (Ideb), and the international level, through the Programme for International Student Assessment (PISA). As a contribution to this issue, this study aimed to identify best practices that can help improve the Ideb performance of Brazilian municipal elementary schools of low socioeconomic status (SES). The method was a mixed qualitative-quantitative design combining mathematical models, namely Data Envelopment Analysis (DEA), quintile analysis, and logistic regression, with case studies. As a result, 14 best practices were identified that may have contributed to the performance of the schools considered efficient. The study is expected to enrich research in the area and to inform decisions on financial investment, the allocation of public resources, and educational policies, supporting efficient school management aimed at improving the quality of education in Brazil.
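The abstract names Data Envelopment Analysis (DEA) among the methods used to flag efficient schools. As a hedged illustration only (the authors' actual model specification, inputs, and outputs are not given in the abstract), the following minimal Python sketch computes input-oriented CCR efficiency scores with `scipy`:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each decision-making unit (DMU).
    X: (m inputs, n DMUs); Y: (s outputs, n DMUs). Returns scores in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                       # minimise theta
        A = np.zeros((m + s, n + 1))
        b = np.zeros(m + s)
        A[:m, 0] = -X[:, o]              # X @ lam - theta * x_o <= 0
        A[:m, 1:] = X
        A[m:, 1:] = -Y                   # -Y @ lam <= -y_o  (i.e., Y @ lam >= y_o)
        b[m:] = -Y[:, o]
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)
```

Here a score of 1.0 marks a school on the efficiency frontier, while lower scores indicate inefficient units — the kind of distinction the study uses before comparing practices across schools.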

2018 · Vol. 21 · pp. 80-90
Author(s): Chandra Mani Paudel, Ram Chandra Panday

This paper presents the results of a systematic review of the literature on large-scale assessment findings in the South Asian context, with a particular focus on Nepal. The main objective of the LEAP programme is to improve the quality of learning in the Asia-Pacific region by building the capacity of Member States to collect, analyze, and use international and national assessment data to identify learning enablers. The review found that higher-order skills are overshadowed by rote learning. The assessments reviewed also employed Item Response Theory (IRT) to make results comparable and linked across levels. International assessments such as the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS) have collected vast amounts of data on schools, students, and households. Yet the use of education-related “big data” for evidence-based policy making remains limited, partly due to the insufficient institutional capacity of countries to analyze such data and link the results to policies.


Author(s): Syarifah Rita Zahara, Muliani Muliani, Wilda Rahmina, Siska Mauritha

In the education quality survey issued by PISA (Programme for International Student Assessment), Indonesia ranks 72nd out of 77 countries. Teacher Competency Test (UKG) results in Indonesia are also still low and far from the government's targets; Aceh province in particular ranks second lowest in the country. Given these problems, improving the quality of education requires first improving the quality of teachers, notably by strengthening their pedagogical competence. This study aims to describe the pedagogical knowledge of physics teachers at SMA Negeri Lhokseumawe in three aspects: lesson planning, learning methods, and learning evaluation. The research was conducted as a survey; the population comprised all physics teachers at the seven SMA Negeri schools in Lhokseumawe, 28 teachers in total. The sample consisted of three schools selected by simple random sampling: 8 physics teachers at SMA Negeri 2 Lhokseumawe, 3 at SMA Negeri 5 Lhokseumawe, and 2 at SMA Negeri 7 Lhokseumawe. The results showed that the teachers' knowledge of lesson planning was in the good category, while their knowledge of learning methods and of learning evaluation was fairly good. It can therefore be concluded that the physics teachers' pedagogical knowledge is, overall, in the fairly good category, meaning that most physics teachers already have reasonably good knowledge of the pedagogical aspects.


2021 · Vol. 13 (11) · p. 6437
Author(s): Łukasz Goczek, Ewa Witkowska, Bartosz Witkowski

In a seminal article, Hanushek and Woessmann explained economic growth as a function of the quality of education. While they did not find evidence of the importance of years of schooling, they argued for the relevance of cognitive skills and a basic literacy ratio for economic growth. However, this result was based on cross-country data limited to 23 observations. In this study, we extended and modified their approach, using the results of PISA (Programme for International Student Assessment) tests to explain GDP changes over the last 50 years. Using panel data, we considered the possible lag that characterizes this relationship, used statistical methods to address the risk of reverse causality (economic performance affecting the quality of education), and extended the model to include other potential growth factors. The results, which included several robustness checks, confirmed the relevance of earlier education quality as a significant growth factor. Our results suggest that educational skills matter for GDP growth, which can be read as confirming the importance of quality primary and secondary education for economic development. We showed that our results are robust to changes in the order of lags and confirmed the validity of the conclusion using specification-robust Bayesian model averaging.


Methodology · 2007 · Vol. 3 (4) · pp. 149-159
Author(s): Oliver Lüdtke, Alexander Robitzsch, Ulrich Trautwein, Frauke Kreuter, Jan Marten Ihme

Abstract. In large-scale educational assessments such as the Third International Mathematics and Science Study (TIMSS) or the Program for International Student Assessment (PISA), sizeable numbers of test administrators (TAs) are needed to conduct the assessment sessions in the participating schools. TA training sessions are run and administration manuals are compiled with the aim of ensuring standardized, comparable assessment situations in all student groups. To date, however, there has been no empirical investigation of the effectiveness of these standardizing efforts. In the present article, we probe for systematic TA effects on mathematics achievement and sample attrition in a student achievement study. Multilevel analyses for cross-classified data using Markov chain Monte Carlo (MCMC) procedures were performed to separate the variance attributable to differences between schools from the variance associated with TAs. After controlling for school effects, only a very small, nonsignificant proportion of the variance in mathematics scores and response behavior was attributable to the TAs (< 1%). We discuss practical implications of these findings for the deployment of TAs in educational assessments.
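The key quantity in the abstract's variance decomposition is the proportion of total variance attributable to each cross-classified level (school, TA, residual). A minimal Python sketch of that computation, using made-up component values rather than the study's estimates:

```python
def variance_shares(var_school, var_ta, var_residual):
    """Share of total variance attributable to each level in a
    cross-classified school x test-administrator (TA) model."""
    total = var_school + var_ta + var_residual
    return {
        "school": var_school / total,
        "ta": var_ta / total,
        "residual": var_residual / total,
    }
```

With hypothetical components such as `variance_shares(30.0, 0.5, 69.5)`, the TA share comes out below 0.01, the kind of "< 1%" result the study reports after controlling for school effects.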


2021 · Vol. 33 (1) · pp. 139-167
Author(s): Andrés Strello, Rolf Strietholt, Isa Steinmann, Charlotte Siepmann

Abstract. Research to date on the effects of between-school tracking on inequalities in achievement and on performance has been inconclusive. A possible explanation is that different studies used different data, focused on different domains, and employed different measures of inequality. To address this issue, we used all accumulated data collected over the past 20 years in 75 countries and regions by the three largest international assessments: PISA (Programme for International Student Assessment), PIRLS (Progress in International Reading Literacy Study), and TIMSS (Trends in International Mathematics and Science Study). Following the seminal paper by Hanushek and Wößmann (2006), we combined data from a total of 21 cycles of primary and secondary school assessments to estimate difference-in-differences models for different outcome measures. We synthesized the effects using a meta-analytical approach and found strong evidence that tracking increased social achievement gaps, that it had smaller but still significant effects on dispersion inequalities, and that it had rather weak effects on educational inadequacies. In contrast, we did not find evidence that tracking increased performance levels. Beyond these substantive findings, our study illustrated that the effect estimates varied considerably across the datasets used, because the small number of countries as units of analysis is a natural limitation. This finding casts doubt on the reproducibility of findings based on single international datasets and suggests that researchers should use different data sources to replicate analyses.
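The difference-in-differences logic named in the abstract compares how an outcome (e.g., a social achievement gap) changes between primary and secondary school in tracked versus untracked systems. A minimal sketch with hypothetical numbers, not values from the study:

```python
def diff_in_diff(tracked_primary, tracked_secondary,
                 untracked_primary, untracked_secondary):
    """Difference-in-differences estimate: how much more the outcome
    grows from primary to secondary school in tracked systems than
    in untracked ones. Positive values suggest tracking widens it."""
    return ((tracked_secondary - tracked_primary)
            - (untracked_secondary - untracked_primary))
```

For example, if a hypothetical achievement gap grows from 40 to 70 score points in tracked systems but only from 40 to 50 in untracked ones, the estimate is 20 points attributable to tracking (under the usual parallel-trends assumption).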


2018 · Vol. 49 (2) · pp. 199-222
Author(s): Namrita Bendapudi, Siran Zhan, Ying-yi Hong

The present study contributes to innovation research by distinguishing between national innovation in the knowledge and technology domain (knowledge and technology output) and innovation in the creative industries (creative output), and by examining how these two types of innovation benefit from high-quality basic education in different cultural contexts. We argue that because creative output requires symbolic knowledge (i.e., the negotiation of new meanings), it benefits from a national context that has not only high-quality basic education but also favorable cultural values (low self-protective values or high self-expansion values). By contrast, knowledge and technology output requires mainly analytic and synthetic knowledge and thus benefits from high-quality basic education regardless of cultural values. To test these ideas, we performed regression analyses using three archival datasets (the Programme for International Student Assessment [PISA], the Schwartz Value Survey, and the Global Innovation Index) covering 32 nations. The results generally supported our predictions: a high level of self-protective values dampens the positive relationship between quality of basic education and creative output, but not between quality of basic education and knowledge and technology output. Implications of these findings are discussed.


2019 · Vol. 44 (6) · pp. 752-781
Author(s): Michael O. Martin, Ina V.S. Mullis

International large-scale assessments of student achievement such as the International Association for the Evaluation of Educational Achievement's Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS), and the Organization for Economic Cooperation and Development's Program for International Student Assessment (PISA), which have come to prominence over the past 25 years, owe a great deal in methodological terms to pioneering work by the National Assessment of Educational Progress (NAEP). Using TIMSS as an example, this article describes how a number of core techniques, such as matrix sampling, student population sampling, item response theory scaling with population modeling, and resampling methods for variance estimation, have been adapted and implemented in an international context and are fundamental to the international assessment effort. In addition to the methodological contributions of NAEP, this article illustrates how large-scale international assessments go beyond measuring student achievement by representing important aspects of community, home, school, and classroom contexts in ways that can be used to address issues of importance to researchers and policymakers.


2020 · pp. 249-263
Author(s): Luisa Araújo, Patrícia Costa, Nuno Crato

Abstract. This chapter provides a short description of what the Programme for International Student Assessment (PISA) measures and how it measures it. First, it details the concepts associated with the measurement of student performance and the concepts associated with capturing student and school characteristics and explains how they compare with some other International Large-Scale Assessments (ILSA). Second, it provides information on the assessment of reading, the main domain in PISA 2018. Third, it provides information on the technical aspects of the measurements in PISA. Lastly, it offers specific examples of PISA 2018 cognitive items, corresponding domains (mathematics, science, and reading), and related performance levels.


2021
Author(s): Alexander Robitzsch, Oliver Lüdtke

International large-scale assessments (LSAs) such as the Programme for International Student Assessment (PISA) provide important information about the distribution of student proficiencies across a wide range of countries. The repeated assessments of these content domains offer policymakers important information for evaluating educational reforms and receive considerable attention from the media. Furthermore, the analytical strategies employed in LSAs often define methodological standards for applied researchers in the field. Hence, it is vital to critically reflect on the conceptual foundations of analytical choices in LSA studies. This article discusses methodological challenges in selecting and specifying the scaling model used to obtain proficiency estimates from the individual student responses in LSA studies. We distinguish design-based inference from model-based inference. It is argued that for the official reporting of LSA results, design-based inference should be preferred because it allows for a clear definition of the target of inference (e.g., country mean achievement) and is less sensitive to specific modeling assumptions. More specifically, we discuss five analytical choices in the specification of the scaling model: (1) the specification of the functional form of item response functions, (2) the treatment of local dependencies and multidimensionality, (3) the consideration of test-taking behavior in estimating student ability, and the role of country-level differential item functioning (DIF) for (4) cross-country comparisons and (5) trend estimation. This article's primary goal is to stimulate discussion about recently implemented changes and suggested refinements of the scaling models in LSA studies.
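The first analytical choice above concerns the functional form of the item response function. As a standard textbook example (not necessarily the form adopted in any particular LSA), the two-parameter logistic (2PL) model in Python:

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function: the probability
    of a correct response given ability theta, item discrimination a,
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At `theta == b` the probability is exactly 0.5, and larger `a` makes the curve steeper around the difficulty point; fixing `a` to a common constant across items yields the Rasch (1PL) special case historically used in PISA reporting.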


Methodology · 2021 · Vol. 17 (1) · pp. 22-38
Author(s): Jason C. Immekus

Within large-scale international studies, the utility of survey scores for meaningful comparative analysis hinges on the degree to which their item parameters demonstrate measurement invariance (MI) across compared groups (e.g., cultures). To date, methodological challenges have restricted the ability to test the measurement invariance of these instruments' item parameters in the presence of many groups (e.g., countries). This study compares multigroup confirmatory factor analysis (MGCFA) and the alignment method for investigating the MI of the schoolwork-related anxiety survey across gender groups within the 35 Organisation for Economic Co-operation and Development (OECD) countries (gender × country) in the Programme for International Student Assessment 2015 study. Subsequently, the predictive validity of MGCFA- and alignment-based factor scores for subsequent mathematics achievement is examined. Considerations related to invariance testing of noncognitive instruments with many groups are discussed.

