The Big Fish Down Under: Examining Moderators of the ‘Big-Fish-Little-Pond’ Effect for Australia's High Achievers

2011 ◽  
Vol 55 (2) ◽  
pp. 93-114 ◽  
Author(s):  
Marjorie Seaton ◽  
Herbert W. Marsh ◽  
Alexander Seeshing Yeung ◽  
Rhonda Craven

Big-fish-little-pond effect (BFLPE) research has demonstrated that academic self-concept is negatively affected by attending high-ability schools. This article examines data from large, representative samples of 15-year-olds from each Australian state, based on the three Program for International Student Assessment (PISA) databases that focus on different subject domains: reading (2000), mathematics (2003) and science (2006). The overarching research question is whether the size or direction of the BFLPE is moderated by any of the 67 moderators considered (for example, ability, study methods, motivational and social constructs, and Australian states). The data showed consistent support for the BFLPE across all Australian states for all three databases. None of the constructs examined moderated the BFLPE, and this finding was consistent across states. In conclusion, the BFLPE is remarkably robust in Australia, and the study findings generalised well across Australian states and across all moderators investigated.
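A minimal sketch of how a BFLPE moderation test of this kind is commonly specified (not the authors' code; the file and column names self_concept, achievement, moderator and school_id are hypothetical placeholders): a two-level model predicting academic self-concept from individual achievement, school-average achievement (the BFLPE term), a candidate moderator, and their interaction.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the PISA data; column names are placeholders.
df = pd.read_csv("pisa_australia.csv")

# Standardize the student-level predictors so coefficients are comparable.
for col in ["achievement", "moderator"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# School-average achievement is the frame-of-reference ("pond") variable.
df["school_ach"] = df.groupby("school_id")["achievement"].transform("mean")

# Two-level model: students nested in schools; the school_ach:moderator term
# tests whether the candidate construct moderates the BFLPE.
model = smf.mixedlm(
    "self_concept ~ achievement + school_ach + moderator + school_ach:moderator",
    data=df,
    groups=df["school_id"],
)
result = model.fit()
print(result.summary())

# A negative school_ach coefficient reflects the BFLPE; a nonsignificant
# school_ach:moderator coefficient is consistent with the paper's finding
# that none of the 67 constructs moderated the effect.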

2011 ◽  
Vol 10 (4) ◽  
pp. 611-622 ◽  
Author(s):  
Radhika Gorur

In this article, the author tells the story of her search for appropriate tools to conceptualise policy work. She had set out to explore the relationship between the Programme for International Student Assessment (PISA) of the Organisation for Economic Co-operation and Development (OECD) and Australia's education policy, but early interview data forced her to reconsider her research question. The plethora of available models of policy did not satisfactorily accommodate her growing understanding of the messiness and complexity of policy work. On the basis of interviews with 18 policy actors, including former OECD officials, PISA analysts and bureaucrats, as well as documentary analysis of government reports and ministerial media releases, she suggests that the concept of ‘assemblage’ provides the tools to better understand the messy processes of policy work. The relationship between PISA and national policy is of interest to many scholars in Europe, making this study widely relevant. An article that argues for the unsettling of tidy accounts of knowledge making in policy can hardly afford to obscure the untidiness of its own assemblage. Accordingly, this article is somewhat unconventional in its presentation, and attempts to take the reader into the messiness of the research world as well as the policy world. Implicit in this presentation is the suggestion that both policy work and research work are ongoing attempts to find order and coherence through the cobbling together of a variety of resources.


2020 ◽  
Vol 20 (1) ◽  
pp. 59-78
Author(s):  
Mohammed A. A. Abulela ◽  
Michael Harwell

Data analysis is a significant methodological component when conducting quantitative education studies. Guidelines for conducting data analyses in quantitative education studies are common but often underemphasize four important methodological components affecting the validity of inferences: the quality of constructed measures, proper handling of missing data, the proper level of measurement of the dependent variable, and model checking. This paper highlights these components for novice researchers to help ensure that statistical inferences are valid. We used empirical examples involving contingency tables, group comparisons, regression analysis, and multilevel modeling to illustrate these components using Program for International Student Assessment (PISA) data. For every example, we stated a research question and provided evidence related to the quality of constructed measures, since measures with weak reliability and validity evidence can bias estimates and distort inferences. Adequate strategies for handling missing data were also illustrated. The level of measurement of the dependent variable was assessed and the proper statistical technique was applied accordingly. Model residuals were checked for normality and homogeneity of variance. Recommendations for obtaining stronger inferences and reporting related evidence were also provided. This work provides an important methodological resource for novice researchers conducting data analyses by promoting improved practice and stronger inferences.
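A minimal sketch of two of the components the paper emphasizes, assuming a hypothetical PISA extract (pisa.csv) with placeholder columns science, escs and gender: quantifying missingness before deciding how to handle it, and checking model residuals for normality and homogeneity of variance.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("pisa.csv")  # hypothetical PISA extract
cols = ["science", "escs", "gender"]

# Proper handling of missing data starts with knowing how much is missing and where.
print(df[cols].isna().mean())

# Fit a simple regression on the complete cases and check the residuals.
dfc = df.dropna(subset=cols)
fit = smf.ols("science ~ escs + C(gender)", data=dfc).fit()
resid = fit.resid

# Normality of residuals (Shapiro-Wilk on a subsample; Q-Q plots are the
# usual complement for large samples).
print(stats.shapiro(resid.sample(min(len(resid), 5000), random_state=1)))

# Homogeneity of residual variance across groups (Levene's test).
groups = [resid[dfc["gender"] == g] for g in dfc["gender"].unique()]
print(stats.levene(*groups))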


2010 ◽  
Vol 106 (1) ◽  
pp. 49-53 ◽  
Author(s):  
Robert M. Capraro ◽  
Mary Margaret Capraro ◽  
Z. Ebrar Yetkiner ◽  
Serkan Özel ◽  
Hae Gyu Kim ◽  
...  

This study extends the scope of international comparisons examining students' conceptions of the equal sign. Specifically, Korean (n = 193) and Turkish (n = 334) Grade 6 students were examined to assess whether their conceptions and responses were similar to prior findings published for Chinese and U.S. students and to hypothesize relationships about problem types and conceptual understanding of the equal sign. About 59.6% of the Korean participants correctly answered all items, providing conceptually accurate solutions, as compared to 28.4% of the Turkish sample. Comparison with previous studies in China and the USA indicated that the Chinese sample outperformed those from other nations, followed by Korea, Turkey, and the USA. In large-scale international studies such as the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA), students from China and Korea have been among the high achievers.


2019 ◽  
Vol 24 (3) ◽  
pp. 231-242 ◽  
Author(s):  
Herbert W. Marsh ◽  
Philip D. Parker ◽  
Reinhard Pekrun

Abstract. We simultaneously resolve three paradoxes in academic self-concept research with a single unifying meta-theoretical model based on frame-of-reference effects across 68 countries, 18,292 schools, and 485,490 15-year-old students. Paradoxically, but consistent with predictions, effects on math self-concepts were negative for:
• being from countries where country-average achievement was high, explaining the paradoxical cross-cultural self-concept effect;
• attending schools where school-average achievement was high, demonstrating big-fish-little-pond effects (BFLPE) that generalized over the 68 countries, Organisation for Economic Co-operation and Development (OECD)/non-OECD countries, high/low-achieving schools, and high/low-achieving students;
• year in school relative to age, unifying different research literatures on the associated negative effects of starting school at a younger age and of acceleration/grade skipping, and the positive effects of starting school at an older age ("academic redshirting") and, paradoxically, even of repeating a grade.
Contextual effects matter, resulting in significant and meaningful effects on self-beliefs, not only at the student (year-in-school) and local school level (BFLPE), but remarkably even at the macro-contextual country level. Finally, we juxtapose cross-cultural generalizability based on the Programme for International Student Assessment (PISA) data used here with generalizability based on meta-analyses, arguing that although the two approaches are similar in many ways, the generalizability shown here is stronger in terms of support for the universality of the frame-of-reference effects.
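As a point of reference (illustrative notation, not taken from the article), the school-level frame-of-reference effect described here is typically written as a contextual-effects model in which the self-concept of student i in school j depends positively on own achievement and negatively on school-average achievement:

\[
\mathrm{ASC}_{ij} = \beta_0 + \beta_1\,\mathrm{Ach}_{ij} + \beta_2\,\overline{\mathrm{Ach}}_{\cdot j} + u_j + e_{ij},
\qquad \beta_1 > 0,\quad \beta_2 < 0 ,
\]

with the BFLPE identified by the negative contextual coefficient $\beta_2$; the cross-cultural paradox is the analogous negative effect of country-average achievement one level further up.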


Methodology ◽  
2007 ◽  
Vol 3 (4) ◽  
pp. 149-159 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch ◽  
Ulrich Trautwein ◽  
Frauke Kreuter ◽  
Jan Marten Ihme

Abstract. In large-scale educational assessments such as the Third International Mathematics and Science Study (TIMSS) or the Program for International Student Assessment (PISA), sizeable numbers of test administrators (TAs) are needed to conduct the assessment sessions in the participating schools. TA training sessions are run and administration manuals are compiled with the aim of ensuring standardized, comparable assessment situations in all student groups. To date, however, there has been no empirical investigation of the effectiveness of these standardizing efforts. In the present article, we probe for systematic TA effects on mathematics achievement and sample attrition in a student achievement study. Multilevel analyses for cross-classified data using Markov Chain Monte Carlo (MCMC) procedures were performed to separate the variance that can be attributed to differences between schools from the variance associated with TAs. After controlling for school effects, only a very small, nonsignificant proportion of the variance in mathematics scores and response behavior was attributable to the TAs (< 1%). We discuss practical implications of these findings for the deployment of TAs in educational assessments.
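In a generic cross-classified formulation (illustrative notation, not taken from the article), the quantity reported in the abstract is the share of outcome variance attributable to the TA classification:

\[
y_{i(j,k)} = \beta_0 + u_j + v_k + e_{i(j,k)}, \qquad
\rho_{\mathrm{TA}} = \frac{\sigma_v^2}{\sigma_u^2 + \sigma_v^2 + \sigma_e^2} < .01 ,
\]

where $y_{i(j,k)}$ is the mathematics score of student i cross-classified in school j and TA k, $u_j$ and $v_k$ are school and TA random effects, and the reported result is that $\rho_{\mathrm{TA}}$ remains below 1% once school effects are controlled.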


Author(s):  
Erika Anne Leicht

Despite their stated intention of providing equal educational opportunity for all, many democratic countries separate their students into different classes or even different schools based on their demonstrated academic ability and likely future career. This practice is often referred to as “tracking” or “ability grouping.” This study aims to determine whether different types of educational tracking have different effects on students’ academic achievement. Specifically, this study investigates whether disparities in educational achievement between students of highly educated versus minimally educated parents are greater in countries that practice more explicit and complete forms of tracking. It also explores tracking’s effects on average achievement and overall achievement variance. Analysis of data from the 2009 Programme for International Student Assessment (PISA) indicates that tracking generally does increase score disparities between children from different educational backgrounds. Tracking is also associated with higher overall variance of scores. At the same time, tracking may have a slight positive effect on average achievement. However, results are not consistent across all countries, and patterns are different in different subject areas and for different types of tracking. The results of this study neither condemn nor extol tracking. Rather, they indicate that tracking plays a relatively minor role in determining the quality and equity of an education system.
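A minimal sketch of the kind of comparison described, assuming a hypothetical PISA 2009 extract (pisa2009.csv) with placeholder columns country, read_score, parent_ed_high and tracking_index: compute the within-country score gap between students of highly and minimally educated parents, then relate that gap to how explicitly the country tracks its students.

import pandas as pd

df = pd.read_csv("pisa2009.csv")  # hypothetical extract; columns are placeholders

# Mean reading score per country for each parental-education group,
# then the within-country gap (high- minus low-educated-parent mean).
means = (
    df.groupby(["country", "parent_ed_high"])["read_score"]
      .mean()
      .unstack("parent_ed_high")
)
means["ses_gap"] = means[True] - means[False]

# One tracking-intensity value per country (assumed constant within country).
tracking = df.groupby("country")["tracking_index"].first()

# A positive correlation would indicate that more explicit tracking goes with
# larger achievement disparities by parental education.
print(means["ses_gap"].corr(tracking))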


2020 ◽  
Vol 12 (2) ◽  
pp. 104
Author(s):  
Siti Hannah Padliyyah

Indonesia ranked 56th out of 65 participating countries in the Program for International Student Assessment (PISA), based on 2015 data. According to the PISA results, the average science score of Indonesian students is 403, which is categorized as low. This is because students are still in the process of understanding the material and have not yet fully recognized where their mistakes lie. Students can diagnose the location of their mistakes through self-diagnosis activities, which require them to take an active role during the learning process. One approach that can increase students' active role is STEM (Science, Technology, Engineering, Mathematics). However, studies that apply self-diagnosis activities within a STEM approach are still rare. Therefore, this research aims to examine the improvement in mastery of physics concepts and the self-diagnosis of students under a STEM learning approach to Pascal's law in grade XI of senior high school. The study used a one-group pretest-posttest design with a sample of 30 eleventh-grade students from one school in Bandung. Based on the findings, there was an increase in mastery of concepts [<g> = 0.51] from pre-test to post-test. The self-diagnosis activities revealed a difference in scores [z = 1.75; p = 0.9599] between the researchers' assessment and the students' self-scoring. Deeper self-diagnosis triggers a series of implicit steps that encourage students to reorganize their cognition by correcting the mistakes they made when solving problems. Thus, learning activities that use a STEM approach involving self-diagnosis can improve students' mastery of concepts.
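For readers unfamiliar with the gain index, the <g> reported here appears to be the standard normalized gain (assuming Hake's definition), computed from class-average pre- and post-test percentages as

\[
\langle g \rangle = \frac{\%\langle \mathrm{post} \rangle - \%\langle \mathrm{pre} \rangle}{100 - \%\langle \mathrm{pre} \rangle},
\]

so the reported $\langle g \rangle = 0.51$ falls in the medium-gain range ($0.3 \le \langle g \rangle < 0.7$) on that scale.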

