Modelling the Influence of Individual and Spatial Factors Underlying Variations in the Levels of Secondary School Examination Results

1997 ◽  
Vol 29 (4) ◽  
pp. 641-658 ◽  
Author(s):  
M Coombes ◽  
S Raybould

There are several strands of research which have recently converged in the analysis of exam results. A long-standing line of enquiry has been to assess the impact of deprivation on educational performance. At the school level, recent work has concentrated on trying to assess the ‘added value’ of the education provided by a school, taking into account the abilities of the children on entry. The need to assess influences operating at different levels (notably, the individual and the school) has made the analysis of exam results a core area in which multilevel models (MLMs) are being developed. A key question which remains is whether conditions in the neighbourhood in which the child lives exert a separate influence over and above the individual's characteristics and those of the school. This paper examines the value of different data sources for explanatory MLMs of 16-year-olds' exam results in Newcastle upon Tyne. Several different types of data source are assessed in order to profile individuals, schools, and neighbourhoods. The process of linking the data sets is described, highlighting some problems inherent in the innovation here of drawing upon administrative data from several separate information systems. As a result of these and other limitations, the MLMs which are then developed are essentially exploratory. The aim is to indicate which of the data sources provide the variables with the most predictive power in these analyses. The results are interesting and intuitively reasonable, and enable judgments to be made as to, for example, which data sources provide the more indicative measures of the effects of deprivation (which the MLMs show to be operating independently at two different levels, with schools and wards cross-classified at the higher level).
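To illustrate the kind of cross-classified multilevel model described here, the sketch below fits pupil-level exam scores with random intercepts for both schools and wards using statsmodels; the file name and column names (score, deprivation, school, ward) are hypothetical.

```python
# Sketch of a cross-classified multilevel model of exam scores, with pupils
# belonging to both a school and a residential ward. statsmodels fits crossed
# random effects as variance components within one artificial top-level group.
# Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pupils.csv")  # columns: score, deprivation, school, ward

model = sm.MixedLM.from_formula(
    "score ~ deprivation",          # individual-level fixed effect
    groups=np.ones(len(df)),        # single group: school and ward effects crossed
    vc_formula={                    # random intercepts for schools and for wards
        "school": "0 + C(school)",
        "ward": "0 + C(ward)",
    },
    data=df,
)
result = model.fit()
print(result.summary())             # variance components show each level's share
```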

2019 ◽  
Vol 47 (2) ◽  
pp. 173-189
Author(s):  
Anil Duman

Purpose – The recent increase in economic inequality in many countries has heightened debates about policy preferences on income distribution. Attitudes toward inequality vary greatly across countries, and numerous explanations have been offered to clarify the factors leading to support for redistribution. The purpose of this paper is to examine the link between subjective social class and redistributive demands by jointly considering individual and national factors. The author argues that subjective measures of social position can be highly explanatory for preferences about redistribution policies.

Design/methodology/approach – The author uses data from 48 countries gathered by the World Values Survey and empirically tests the impact of self-positioning into classes with a multilevel ordered logit model. Several model specifications and estimation strategies are employed to obtain consistent estimates and to check the robustness of the results.

Findings – The findings show that, in addition to objective factors, subjective class status is highly explanatory for redistributive preferences across countries. The author also shows that there is an interaction between self-ranking of social status and national context. Estimations from the multilevel models verify that subjective social class has greater explanatory power in more equal societies, in contrast to previous studies that establish a positive link between inequality and redistribution.

Originality/value – The paper contributes to the literature by introducing subjective social class as a determinant. Self-ranked positions can be very revealing about policy preferences, given the information these categorizations encompass about individuals' perceptions of their own and others' place in society.
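A full multilevel ordered logit of the kind used in the paper is usually fit in specialised software; as a rough, single-level approximation only, statsmodels' OrderedModel can relate an ordered redistribution preference to subjective class, with country dummies standing in for the national level. The column names and file are assumptions.

```python
# Hypothetical single-level approximation of a multilevel ordered logit:
# an ordered redistribution preference (e.g. a 1-10 scale) regressed on
# subjective social class, with country dummies standing in for the national
# level (a true multilevel fit would use random country intercepts instead).
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("wvs.csv")  # columns: redist_pref, subj_class, income, country

exog = pd.get_dummies(df[["subj_class", "income", "country"]],
                      columns=["country"], drop_first=True, dtype=float)
model = OrderedModel(df["redist_pref"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```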


2016 ◽  
Vol 32 (3) ◽  
pp. 363-394 ◽  
Author(s):  
Claire Robertson-Kraft ◽  
Rosaline S. Zhang

A growing body of research examines the impact of recent teacher evaluation systems; however, we have limited knowledge of how these systems influence teacher retention. This study uses a mixed-methods design to examine teacher retention patterns during the pilot year of an evaluation system in an urban school district in Texas. We used difference-in-differences analysis to examine the impact of the new system on school-level teacher turnover and administered a teacher survey (N = 1,301) to investigate individual- and school-level factors influencing retention. This quantitative analysis was supplemented with interview data from two case study schools. Results suggest that, overall, the new evaluation system did not have a significant effect on teacher retention, but there was significant variation at the individual and school levels. This study has important implications for policymakers developing new evaluation systems and researchers interested in evaluating their impact on retention.
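The difference-in-differences setup described here reduces to a two-way interaction in a regression; a minimal sketch follows, with a hypothetical data layout (turnover, treated, post, school_id) that is not the authors' actual data.

```python
# Minimal difference-in-differences sketch: school-level turnover regressed
# on a treatment indicator (piloting the evaluation system), a post-period
# indicator, and their interaction, whose coefficient is the DiD estimate.
# Column names are hypothetical; errors are clustered by school.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("turnover.csv")  # columns: turnover, treated, post, school_id

did = smf.ols("turnover ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(did.params["treated:post"])  # estimated effect of the new system
```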


2018 ◽  
Vol 36 (4) ◽  
pp. 1
Author(s):  
Thaís Machado Scherrer ◽  
George Sand França ◽  
Raimundo Silva ◽  
Daniel Brito de Freitas ◽  
Carlos da Silva Vilar

ABSTRACT. Following our own previous work, we reanalyze the nonextensive behavior of the circum-Pacific subduction zones, evaluating the impact of using different magnitude types on the results. We used the same data source and time interval as our previous work: the NEIC catalog for the years 2001 to 2010. Even considering different data sets, the correlation between q and subduction zone asperity is perceptible, but the values found for the nonextensive parameter in the considered data sets present considerable variation. The data set with surface magnitude exhibits the best fits.

Keywords: Nonextensivity, Seismicity, Solid Earth, Earthquake.
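As a hedged illustration of how a nonextensive parameter q is estimated from a magnitude catalogue, the sketch below fits a Silva et al. (2006)-style cumulative magnitude distribution, a form commonly used in nonextensive seismicity studies; the functional form, starting values, and file name are assumptions, not necessarily the authors' exact formulation.

```python
# Sketch: fitting the nonextensive parameter q to an empirical cumulative
# magnitude distribution with scipy. The model form follows the Silva et al.
# (2006) style often used in this literature; it is an assumption here, not
# necessarily the authors' exact formulation.
import numpy as np
from scipy.optimize import curve_fit

def log_cum_fraction(m, q, a):
    """log10 of N(>m)/N under the assumed nonextensive magnitude model."""
    inner = 1.0 - ((1.0 - q) / (2.0 - q)) * (10.0 ** (2.0 * m)) / a ** (2.0 / 3.0)
    inner = np.clip(inner, 1e-12, None)          # guard against invalid log10
    return ((2.0 - q) / (1.0 - q)) * np.log10(inner)

mags = np.loadtxt("magnitudes.txt")              # hypothetical catalogue file
m_grid = np.sort(mags)
frac = 1.0 - np.arange(len(m_grid)) / len(m_grid)  # empirical N(>m)/N

popt, _ = curve_fit(log_cum_fraction, m_grid, np.log10(frac),
                    p0=(1.6, 1e10), maxfev=20000)
print("q =", popt[0])
```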


2019 ◽  
Vol 37 (2) ◽  
pp. 254-279 ◽  
Author(s):  
Ikkyu Choi ◽  
Spiros Papageorgiou

Stakeholders of language tests are often interested in subscores. However, reporting a subscore is not always justified; a subscore should provide reliable and distinct information to be worth reporting. When a subscore is used for decisions across multiple levels (e.g., individual test takers and schools), its reliability and distinctiveness need to be justified at every relevant level. In this study, we examined whether reporting seven Reading and Listening subscores of the TOEFL Primary® test, a standardized English proficiency test for young learners of English as a foreign language, could be justified at the individual and school levels. We analyzed data collected in pilot administrations, in which 4776 students from 51 schools participated. We employed the classical test theory (CTT)-based approaches of Haberman (2008) and Haberman, Sinharay, and Puhan (2009) for the individual- and school-level investigations, respectively. We also supplemented the CTT-based approaches with a factor analytic approach for the individual-level analysis and a multilevel modeling approach for the school-level analysis. The results differed across the two levels: we found little support for reporting the subscores at the individual level, but strong evidence supporting the added value of the school-level subscores when the sample size for each school exceeded 50.
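Haberman's CTT-based criterion compares how well the observed subscore versus the total score predicts the true subscore. The sketch below encodes one common textbook statement of that comparison; the formulas are a simplification under stated assumptions, not a reproduction of the authors' analysis, and the numbers are invented.

```python
# Simplified sketch of Haberman's (2008) added-value check for a subscore
# under classical test theory. In one common formulation, the observed
# subscore predicts the true subscore with PRMSE equal to the subscore
# reliability, while the total score predicts it with PRMSE equal to the
# squared observed subscore-total correlation divided by that reliability.
# A textbook simplification, not the authors' exact analysis.

def subscore_has_added_value(rel_sub: float, corr_sub_total: float) -> bool:
    prmse_subscore = rel_sub                      # predict true subscore from subscore
    prmse_total = corr_sub_total ** 2 / rel_sub   # predict it from the total score
    return prmse_subscore > prmse_total

# Hypothetical numbers: a reliable, distinct subscore...
print(subscore_has_added_value(rel_sub=0.85, corr_sub_total=0.75))  # True
# ...versus one that is nearly redundant with the total score.
print(subscore_has_added_value(rel_sub=0.70, corr_sub_total=0.72))  # False
```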


Author(s):  
Stephen S. Leff ◽  
Tracy Evian Waasdorp ◽  
Krista R. Mehari

This chapter reviews school-based programming for its impact on relational aggression, relational victimization, and/or relational bullying: specifically, 14 programs with publications between 2010 and 2016, reviewed across key areas including (1) mode of operation; (2) targeted population and age range; (3) implementation factors; (4) primary strategies employed; (5) materials available to conduct the program; and (6) impact on relevant target outcomes. The review of these programs highlighted factors important for future research on relational aggression and bullying prevention programming: employing strong designs using random assignment, taking into account the complexity of relational aggression at the individual, classroom, and school levels whenever possible, and examining the impact of programming on the different forms of aggression separately. Generalizability and implementation integrity also need to be considered when designing and implementing programming. The field of relational aggression and bullying prevention programming has grown substantially over the past decade, but much remains to be done.


2020 ◽  
Author(s):  
Maximilian Graf ◽  
Christian Chwala ◽  
Julius Polz ◽  
Harald Kunstmann

In recent years, so-called opportunistic sensors for measuring rainfall have attracted notice for their broad availability and the low cost they present to the scientific community. These sensors are existing devices or infrastructure which were not built to measure rainfall but can nevertheless deliver rainfall information. One example of such an opportunistic measurement system is the Commercial Microwave Link (CML), part of the backbone of modern mobile communication. CMLs can deliver path-averaged rainfall information through the relation between rainfall and attenuation along their paths. Before such an opportunistic data source can be used, either as an individual or a merged data product, its performance must be evaluated against other rainfall products.

We discuss the selection of performance metrics, spatial and temporal aggregation, and rainfall thresholds for the comparison between a Germany-wide CML network and a gauge-adjusted radar product provided by the German Weather Service. The CML data set consists of nearly 4000 CMLs with one-minute readings, from which we present a year of data.

First, we show the influence of temporal aggregation on comparability: at higher resolution, the impact of small temporal deviations increases. Second, CMLs represent path-averaged rainfall information, while the radar product is gridded; we discuss whether the comparison should be performed at the point, line, or grid scale, a choice that depends on the desired future applications, which should already be considered when selecting evaluation tools. Third, excluding rain rates below a certain threshold, or calculating performance metrics for certain intervals, gives a more detailed insight into the behavior of both rainfall data sets.
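A minimal sketch of this kind of comparison, assuming the CML rainfall has already been converted to a rain-rate series and matched with the radar values along each link path; the file layout and the 0.1 mm/h wet/dry threshold are hypothetical choices, not the authors' settings.

```python
# Sketch: comparing path-averaged CML rainfall with a radar product at two
# temporal aggregations and with a wet/dry threshold. The input layout (one
# already-matched series per link) is hypothetical.
import pandas as pd

df = pd.read_csv("cml_vs_radar.csv", parse_dates=["time"], index_col="time")
# columns: cml_rain, radar_rain  (mm/h, matched along the link path)

for window in ["1h", "24h"]:                     # coarser aggregation smooths
    agg = df.resample(window).mean().dropna()    # out small timing offsets
    wet = agg[agg["radar_rain"] >= 0.1]          # threshold: drop dry periods
    r = wet["cml_rain"].corr(wet["radar_rain"])  # Pearson correlation
    bias = (wet["cml_rain"] - wet["radar_rain"]).mean()
    print(f"{window}: r={r:.2f}, bias={bias:.2f} mm/h")
```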


2021 ◽  
Vol 10 (8) ◽  
pp. 528
Author(s):  
Raphael Witt ◽  
Lukas Loos ◽  
Alexander Zipf

OpenStreetMap (OSM) is a global mapping project which generates free geographical information through a community of volunteers. OSM is used in a variety of applications and for research purposes. It is also possible to import external data sets into OpenStreetMap. Opinions about these data imports diverge among researchers and contributors, and the subject is constantly discussed. The question of whether importing data, especially in large quantities, adds value to OSM or compromises the progress of the project needs to be investigated more deeply. For this study, OSM’s historical data were used to compute metrics about the development of contributors and OSM data during large data imports in the Netherlands and India. Additionally, one time period per study area during which there was no large data import was investigated for comparison. To assess the impacts of large data imports in OSM, the metrics were analysed using two techniques: cross-correlation and changepoint detection. It was found that contributor activity increased during large data imports. Additionally, contributors who were already active before a large import were more likely to contribute to OSM after the import than contributors who made their first contributions during it. The results show the difficulty of interpreting a heterogeneous data source such as OSM, and the complexity of the project. Limitations and challenges encountered are explained, and future directions for continuing this line of research are given.
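To illustrate the two techniques named (cross-correlation and changepoint detection) on a contributor-activity series, here is a hypothetical sketch; the `ruptures` changepoint package, the input files, and the penalty value are assumptions, not necessarily what the authors used.

```python
# Sketch: the two analysis techniques named in the abstract, applied to a
# hypothetical monthly series of active OSM contributors. numpy handles the
# cross-correlation; the third-party `ruptures` package (an assumption)
# handles changepoint detection.
import numpy as np
import ruptures as rpt

contributors = np.loadtxt("monthly_active_contributors.txt")  # hypothetical
edits = np.loadtxt("monthly_edits.txt")                       # hypothetical

# Cross-correlation of the two standardised series at varying lags.
a = (contributors - contributors.mean()) / contributors.std()
b = (edits - edits.mean()) / edits.std()
xcorr = np.correlate(a, b, mode="full") / len(a)
best_lag = np.argmax(xcorr) - (len(a) - 1)
print("lag of maximum cross-correlation:", best_lag)

# Changepoint detection: months where contributor activity shifts level,
# e.g. around a large data import.
algo = rpt.Pelt(model="rbf").fit(contributors)
print("changepoints at indices:", algo.predict(pen=10))
```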


2006 ◽  
Vol 11 (6) ◽  
pp. 729-746 ◽  
Author(s):  
MARCIA A. ROSADO ◽  
MARIA A. CUNHA-E-SÁ ◽  
MARIA M. DUCLA-SOARES ◽  
LUIS C. NUNES

This paper estimates willingness to pay (WTP) for drinking water quality in Brazil by combining averting behavior data with contingent valuation data. Using bivariate probit models, alternative structures allowing for heteroscedasticity between and within data sources are incorporated, taking advantage of the different information content that characterizes each data source. We examine two covariates not previously considered in the literature when combining stated and revealed preference data to explain the variance in the models: income and the bid in the contingent valuation questionnaire. Tests for parameter equality across data sets are performed. The results suggest that the specification of heteroscedasticity has a significant impact on WTP estimates and is crucial for legitimating the combination of data sets from different origins. The significant differences found in WTP between the two sources are discussed.
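The paper's bivariate probit with heteroscedasticity is beyond a short sketch, but the core contingent-valuation step, recovering mean WTP from a probit of the yes/no answer on the bid, can be illustrated as follows; the survey file and column names are hypothetical.

```python
# Sketch of the basic contingent-valuation step: a probit of the yes/no
# response on the bid amount (plus income), from which mean WTP follows as
# -(intercept + covariate effects at their means) / bid coefficient.
# The paper's actual model is a bivariate probit with heteroscedasticity
# across data sources; this shows only the core idea.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cv_survey.csv")  # columns: yes (0/1), bid, income

probit = smf.probit("yes ~ bid + income", data=df).fit(disp=False)
mean_wtp = -(probit.params["Intercept"]
             + probit.params["income"] * df["income"].mean()) / probit.params["bid"]
print(f"mean WTP estimate: {mean_wtp:.2f}")
```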


2021 ◽  
Vol 14 (11) ◽  
pp. 2519-2532
Author(s):  
Fatemeh Nargesian ◽  
Abolfazl Asudeh ◽  
H. V. Jagadish

Data scientists often develop data sets for analysis by drawing upon the sources of data available to them. A major challenge is to ensure that the data set used for analysis has an appropriate representation of relevant (demographic) groups: that it meets desired distribution requirements. Whether data is collected through some experiment or obtained from some data provider, the data from any single source may not meet the desired distribution requirements, so a union of data from multiple sources is often required. In this paper, we study how to acquire such data in the most cost-effective manner, for typical cost functions observed in practice. We present an optimal solution for binary groups when the underlying distributions of the data sources are known and all sources have equal costs. For the generic case with unequal costs, we design an approximation algorithm that performs well in practice. When the underlying distributions are unknown, we develop an exploration-exploitation strategy with a reward function that captures the cost and the approximation of group distributions in each data source. Besides theoretical analysis, we conduct comprehensive experiments that confirm the effectiveness of our algorithms.
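As a hypothetical sketch of the unknown-distribution setting described, here is a simple epsilon-greedy acquisition loop that buys tuples from the source whose estimated yield-per-cost for the still-needed group is highest; the costs, rates, priors, and reward rule are illustrative, not the authors' algorithm.

```python
# Hypothetical sketch of exploration-exploitation data acquisition: two
# sources with unknown minority-group rates; buy one tuple at a time from
# the source whose estimated yield-per-cost for the needed group is highest,
# with occasional random exploration. Illustrative only.
import random

sources = [
    {"cost": 1.0, "true_rate": 0.10, "hits": 1, "trials": 2},  # weak priors
    {"cost": 2.0, "true_rate": 0.40, "hits": 1, "trials": 2},
]
need, have, spent = 100, 0, 0.0   # want 100 minority-group tuples

while have < need:
    if random.random() < 0.1:                          # explore
        s = random.choice(sources)
    else:                                              # exploit best yield/cost
        s = max(sources, key=lambda s: (s["hits"] / s["trials"]) / s["cost"])
    got = random.random() < s["true_rate"]             # simulate the draw
    s["trials"] += 1
    s["hits"] += got
    have += got
    spent += s["cost"]

print(f"acquired {have} tuples for total cost {spent:.0f}")
```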


2012 ◽  
Vol 19 (1) ◽  
pp. 69-80 ◽  
Author(s):  
S. Zwieback ◽  
K. Scipal ◽  
W. Dorigo ◽  
W. Wagner

Abstract. The validation of geophysical data sets (e.g. derived from models, exploration techniques or remote sensing) presents a formidable challenge, as all products are inherently different and subject to errors. The collocation technique permits the retrieval of the error variances of different data sources without the need to specify one data set as a reference. In addition, calibration constants can be determined to account for biases and different dynamic ranges. The method is frequently applied to the study and comparison of remote sensing, in-situ and modelled data, particularly in hydrology and oceanography. Previous studies have almost exclusively focussed on the validation of three data sources; in this paper it is shown how the technique generalizes to an arbitrary number of data sets. It turns out that only parts of the covariance structure can be resolved by the collocation technique, thus emphasizing the necessity of expert knowledge for the correct validation of geophysical products. Furthermore, the bias and error variance of the estimators are derived, with particular emphasis on the assumptions necessary for establishing those characteristics. Important properties of the method, such as its structural deficiencies, the dependence of its accuracy on the number of measurements, and the impact of violated assumptions, are illustrated by application to simulated data.
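The classic three-data-set case that this paper generalizes can be written in a few lines: with mutually independent errors, each product's error variance follows from the sample covariances. A minimal numpy sketch, assuming three collocated series in a hypothetical input file:

```python
# Minimal triple-collocation sketch for three collocated data sets x, y, z
# with mutually independent errors: each error variance follows from the
# sample covariances (the classic case this paper generalizes to n sets).
import numpy as np

x, y, z = np.loadtxt("collocated.txt", unpack=True)  # hypothetical input
C = np.cov(np.vstack([x, y, z]))

err_var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
err_var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
err_var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
print(err_var_x, err_var_y, err_var_z)
```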

