Pay Peanuts and Get Monkeys? Evidence from Academia

Author(s):  
Glenn Boyle

Abstract: In most countries, academic pay is independent of discipline, thus ignoring differences in labor market opportunities. Using some unique data from a comprehensive research assessment exercise undertaken in one such country -- New Zealand -- this paper examines the impact of discipline-independent pay on research quality. I find that the greater the difference between the value of a discipline's outside opportunities and its New Zealand academic salary, the weaker its research performance in New Zealand universities. The latter apparently get what they pay for: disciplines in which opportunity cost is highest relative to the fixed compensation are least able to recruit high-quality researchers. Paying peanuts attracts mainly monkeys.

2019 ◽  
Author(s):  
Julian Hamann

Far from allowing a governance of universities by the invisible hand of market forces, research performance assessments do not just measure differences in research quality, but themselves produce visible symptoms in the form of a stratification and standardization of disciplines. The article illustrates this with a case study of UK history departments and their assessment by the Research Assessment Exercise (RAE) and the Research Excellence Framework (REF), drawing on data from the three most recent assessments (RAE 2001, RAE 2008, REF 2014). Symptoms of stratification are documented by the distribution of memberships in assessment panels, of research-active staff, and of external research grants. Symptoms of standardization are documented by the publications submitted to the assessments. The main finding is that the RAEs/REF, and the selective allocation of funds they inform, consecrate and reproduce a disciplinary center that, in contrast to the periphery, is well endowed with grants and research staff, decides in panels over the quality standards of the field, and publishes a high number of articles in high-impact journals. This selectivity is oriented toward previous distributions of resources and a standardized notion of “excellence” rather than research performance.


Author(s):  
Greg Tower ◽  
Brenda Ridgewell

The study examines the impact of national research assessment exercises on the accounting and visual arts disciplines. Analysis is also made of the impact of national research quality assessment exercises in New Zealand and the UK (Tertiary Education Commission, 2004; RAE, 2001), as well as the proposed Australian RQF (2005). We find that whilst the definition of research is broad enough to include most of the activities of accounting and finance, and visual arts academia, the actual measures of research performance may be problematic. The need to clearly demonstrate quality peer review is the largest hurdle, especially for visual arts academics with their individualist and independent mindset. Whilst the research performance activity of visual arts and of accounting and finance academia was ranked low in both the UK and NZ, we conclude that the focus on output quality and peer assessment offers a potentially broader and more accurate depiction of activity. Obtaining a balanced, broader assessment of both traditional performance measures, such as the research publications of accounting and finance, and the more creative elements of visual arts, such as exhibitions, is paramount. We also call for more research training for both disciplines to assist them in the recognition of quality research productivity.


2005 ◽  
Vol 2 (9) ◽  
Author(s):  
Brenda Ridgewell ◽  
Greg Tower

The study examines the impact of national research assessment exercises on the visual arts discipline within a university structure. It encompasses issues of evaluation, benchmarking, performance management, performance indicators and explanatory factors. We find that whilst the definition of research is broad enough to include most of the activities of visual arts academia, the actual operationalisation of the measurement models may well exclude many current activities. The need to clearly demonstrate quality peer review is the largest hurdle. Analysis is also made of the impact of a ‘national research quality assessment exercise’ such as the New Zealand and UK initiatives (Tertiary Education Commission, 2004; RAE, 2001). Whilst visual arts research performance activity was ranked low in both countries, we find that its position on the need for quality and peer assessment offers a potentially broader and more accurate depiction of activity. Obtaining a balanced, broader assessment of both traditional performance measures, such as research publications, and the more creative elements of visual arts, such as exhibitions, is paramount. The national assessment exercises show that visual arts academics are struggling to compete with their academic brethren in other disciplines. We argue that national assessment exercises need to engender an acceptable peer-review system to better assess broad research activities in non-traditional areas. We also call for more research presentation training for the visual arts discipline to assist in the recognition of quality research productivity. The implementation of a national research assessment system that focuses more on quality output and outcome measures, instead of input measures such as research income, will engender this debate.


2010 ◽  
Vol 16 (4) ◽  
pp. 228 ◽  
Author(s):  
Mike Calver

Only those truly cryptozoic for all of 2010 could have missed the bustle and concern created by the Australian Commonwealth's Excellence in Research for Australia (ERA) initiative (http://www.arc.gov.au/era/default.htm). In common with other national research assessment exercises such as the RAE (UK) and PBRF (New Zealand), ERA is designed to assess research quality within the Australian higher education sector, identifying and rewarding those institutions and departments producing high-quality research. The linkages between achievement, recognition and reward have the potential to shape the research priorities and agendas of institutions and individual researchers.


Nutrients ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1545 ◽  
Author(s):  
Robert Hamlin ◽  
Benjamin Hamlin

This research investigated the performance of the red, octagonal Vienna Convention traffic ‘STOP’ sign as a front of pack (FoP) warning nutritional label. While the Vienna Convention traffic light system is an established FoP label, the potential of the ‘STOP’ sign in this role has not been investigated. The performance of the ‘STOP’ label was compared with that of a single star (low nutritional value) Australasian Health Star Rating (HSR) label using a fractionally replicated Latin square design. The labels were presented on choice dyads of cold breakfast cereal packets. The sample of 240 adolescents aged 16–18 was drawn from a secondary school in the South Island of New Zealand. A large and significant main effect was observed at the p < 0.01 level for the difference between the ‘STOP’ sign and the control condition (no nutritional FoP label), and at p < 0.05 for the difference between the HSR and the ‘STOP’ label. There was no significant difference between the HSR FoP and the control condition. A significant non-additivity (interaction) (p < 0.01) was also observed via the fractional replication. The results indicate that the Vienna Convention ‘STOP’ sign is worthy of further research with regard to its potential as an FoP nutritional label.


2020 ◽  
Vol 33 (6) ◽  
pp. 1219-1246 ◽  
Author(s):  
Bikram Chatterjee ◽  
Carolyn J. Cordery ◽  
Ivo De Loo ◽  
Hugo Letiche

Purpose: In this paper, we concentrate on the use of research assessment (RA) systems in universities in New Zealand (NZ) and the United Kingdom (UK). We focus primarily on the PBRF and the REF, and explore differences between these systems at individual and systemic levels. We ask in what way(s) the systemic differences between the PBRF and the REF actually make a difference in how the two RA systems are experienced by academic staff today.
Design/methodology/approach: This research is exploratory and draws on 19 interviews in which accounting researchers from both countries offer reflections on their careers and how RA (systems) have influenced these careers. The stories they tell are classified by regarding RA in universities as a manifestation of the spectacle society, following Debord (1992) and Flyverbom and Reinecke (2017).
Findings: Both UK and New Zealand academics concur that their research activities and views on research are very much shaped by journal rankings and citations. Among UK academics, there seems to be a greater critical attitude towards the benefits and drawbacks of the REF, which may be related to the history of the REF in their country. Relatively speaking, individualism in New Zealand seems to have grown after the introduction of the PBRF, with little active pushback against the system. Cultural aspects may partially explain this outcome. Academics in both countries lament the lack of focus on practitioner issues that the increased significance of RA seems to have evoked.
Research limitations/implications: This research is context-specific and may have limited applicability to other situations, academics or countries.
Practical implications: RA and RA systems seem to be here to stay. However, as academics we can, and ought to, take responsibility to try to ensure that these systems reflect the future of accounting (research) we wish to create. It is certainly not mainly or solely up to upper management officials to set this in motion, as has occasionally been claimed in previous literature. Some of the academics who participated in this research actively sought to bring about a different future.
Originality/value: This research provides a unique contextual analysis of accounting academics' perspectives on and reactions to RA and RA systems, and the impact these have had on their careers across two countries. In addition, the paper offers valuable critical reflections on the application of Debord's (1992) notion of the spectacle society in future accounting studies. We find more mixed and nuanced views on RA in academia than many previous studies have shown.


2021 ◽  
Author(s):  
Amber Tyson

As academia increasingly turns to bibliometric tools to assess research impact, the question of which indicator provides the best measure of research quality is highly debated. Much emphasis has been placed on the value of the h-index, a bibliometric tool proposed in 2005 which has quickly found favour in the scientific community. One of the first applications of the h-index was carried out by Kelly and Jennions (2006), who found that a number of variables could influence the h-index scores of ecologists and evolutionary biologists. To test these findings, this study calculated the h-index scores of New Zealand and Australian researchers teaching in the field of library and information science (LIS). Publication and citation counts were generated using the Web of Science (WoS), and a number of limitations in using the database to calculate h-index scores were identified. We then considered the effect that gender, country of residence, institutional affiliation, and scientific age had on the h-index scores of LIS researchers in New Zealand and Australia. The study found a positive relationship between scientific age and h-index scores, indicating that the length of a scientist's career should be considered when using the h-index. However, analysis also showed that gender, country of residence, and institutional affiliation had no influence on h-index scores.
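The h-index at the centre of this study has a simple operational definition: a researcher has index h if h of their publications have at least h citations each. A minimal sketch of that calculation, independent of any particular citation database such as Web of Science (the function name and the example citation counts here are illustrative, not drawn from the study's data):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have `rank`+ citations
        else:
            break  # counts only decrease from here, so h is final
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
print(h_index([]))                # 0
```

In the first example, four papers have at least four citations each, but there are not five papers with five or more, so h = 4. The last example also illustrates why the study's finding on scientific age is intuitive: the h-index can only grow as publications and citations accumulate over a career.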


2002 ◽  
Vol 24 (2) ◽  
pp. 31 ◽  
Author(s):  
Mike Withnall

Richard Reece says in his Editorial (see page 3) that the Research Assessment Exercise (RAE) has fulfilled its purpose of driving up research quality, and that there is no need for future exercises. The Director of the Science Policy Research Unit at Sussex University has reached a similar conclusion: after three rounds of the RAE there is little scope for further gains in efficiency within university departments and the cost of the exercise exceeds the benefits. But there will continue to be a need for some form of assessment of research. The Funding Councils are accountable for the quality of the work that they support. A lack of periodic assessment could lead to complacency and ossification, and universities established since 1992 may feel that they have not yet had sufficient time to develop top-ranking departments.

