Sustainability university rankings: a comparative analysis of the UI GreenMetric and the Times Higher Education World University Rankings

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Barbara Galleli ◽  
Noah Emanuel Brito Teles ◽  
Joyce Aparecida Ramos dos Santos ◽  
Mateus Santos Freitas-Martins ◽  
Flavio Hourneaux Junior

Purpose This study aims to answer the research question: How can the structure of global university sustainability rankings be evaluated according to the Berlin Principles (BP) framework? Design/methodology/approach The authors investigated two global university sustainability rankings, the UI GreenMetric World University Ranking (WUR) and the Times Higher Education World University Rankings (THE-WUR). The authors performed content analysis of their evaluation criteria and assessed both rankings using the BP framework. Findings Results show that there is still a gap to be filled regarding the specificity of global university sustainability rankings. Although the THE-WUR performed better in this research, several items need improvement, especially the methodological procedures. There are structural differences, limitations and points for improvement in both rankings. Moreover, it may not be possible to have a single, most appropriate ranking, but rather one that is more suitable for a given contextual reality. Practical implications This study can help university managers deliberate on the most appropriate ranking for their institutions and better prepare their higher education institutions for participating in sustainability-related rankings. It also suggests possible improvements to the rankings’ criteria. Social implications The authors shed light on the challenges of improving the existing university sustainability rankings, besides generating insights for developing new ones. From a provocative but constructive perspective, the authors question the rankings’ bases and their understandings of what it means to be “the best university” regarding sustainability. Originality/value This is the first study to provide an in-depth analysis and comparison of two of the most important global university sustainability rankings.

2020 ◽  
Vol 28 (1) ◽  
pp. 78-88 ◽  
Author(s):  
Maruša Hauptman Komotar

Purpose This paper aims to investigate how global university rankings interact with quality and quality assurance in higher education along two lines of investigation: from the perspective of their relationship with the concept of quality (assurance), and from that of the development of quality assurance policies in higher education, with particular emphasis on accreditation as the prevalent quality assurance approach. Design/methodology/approach The paper first conceptualises quality and quality assurance in higher education and critically examines the methodological construction of the four selected world university rankings and their references to “quality”. On this basis, it answers two “how” questions: How is the concept of quality (assurance) in higher education perceived by world university rankings, and how do they interact with quality assurance and accreditation policies in higher education? Answers are provided through the analysis of different documentary sources, such as academic literature, glossaries, international studies, institutional strategies and other documents, with a particular focus on the official websites of international ranking systems and individual higher education institutions, media announcements, and so on. Findings The paper argues that, given their quantitative orientation, it is problematic to perceive world university rankings as a means of assessing or assuring institutional quality. Like (international) accreditations, they may foster vertical differentiation of higher education systems and institutions, and because of their predominantly accountability-oriented purpose, they cannot encourage improvements in the quality of higher education institutions. Practical implications Research results are beneficial to different higher education stakeholders (e.g. policymakers, institutional leadership, academics and students), as they offer a comprehensive view of rankings’ ability to assess, assure or improve quality in higher education. Originality/value The existing research focuses principally either on interactions of global university rankings with the concept of quality or with processes of quality assurance in higher education. The comprehensive and detailed analysis of their relationship with both concepts thus adds value to the prevailing scholarly debates.


Subject The state of higher education and employment. Significance The Times Higher Education World University Rankings 2015-16, published in January, included for the first time a list of the top fifteen universities in the Arab world. The publication coincided with the listing of the first QS World University Rankings for the Arab Region. Most Middle East and North Africa (MENA) governments face high youth unemployment, and quality education is viewed as a crucial step towards easing it. Impacts Bahrain's financial crisis is already fuelling concerns that standards at the University of Bahrain are dropping. Yet even in the richer states, such as the UAE and Saudi Arabia, cheap oil is likely to cut funding for education. Meanwhile, Gulf employment prospects are shrinking, as the private sector is small and cheap oil is restricting government jobs and spending.


2020 ◽  
Vol 1 (3) ◽  
pp. 1109-1135
Author(s):  
Friso Selten ◽  
Cameron Neylon ◽  
Chun-Kai Huang ◽  
Paul Groth

Pressured by globalization and the demand for public organizations to be accountable, efficient, and transparent, university rankings have become an important tool for assessing the quality of higher education institutions. It is therefore important to assess exactly what these rankings measure. Here, the three major global university rankings—the Academic Ranking of World Universities, the Times Higher Education ranking, and the Quacquarelli Symonds World University Rankings—are studied. After a description of the ranking methodologies, it is shown that university rankings are stable over time but that there is variation between the three rankings. Furthermore, using principal component analysis and exploratory factor analysis, we demonstrate that the variables used to construct the rankings primarily measure two underlying factors: a university’s reputation and its research performance. By correlating these factors and plotting regional aggregates of universities on the two factors, differences between the rankings are made visible. Lastly, we discuss how the results of these analyses can be viewed in light of often-voiced critiques of the ranking process. This indicates that the variables used by the rankings may not capture the concepts they claim to measure. The study provides evidence of the ambiguous nature of university rankings’ quantification of university performance.
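The dimension-reduction step described in this abstract can be sketched as follows. This is a minimal illustration only: the indicator values are invented, and scikit-learn's PCA stands in for the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator scores for five universities; columns loosely mirror
# reputation-type and research-type ranking variables.
indicators = np.array([
    [95, 90, 88, 92],   # reputation survey, employer survey, citations, papers
    [60, 55, 70, 65],
    [80, 85, 60, 58],
    [40, 35, 50, 45],
    [70, 72, 75, 78],
])

# Standardise the indicators, then project onto two principal components,
# analogous to extracting "reputation" and "research performance" factors.
scaled = StandardScaler().fit_transform(indicators)
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)

print(pca.explained_variance_ratio_)  # variance share captured per component
print(scores)                         # one (factor 1, factor 2) pair per university
```

Plotting the per-university `scores` (or aggregating them by region) would reproduce the kind of two-factor comparison the abstract describes.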


2020 ◽  
pp. 1-25
Author(s):  
Lokman I. Meho

This study uses the checklist method, survey studies, and the Highly Cited Researchers list to identify 100 highly prestigious international academic awards. The study then examines the impact of using these awards in the Academic Ranking of World Universities (the Shanghai Ranking), the QS World University Rankings, and the Times Higher Education World University Rankings. Results show that awards considerably change the rankings and scores of top universities, especially those that receive a large number of awards and those that receive few or none. The rankings of all other universities, which have relatively similar numbers of awards, remain largely unchanged. If given 20% weight, as in this study, awards help ranking systems set universities further apart from one another, making it easier for users to detect differences in levels of performance. Adding awards to ranking systems benefits United States universities the most, as they won 58% of the 1,451 awards given in 2010–2019. Developers of ranking systems should consider adding awards as a variable in assessing the performance of universities, and users of university rankings should pay attention to both ranking positions and scores.
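The reweighting effect described here can be sketched with a toy calculation. The university names, baseline scores, and award counts below are hypothetical, and the 80/20 blend is an illustration of the idea rather than the study's exact method:

```python
# Hypothetical baseline composite scores (0-100) and award counts.
universities = {
    "Univ A": {"base": 92.0, "awards": 40},
    "Univ B": {"base": 90.0, "awards": 2},
    "Univ C": {"base": 70.0, "awards": 0},
}

max_awards = max(u["awards"] for u in universities.values())

def blended_score(base, awards, weight=0.20):
    """Blend 80% of the original score with 20% of a normalised award count."""
    award_score = 100.0 * awards / max_awards if max_awards else 0.0
    return (1 - weight) * base + weight * award_score

for name, u in sorted(universities.items(),
                      key=lambda kv: -blended_score(kv[1]["base"], kv[1]["awards"])):
    print(name, round(blended_score(u["base"], u["awards"]), 1))
```

With these toy numbers, the gap between the award-rich Univ A and the award-poor Univ B widens from 2 points to roughly 20, illustrating how an awards component can set top universities further apart.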


2014 ◽  
Vol 28 (2) ◽  
pp. 230-245 ◽  
Author(s):  
Philip Hallinger

Purpose – The region's universities are “riding a tiger” of university rankings in East Asian higher education, in a race to gain a place in the list of the world's top 100 universities. While this race affects universities throughout the world, it takes on particular importance in East Asia due to the stage of university development and the needs of regional societies. The purposes of this paper are to: examine the emergent global emphasis on world university rankings as a driver of change in higher education; discuss how the world university rankings are affecting East Asian universities; assess the consequences for higher education in the region; and explore options for leading universities in a more meaningful direction in East Asia. Design/methodology/approach – This paper examines research and commentary on the impact of world university rankings on universities in East Asia. Findings – This paper proposes that the world university rankings have, over a relatively short period, had unanticipated but potentially insidious effects on higher education in East Asia. The “tiger” is carrying most East Asian universities towards goals that may not reflect the aspirations of their societies or of the people who work and study in them. Yet climbing off the “tiger” often feels just as risky as hanging on to its back. Instead of laying blame on any one party, the paper suggests that the problem is systemic in nature: multiple parts of the system need to change in order to achieve effects in its distal parts (e.g. faculty, students, and society). Only leadership can bring about this type of change. The scholarly community must gain some degree of input into, and monitoring of, the rules of the rankings game. Only by joining hands can university leaders in the region change the “Ranking Game” to reflect the reality and needs of university development and social contribution in East Asia. Only by cooperating can the region's university leaders create reciprocal pressure on other parts of the system. In response to systemic problems, “I” may be powerless, but “we” are not. Originality/value – The originality and value of this paper lie in its aim to elevate underlying dissatisfaction with the rankings into a broader and more explicit debate over the direction in which East Asian universities are riding on the back of the tiger.


2021 ◽  
Author(s):  
Elizabeth Gadd ◽  
Richard Holmes ◽  
Justin Shearer

Describes a method to provide an independent, community-sourced set of best practice criteria with which to assess global university rankings and to identify the extent to which a sample of six rankings, Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities, met those criteria. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of all criteria, with greatest strengths in the area of transparency and greatest weaknesses in the area of measuring what matters to the communities they were ranking. The ranking that most closely met the criteria was CWTS Leiden. Scoring poorly across all the criteria were the THE WUR and US News rankings. Suggestions for developing the ranker rating method are described.


Author(s):  
Nattapong Techarattanased ◽  
Pleumjai Sinarkorn

Many universities have drawn attention to world university rankings, which reflect the international competition among universities and represent their relative statuses. This study does not radically contradict all types of global university rankings but calls for an examination of the effects of their indicators on the final ranking of universities. It investigates the contribution of indicators to the ranking of universities in world university ranking systems, including the Academic Ranking of World Universities (ARWU), the Times Higher Education (THE) ranking, and the QS World University Rankings. Results showed that in the ARWU system, three indicators (faculty members who won Nobel Prizes and Fields Medals, papers published in Nature and Science, and papers published in journals indexed in the Science Citation Index and Social Science Citation Index) predicted the ranking of universities. For the QS and THE systems, the more powerful contributors to the ranking of universities were expert-based reputation indicators.
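The indicator-contribution analysis described above can be sketched as a regression of overall rank on indicator scores. The data below are invented, and an ordinary least-squares fit from scikit-learn is used as a stand-in for the study's actual modelling:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical standardised indicator scores for five universities; columns
# loosely mirror awards, Nature/Science papers, and indexed-paper counts.
X = np.array([
    [ 1.9,  1.8,  1.7],
    [ 1.0,  1.2,  1.1],
    [ 0.2,  0.1,  0.3],
    [-0.8, -0.9, -0.7],
    [-1.5, -1.4, -1.6],
])
rank = np.array([1, 2, 3, 4, 5])  # overall ranking position

# Fit rank on the indicators; the coefficients indicate how strongly each
# indicator contributes to the predicted ranking position.
model = LinearRegression().fit(X, rank)
print(model.coef_)           # per-indicator contribution
print(model.score(X, rank))  # R^2 of the fit
```

A high R² with large coefficients on particular columns would correspond to the abstract's finding that a small set of indicators predicts the final ranking.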

