Publication Specific Impact of Articles Published by Rheological Journals

2005 ◽  
Vol 15 (6) ◽  
pp. 406-409 ◽  
Author(s):  
Martin Kröger

The Impact Factor of a journal is a quantitative way of assessing its worth and relevance to the academic community it serves. Many librarians see the ratio between Impact Factor and price as a suitable yardstick by which to measure the value of their collections. In addition, the research assessment exercises which, in many countries, are now being carried out on a more formal basis mean that authors submitting original research must publish it in a journal with the highest perceived worth possible in order to secure future funding, job promotions and peer recognition. It has been suspected [T. Opthof, Cardiovasc. Res. 33 (1997) 1; J. Stegmann, Nature 390 (1997) 550], however, that a particular author's impact is not much related to the journals in which she/he publishes. As will be demonstrated in this letter, the impact of articles published in rheological journals is largely influenced by criteria such as article length, number of authors, and number of cited references.

2016 ◽  
Vol 42 (4) ◽  
pp. 324-337 ◽  
Author(s):  
Chia-Lin Chang ◽  
Michael McAleer

Purpose – Both journal self-citations and exchanged citations have the effect of increasing a journal's impact factor, which may be deceptive. The purpose of this paper is to analyse academic journal quality and research impact using quality-weighted citations vs total citations, based on the widely used Thomson Reuters ISI Web of Science citations database (ISI). A new Index of Citations Quality (ICQ) is presented, based on quality-weighted citations.
Design/methodology/approach – The new index is used to analyse the leading 500 journals in both the sciences and social sciences, as well as finance and accounting, using quantifiable Research Assessment Measures (RAMs) that are based on alternative transformations of citations.
Findings – It is shown that ICQ is a useful addition to the 2-year impact factor (2YIF) and other well-known RAMs for evaluating the impact, quality and ranking of journals, as it contains information that has very low correlations with the information contained in the well-known RAMs for both the sciences and social sciences, and finance and accounting.
Practical implications – Journals can, and do, inflate the number of citations through self-citation practices, which may be coercive. Another method for distorting journal impact is for a set of journals to agree to cite each other, that is, to exchange citations. This may be less coercive than self-citation, but it is nonetheless unprofessional and distortionary.
Social implications – The premise underlying the use of citations data is that higher-quality journals generally have a higher number of citations. The impact of citations can be distorted in a number of ways, both consciously and unconsciously.
Originality/value – Regardless of whether self-citations arise through collusive practices, the increase in citations will affect both the 2YIF and the 5-year impact factor (5YIF), though not the Eigenfactor or Article Influence. This leads to an ICQ in which a higher value would generally be preferred to a lower one. Unlike 5YIF, which is increased by journal self-citations and exchanged citations, and Eigenfactor and Article Influence, both of which are affected by quality-weighted exchanged citations, ICQ is less affected by exchanged citations. In the absence of empirical evidence to the contrary, 5YIF and Article Influence are assumed to be affected similarly by exchanged citations.
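The exact construction of the ICQ is given in the paper itself; the following minimal Python sketch (with hypothetical journal tiers, weights and counts, not the authors' formula or data) illustrates the general idea of quality-weighted citations and how such weighting can reverse a ranking based on raw counts:

```python
# Toy illustration of quality-weighted citations (hypothetical weights;
# not the ICQ formula used by Chang and McAleer).
def weighted_citations(citing_counts, journal_weights):
    """Sum citations, weighting each by the citing journal's quality score."""
    return sum(n * journal_weights.get(j, 0.0) for j, n in citing_counts.items())

# Hypothetical data: journal A receives many citations from a low-weight
# journal (e.g. via exchanged citations), journal B fewer but higher-quality.
weights = {"low_tier": 0.1, "mid_tier": 0.5, "top_tier": 1.0}
a = {"low_tier": 90, "mid_tier": 10}   # 100 raw citations
b = {"mid_tier": 30, "top_tier": 30}   # 60 raw citations

raw_a, raw_b = sum(a.values()), sum(b.values())           # A ranks above B
wq_a = weighted_citations(a, weights)
wq_b = weighted_citations(b, weights)                     # B ranks above A
```

Here exchanged or coerced citations from low-weight journals inflate the raw count but contribute little to the weighted total, which is exactly the kind of distortion a quality-weighted index is designed to dampen.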


2012 ◽  
Vol 91 (4) ◽  
pp. 329-333 ◽  
Author(s):  
A. Sillet ◽  
S. Katsahian ◽  
H. Rangé ◽  
S. Czernichow ◽  
P. Bouchard

We sought to compare the Eigenfactor Score™ journal rank with the journal Impact Factor over five years, and to identify variables that may influence the ranking differences between the two metrics. Datasets were retrieved from the Thomson Reuters® and Eigenfactor Score™ websites. Dentistry was identified as the most specific medical specialty. Variables were retrieved from the selected journals for inclusion in a linear regression model. Among the 46 dental journals included in the analysis, striking variations in rank were observed according to the metric used. The Bland-Altman plot showed poor agreement between the metrics. The multivariate analysis indicates that the number of original research articles, the number of reviews, self-citations, and citing time may explain the differences between ranks. The Eigenfactor Score™ seems to capture the prestige of a journal better than the Impact Factor. In medicine, bibliometric indicators should focus not only on the overall medical field but also on specialized disciplinary fields. Distinct measures are needed to better describe the scientific impact of specialized medical publications.


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
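The authors' actual simulation model is described in their paper; the toy Python sketch below (an assumed setup, not theirs) shows the trade-off in miniature: an article's own citation count is a noisy reading of its latent value, while the impact factor averages that noise over a whole journal.

```python
import random

# Assumed toy model: each article has a latent "value" drawn from a skewed
# (exponential) distribution whose mean is the journal's quality; the
# article's citation count is a noisy reading of that value, and the journal
# "impact factor" is the mean citation count over the journal's articles.
def simulate(noise, n_journals=50, n_articles=100, seed=0):
    rng = random.Random(seed)
    true_vals, cites, ifs = [], [], []
    for _ in range(n_journals):
        quality = rng.uniform(1, 10)
        journal_cites = []
        for _ in range(n_articles):
            value = rng.expovariate(1 / quality)       # skewed within journal
            c = max(0.0, value + rng.gauss(0, noise))  # noisy citation count
            true_vals.append(value)
            journal_cites.append(c)
            cites.append(c)
        ifs.extend([sum(journal_cites) / n_articles] * n_articles)
    return true_vals, cites, ifs

def corr(x, y):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# With no noise, an article's citations track its value perfectly, and the
# journal-level average (the impact factor) is the weaker indicator.
vals, art_cites, journal_ifs = simulate(noise=0.0)
```

Which indicator is more accurate thus depends on how noisy individual citation counts are relative to the spread of article values within journals, which is the "under certain conditions" point the abstract makes.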


2018 ◽  
Vol 72 (1_suppl) ◽  
pp. 27-33
Author(s):  
Peter R. Griffiths ◽  
Michael W. Blades

In 1955, Eugene Garfield introduced the concept of a journal impact factor as a metric for measuring the importance or influence of scholarly journals. These days a journal's fate is often tied strongly to the impact factor. It is a topic that comes up regularly and is a source of concern for the journal, given the slavish focus on metrics in the publishing world and in the academic community. From our perspective, the impact factor is shown to be a poor metric for illustrating the long-term significance of papers published in Applied Spectroscopy. The five-year impact factor is a better indicator for the short-term impact of the papers published in this journal, while the cited half-life and the citing half-life both provide a better measure of the long-term impact of papers published in Applied Spectroscopy. Of the most highly cited papers published in this journal, those that describe innovative data processing techniques have been cited more than papers that describe specific applications of a given technique such as infrared (IR), Raman, or laser-induced breakdown spectroscopy (LIBS).
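For reference, Garfield's two-year impact factor has a standard definition: the citations a journal receives in year Y to items it published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch with hypothetical counts (not Applied Spectroscopy data):

```python
# Sketch of the standard two-year impact factor calculation
# (illustrative numbers only).
def two_year_impact_factor(citations_in_year, citable_items, year):
    """IF for `year` = citations received in `year` to items from the two
    preceding years, divided by the citable items in those two years."""
    cites = citations_in_year[year][year - 1] + citations_in_year[year][year - 2]
    items = citable_items[year - 1] + citable_items[year - 2]
    return cites / items

# Hypothetical counts: 150 + 210 citations to 120 + 130 citable items.
citations = {2018: {2017: 150, 2016: 210}}
items = {2017: 120, 2016: 130}
impact_factor = two_year_impact_factor(citations, items, 2018)  # 360/250
```

The cited half-life mentioned in the abstract is a different statistic (the median age of the citations a journal receives), which is why it captures long-term significance that this two-year window misses.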



2011 ◽  
Vol 6 (1) ◽  
Author(s):  
Julia Meek ◽  
Marie Garnett ◽  
John Grattan

Universities may invest millions of pounds in the provision of computer hardware without ever seriously considering the educational results such investment may deliver. Equally, academics may be committed to the use of IT in teaching and learning because it is expected of them (cf. Dearing, 1997), and rarely give serious consideration to the impact which the effective use of IT may have on student learning (Laurillard, 1993). The use of the WWW to deliver material in support of university teaching is still in its infancy, yet already two distinct approaches to its use can be seen. The first approach uses the WWW passively to deliver existing lecture notes in a technologically impressive and, perhaps more importantly, highly convenient fashion. The second approach attempts to shape the material delivered to maximize the teaching and learning potential of the WWW and to develop students' skills in the use of the medium. But which approach works more effectively? And how does one balance the needs of an academic community pressured by the Research Assessment Exercise with the need to develop effective teaching and learning strategies which maximize the potential of IT for the academic community, for the students and for their future employers?
DOI: 10.1080/0968776980060109


2021 ◽  
Vol 3 ◽  
pp. 10
Author(s):  
Li Siang Wong ◽  
Bogna A Drozdowska ◽  
Daniel Doherty ◽  
Terence J Quinn

Background: The ‘impact’ of a scientific paper is a measure of influence in its field. In recent years, traditional, citation-based measures of impact have been complemented by Altmetrics, which quantify outputs including social media footprint. As authors and research institutions seek to increase their visibility both within and beyond the academic community, it is important to identify and compare the determinants of traditional and alternative metrics. We explored this using Stroke – a leading journal in its field. Methods: We described the impact of original research papers published in Stroke (2015-2016) using citation count and Altmetric Attention Score (Altmetrics). Using these two metrics as our outcomes, we assessed univariable and multivariable associations with 21 plausibly relevant publication features. We set the significance threshold at p<0.01. Results: Across 911 papers published in Stroke, there was an average citation count of 21.60 (±17.40) and Altmetric score of 17.99 (±47.37). The two impact measures were weakly correlated (r=0.15, p<0.001). Citations were independently associated with five publication features at a significance level of p<0.01: Time Since Publication (beta=0.87), Number of Authors (beta=0.22), Publication Type (beta=6.76), Number of Previous Publications (beta=0.01) and Editorial (beta=9.45). For Altmetrics, we observed a trend for independent associations with: Time Since Publication (beta=-0.25, p=0.02), Number of References (beta=0.32, p=0.02) and Country of Affiliation (beta=8.59, p=0.01). Our models explained 21% and 3% of variance in citations and Altmetrics, respectively. Conclusion: Papers published in Stroke have impact. Certain aspects of content and format may contribute to impact, but these differ for traditional measures and Altmetrics, and explain only a very modest proportion of variance in the latter. Citation counts and Altmetrics seem to represent different constructs and, therefore, should be used in conjunction to allow a more comprehensive assessment of publication impact.

