Use of the journal impact factor for assessing individual articles: Statistically flawed or not?

F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects against this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
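The simulation result described above can be sketched in a few lines. This is a hedged illustration, not the authors' actual model: the latent article values, journal effects, and noise levels below are all invented parameters.

```python
import random
import statistics

random.seed(0)

# Toy model (illustrative parameters, not the authors' simulation):
# each article has a latent "value"; its citation count is a noisy
# signal of that value; a journal's impact factor is the mean citation
# count of its articles.
def simulate(noise, n_journals=200, n_articles=50):
    err_citations, err_impact_factor = [], []
    for _ in range(n_journals):
        journal_quality = random.gauss(5, 1)
        values = [random.gauss(journal_quality, 1) for _ in range(n_articles)]
        citations = [v + random.gauss(0, noise) for v in values]
        impact_factor = statistics.mean(citations)
        for v, c in zip(values, citations):
            err_citations.append((c - v) ** 2)
            err_impact_factor.append((impact_factor - v) ** 2)
    # mean squared error of each indicator as an estimate of article value
    return statistics.mean(err_citations), statistics.mean(err_impact_factor)

low_noise = simulate(noise=0.1)   # citations closely track article value
high_noise = simulate(noise=5.0)  # individual counts are very noisy
```

With little citation noise, an article's own citation count estimates its value better than the journal average; with heavy noise, the journal-level mean becomes the more accurate indicator, mirroring the "under certain conditions" result above.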

F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects against this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that instead actions should be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this should be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2017 ◽  
Vol 28 (22) ◽  
pp. 2941-2944 ◽  
Author(s):  
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that the Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or for hiring, promotion, or funding decisions. Since then, heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers and as a means to assess an individual’s or institution’s accomplishments has led to changes in policy and to the design and application of best practices to more accurately assess the quality and impact of our research. Herein I summarize the considerable progress made and the remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.


2020 ◽  
Author(s):  
John Antonakis ◽  
Nicolas Bastardoz ◽  
Philippe Jacquart

The impact factor has been criticized on several fronts, including that the distribution of citations to journal articles is heavily skewed. We nuance these critiques and show that the number of citations an article receives is significantly predicted by journal impact factor. Thus, impact factor can be used as a reasonably good proxy of article quality.


2019 ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

BACKGROUND The most common way to assess the impact of an article is by its number of citations. However, the number of citations does not precisely reflect whether the message of the paper is reaching a wider audience. Currently, social media is used to disseminate the contents of scientific articles. To measure this type of impact, a new tool named Altmetric was created. Altmetric aims to quantify the impact of each article through online media. OBJECTIVE This overview of methodological reviews aims to describe the associations between publishing-journal and published-article variables and Altmetric scores. METHODS Searches of MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library, covering publications from inception until July 2018, were conducted. We extracted data on the published trial and the publishing journal associated with Altmetric scores. RESULTS A total of 11 studies were considered eligible, summarizing a total of 565,352 articles. The variables citation counts, journal impact factor, access counts (considered as the sum of HTML views and PDF downloads), publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. CONCLUSIONS Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables such as access counts, publication in open access journals, and the use of press releases are also likely to influence online media attention.


eLife ◽  
2013 ◽  
Vol 2 ◽  
Author(s):  
Randy Schekman ◽  
Mark Patterson

It is time for the research community to rethink how the outputs of scientific research are evaluated and, as the San Francisco Declaration on Research Assessment makes clear, this should involve replacing the journal impact factor with a broad range of more meaningful approaches.


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High impact journals (Q1) publish only slightly more papers than expected, which differs from other areas. The papers published in Q1 journals have higher average citations and lower uncitedness rates compared to other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper due to the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased considerably, from 0.908 to 1.638, which is explained by the field’s growth but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, the use of impact factors and related indicators is a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.


2016 ◽  
Vol 42 (4) ◽  
pp. 324-337 ◽  
Author(s):  
Chia-Lin Chang ◽  
Michael McAleer

Purpose – Both journal self-citations and exchanged citations have the effect of increasing a journal’s impact factor, which may be deceptive. The purpose of this paper is to analyse academic journal quality and research impact using quality-weighted citations vs total citations, based on the widely used Thomson Reuters ISI Web of Science citations database (ISI). A new Index of Citations Quality (ICQ) is presented, based on quality-weighted citations. Design/methodology/approach – The new index is used to analyse the leading 500 journals in both the sciences and social sciences, as well as finance and accounting, using quantifiable Research Assessment Measures (RAMs) that are based on alternative transformations of citations. Findings – It is shown that ICQ is a useful additional measure to 2-year impact factor (2YIF) and other well-known RAMs for the purpose of evaluating the impact and quality, as well as ranking, of journals as it contains information that has very low correlations with the information contained in the well-known RAMs for both the sciences and social sciences, and finance and accounting. Practical implications – Journals can, and do, inflate the number of citations through self-citation practices, which may be coercive. Another method for distorting journal impact is through a set of journals agreeing to cite each other, that is, by exchanging citations. This may be less coercive than self-citations, but is nonetheless unprofessional and distortionary. Social implications – The premise underlying the use of citations data is that higher quality journals generally have a higher number of citations. The impact of citations can be distorted in a number of ways, both consciously and unconsciously. Originality/value – Regardless of whether self-citations arise through collusive practices, the increase in citations will affect both 2YIF and 5-year impact factor (5YIF), though not Eigenfactor and Article Influence. 
This leads to an ICQ, where a higher ICQ would generally be preferred to a lower one. Unlike 5YIF, which is increased by journal self-citations and exchanged citations, and Eigenfactor and Article Influence (AI), both of which are affected by quality-weighted exchanged citations, ICQ will be less affected by exchanged citations. In the absence of any empirical evidence to the contrary, 5YIF and AI are assumed to be affected similarly by exchanged citations.
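The idea behind quality-weighted citations can be sketched as follows. The weighting scheme and function below are illustrative assumptions, not the paper's exact ICQ formula: each incoming citation is weighted by a quality score of the citing journal, so self-citations and exchanged citations from low-quality outlets add little to the weighted total.

```python
# Illustrative sketch, not the paper's exact ICQ formula.
def quality_weighted_citations(citing_counts, journal_quality):
    # citing_counts: {journal: citations received from that journal}
    # journal_quality: {journal: weight in [0, 1]} (hypothetical weights)
    return sum(n * journal_quality.get(j, 0.0)
               for j, n in citing_counts.items())

citing = {"JournalA": 10, "JournalB": 40, "self": 50}
weights = {"JournalA": 1.0, "JournalB": 0.5, "self": 0.1}

raw_total = sum(citing.values())                        # 100 raw citations
weighted = quality_weighted_citations(citing, weights)  # 35.0
```

Here the 50 self-citations that inflate a raw count (and hence 2YIF/5YIF) contribute only 5 weighted citations, which is the distortion-dampening behavior the abstract attributes to quality weighting.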


2019 ◽  
Vol 40 (10) ◽  
pp. 1136-1142 ◽  
Author(s):  
Malke Asaad ◽  
Austin Paul Kallarackal ◽  
Jesse Meaike ◽  
Aashish Rajesh ◽  
Rafael U de Azevedo ◽  
...  

Background Citation skew refers to the unequal distribution of citations to articles published in a particular journal. Objectives We aimed to assess whether citation skew exists within plastic surgery journals and to determine whether the journal impact factor (JIF) is an accurate indicator of the citation rates of individual articles. Methods We used Journal Citation Reports to identify all journals within the field of plastic and reconstructive surgery. The number of citations in 2018 for all individual articles published in 2016 and 2017 was abstracted. Results Thirty-three plastic surgery journals were identified, publishing 9823 articles. The citation distribution showed right skew, with the majority of articles having either 0 or 1 citation (40% and 25%, respectively). A total of 3374 (34%) articles achieved citation rates similar to or higher than their journal’s JIF, whereas 66% of articles failed to achieve a citation rate equal to the JIF. Review articles achieved higher citation rates (median, 2) than original articles (median, 1) (P < 0.0001). Overall, 50% of articles contributed 93.7% of citations, and 12.6% of articles contributed 50% of citations. A weak positive correlation was found between the number of citations and the JIF (r = 0.327, P < 0.0001). Conclusions Citation skew exists within plastic surgery journals as in other fields of biomedical science. Most articles did not achieve citation rates equal to the JIF, with a small percentage of articles having a disproportionate influence on citations and the JIF. Therefore, the JIF should not be used to assess the quality and impact of individual scientific work.
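Skew statistics of this kind are straightforward to compute. The citation counts below are invented for illustration, not the plastic surgery data.

```python
# Toy citation counts for 100 articles (invented, right-skewed on purpose):
# many articles with 0-1 citations, a handful of heavily cited ones.
def skew_stats(citations):
    mean = sum(citations) / len(citations)  # a stand-in for the JIF
    share_at_or_above = sum(c >= mean for c in citations) / len(citations)
    total, running = sum(citations), 0
    # smallest top-cited group accounting for half of all citations
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        running += c
        if running >= total / 2:
            return mean, share_at_or_above, rank / len(citations)

counts = [0] * 40 + [1] * 25 + [2] * 15 + [5] * 15 + [30] * 5
mean, share_at_mean, share_for_half = skew_stats(counts)
```

On this toy journal only 20% of articles reach the mean-based "JIF", and 5% of articles supply half of all citations, the same qualitative pattern the abstract reports.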


2019 ◽  
Vol 124 (12) ◽  
pp. 1718-1724 ◽  
Author(s):  
Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the category Cardiac & Cardiovascular Systems of the Web of Science is extremely skewed. The skewness is to the right, meaning that there is a long tail of papers that are cited much more frequently than the other papers in the same journal. The consequence is a large difference between the mean and the median citation of the papers published by these journals. I further found that there are no differences between the citation distributions of the top 4 journals European Heart Journal , Circulation , Journal of the American College of Cardiology , and Circulation Research . Despite the fact that the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation of their articles plus reviews (IF Median) was 10 for all 4 journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may indicate a nonexistent difference. It is underscored that the IF Median is substantially lower than the journal impact factor for all 30 journals under consideration in this article. Finally, the IF Median has the additional advantage that there is no artificial ranking of 128 journals in the category, but rather an attribution of journals to a limited number of classes with comparable impact.
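The mean-versus-median gap in a right-skewed distribution is easy to demonstrate. The counts below are invented, not data from the cardiology journals.

```python
import statistics

# Invented right-skewed citation counts: one blockbuster paper drags the
# mean (the basis of the impact factor) far above the median (IF Median).
citations = [0, 0, 1, 1, 2, 3, 4, 6, 10, 120]

mean_if = statistics.mean(citations)      # impact-factor-style mean
if_median = statistics.median(citations)  # Opthof's IF Median analogue
```

A single heavily cited outlier pushes the mean to 14.7 while the median stays at 2.5, which is why two journals with indistinguishable citation distributions can still show very different impact factors.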

