Scholarly Journals Impact Factors Workgroup Report

Author(s):  
Susie Allard ◽  
Ali Andalibi ◽  
Patty Baskin ◽  
Marilyn Billings ◽  
Eric Brown ◽  
...  

Following up on recommendations from OSI 2016, this team will dig deeper into the question of developing and recommending new tools to repair or replace the journal impact factor (and/or how it is used), and propose actions the OSI community can take between now and the next meeting. What’s needed? What change is realistic and how will we get there from here?

2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, unlike in other areas. Papers published in Q1 journals receive more citations on average and have lower uncitedness rates than papers in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor remains far from a perfect indicator of a paper's expected citations because of the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the journal's most cited paper (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no notable differences among journal quartiles were observed. In the period 2013–2018, the average journal impact factor in E&ER increased considerably, from 0.908 to 1.638, which is explained by the field's growth but also by increases in international collaboration and in the share of papers published in open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.
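The quantities this abstract reports, Pearson correlations between journal-level indicators, are straightforward to compute. A minimal sketch, using invented per-journal figures (not the study's data) to show how an impact factor series and a median-citedness series would be correlated:

```python
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative, invented values for six hypothetical journals:
# impact factor and citedness of the median paper tend to move together.
impact_factor = [0.5, 0.9, 1.2, 1.8, 2.5, 3.1]
median_citedness = [1, 2, 2, 4, 6, 7]

print(round(pearson_r(impact_factor, median_citedness), 3))  # strong positive r
```

With real journal-level data in place of the toy lists, the same function reproduces the kind of r values the study reports (e.g. r = 0.864 for median citedness).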


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016. It focused on the uses and misuses of the Journal Impact Factor (JIF), particularly in research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified by its validity for those purposes, and that this retards moves toward open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers, and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use should be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 124 (12) ◽  
pp. 1718-1724 ◽  
Author(s):  
Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the Web of Science category Cardiac & Cardiovascular Systems is extremely skewed. The skew is to the right: a long tail of papers is cited much more frequently than the other papers in the same journal. The consequence is a large difference between the mean and the median citation counts of the papers a journal publishes. I further found no differences between the citation distributions of the top 4 journals: European Heart Journal, Circulation, Journal of the American College of Cardiology, and Circulation Research. Although the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation count of their articles plus reviews (IF Median) was 10 for all 4 journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may suggest a difference that does not exist. Notably, the IF Median is substantially lower than the journal impact factor for all 30 journals considered in this article. Finally, the IF Median has the additional advantage that it does not produce an artificial ranking of the 128 journals in the category, but rather attributes journals to a limited number of classes with comparable impact.
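The mean-versus-median gap the article describes is easy to see numerically. A minimal sketch with hypothetical citation counts (not the article's data) for one journal's papers in the JIF window:

```python
import statistics

# Hypothetical, right-skewed citation counts: a few highly cited papers
# drag the mean (analogous to the impact factor) far above the median
# (analogous to the proposed IF Median).
citations = [0, 0, 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 40, 90, 250]

jif_like = statistics.mean(citations)     # mean-based, like the classical JIF
if_median = statistics.median(citations)  # median-based indicator

print(f"mean   = {jif_like:.1f}")  # → 29.7, inflated by the long right tail
print(f"median = {if_median}")     # → 6, the typical paper
```

The mean here is roughly five times the median, mirroring the article's finding that the IF Median is substantially lower than the journal impact factor for every journal considered.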


2019 ◽  
Vol 58 (2) ◽  
pp. 282-300
Author(s):  
Felicitas Hesselmann ◽  
Cornelia Schendzielorz

This contribution seeks to provide a more detailed insight into the entanglement of value and measurement. Drawing on insights from semiotics and a Bourdieusian perspective on language as an economy of linguistic exchange, we develop the theoretical concept of value-measurement links and distinguish three processes – operationalisation, nomination, and indetermination – as forms in which these links can be constructed. We illustrate these three processes using (e)valuation practices in science, particularly the journal impact factor, as an empirical object of investigation. As this example illustrates, measured values can function as building blocks for further measurements and thus establish chains of evaluations, in which it becomes ever more obscure which values the measurements actually express. We conclude that in the case of measured values such as impact factors, these chains are driven by the interplay between the interpretative openness of language and the apparent tendency of numbers to fix meaning, thus continually re-creating, transforming, and modifying values.


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal with a lower impact factor is not of good quality? Do journal impact factors truly symbolize the quality of a journal? What must be noted when interpreting journal impact factors? This commentary article discusses these questions thoroughly.


2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

‘If there is one thing every bibliometrician agrees on, it is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin.’ Few sentences could define the uses and misuses of the Journal Impact Factor more precisely than this one of Anthony van Raan’s. This manuscript presents a critical overview of how governments and institutions internationally use the JIF and/or journal indexing information to assess the quality of individual research. Excerpts from interviews in which Nobel Laureates speak on this matter are included. Furthermore, the authors propose complementary and alternative versions of the journal impact factor, named the Complementary (CIF) and Timeless (TIF) Impact Factors, which aim to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has just been misused.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Jian Zhou ◽  
Lin Feng ◽  
Ning Cai ◽  
Jie Yang

The variation of the journal impact factor is affected by many statistical and sociological factors, such as the size of the citation window and differences among subjects. In this work, we develop an impact factor dynamics model based on a parallel system, which can be used to analyze the correlation between the impact factor and certain elements. The parallel model simulates, in a distributed manner, the submission and citation behaviors of papers in journals belonging to a similar subject. We perform Monte Carlo simulations to show how the model parameters influence impact factor dynamics. Through extensive simulations, we reveal the important role that certain statistical elements and behaviors play in shaping impact factors. The experimental results and analysis of actual data demonstrate that the value of the JIF is comprehensively influenced by the average review time, the average number of references, and the aging distribution of citations.
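The general mechanism, Monte Carlo simulation of citation behavior under varying parameters, can be illustrated with a much-simplified sketch. This is not the authors' parallel-system model; all parameter names and the window heuristic are assumptions for illustration. It shows one of the reported effects: a longer average review time leaves less of the two-year JIF window available, depressing the simulated impact factor.

```python
import random

def simulate_jif(n_papers=300, mean_refs=30, p_window=0.2,
                 review_delay_months=6, n_runs=20, seed=7):
    """Toy Monte Carlo sketch (not the paper's model): each paper makes
    `mean_refs` citation attempts; an attempt lands in a journal's
    two-year JIF window with a probability shrunk by review delay."""
    random.seed(seed)
    # Fraction of the 24-month window left after the review delay.
    window = max(0.0, 24 - review_delay_months) / 24
    jifs = []
    for _ in range(n_runs):
        cites = 0
        for _ in range(n_papers):
            for _ in range(mean_refs):
                if random.random() < p_window * window:
                    cites += 1
        jifs.append(cites / n_papers)  # window citations per paper
    return sum(jifs) / n_runs

fast = simulate_jif(review_delay_months=3)
slow = simulate_jif(review_delay_months=18)
print(fast > slow)  # longer review time depresses the simulated JIF
```

Varying `mean_refs` or replacing the flat window fraction with an aging distribution would, in the same spirit, reproduce the other dependencies the abstract lists.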


2010 ◽  
Vol 90 (11) ◽  
pp. 1631-1640 ◽  
Author(s):  
Leonardo Oliveira Pena Costa ◽  
Anne M. Moseley ◽  
Catherine Sherrington ◽  
Christopher G. Maher ◽  
Robert D. Herbert ◽  
...  

Objective The objective of this study was to identify core journals in physical therapy by identifying those that publish the most randomized controlled trials of physical therapy interventions, provide the highest-quality reports of randomized controlled trials, and have the highest journal impact factors. Design This study was an audit of a bibliographic database. Methods All trials indexed in the Physiotherapy Evidence Database (PEDro) were analyzed. Journals that had published at least 80 trials were selected. The journals were ranked in 4 ways: number of trials published; mean total PEDro score of the trials published in the journal, regardless of publication year; mean total PEDro score of the trials published in the journal from 2000 to 2009; and 2008 journal impact factor. Results The top 5 core journals in physical therapy, ranked by the total number of trials published, were Archives of Physical Medicine and Rehabilitation, Clinical Rehabilitation, Spine, British Medical Journal (BMJ), and Chest. When the mean total PEDro score was used as the ranking criterion, the top 5 journals were Journal of Physiotherapy, Journal of the American Medical Association (JAMA), Stroke, Spine, and Clinical Rehabilitation. When the mean total PEDro score of the trials published from 2000 to 2009 was used as the ranking criterion, the top 5 journals were Journal of Physiotherapy, JAMA, Lancet, BMJ, and Pain. The most highly ranked physical therapy–specific journals were Physical Therapy (ranked eighth on the basis of the number of trials published) and Journal of Physiotherapy (ranked first on the basis of the quality of trials). Finally, when the 2008 impact factor was used for ranking, the top 5 journals were JAMA, Lancet, BMJ, American Journal of Respiratory and Critical Care Medicine, and Thorax. There were no significant relationships among the rankings on the basis of trial quality, number of trials, or journal impact factor. 
Conclusions Physical therapists who are trying to keep up-to-date by reading the best available evidence on the effects of physical therapy interventions have to read more broadly than just physical therapy–specific journals. Readers of articles on physical therapy trials should be aware that high-quality trials are not necessarily published in journals with high impact factors.


API Magazin ◽  
2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Hannah Hirschberg

Although the Journal Impact Factor has come under increasing criticism since its development and establishment in the 1960s and 1970s, it remains to this day, almost 60 years later, one of the most important indicators for measuring the influence of scholarly journals. In addition to its origins, this term paper explains the significance of the Journal Impact Factor (JIF) for scientific competition in general. It also addresses weaknesses and problems that make the JIF open to manipulation. Finally, various alternative indicators are presented, followed by a conclusion summarizing the influence and future of the Journal Impact Factor.


2021 ◽  
Author(s):  
Brian D. Cameron

Librarians rely on the Institute for Scientific Information’s journal impact factor as a tool for selecting periodicals, primarily in scientific disciplines. A current trend is to use this data as a means for evaluating the performance of departments, institutions, and even researchers in academic institutions—a process that is now being tied to tenure and promotion—despite the fact that such usage can be misleading and prejudicial. This paper will highlight the history of the development of impact factors, describe the limitations in their use, and provide a critique of the usage of impact factors in academic settings.

