Modeling and Simulation Analysis of Journal Impact Factor Dynamics Based on Submission and Citation Rules

Complexity
2020
Vol 2020
pp. 1-17
Author(s): Jian Zhou, Lin Feng, Ning Cai, Jie Yang

The variation of the journal impact factor is affected by many statistical and sociological factors, such as the size of the citation window and differences between subjects. In this work, we develop an impact factor dynamics model based on a parallel system, which can be used to analyze the correlation between the impact factor and particular elements. The parallel model simulates, in a distributed manner, the submission and citation behaviors of papers in journals belonging to a similar subject. We perform Monte Carlo simulations to show how the model parameters influence impact factor dynamics. Through extensive simulations, we reveal the important role that certain statistical elements and behaviors play in shaping impact factors. The experimental results and analysis of actual data demonstrate that the value of the journal impact factor (JIF) is jointly influenced by the average review time, the average number of references, and the aging distribution of citations.
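
A minimal Monte Carlo sketch of this kind of impact factor dynamics is shown below. The parameter values, the exponential review-delay and citation-aging distributions, and the `simulate_jif` helper are illustrative assumptions, not the authors' calibrated parallel model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (placeholders, not the paper's calibrated values)
N_PAPERS_PER_YEAR = 200   # citable items a journal publishes per year
MEAN_REVIEW_TIME = 0.5    # average review delay, in years
MEAN_REFERENCES = 30      # average number of references per citing paper
AGING_SCALE = 2.0         # mean age (years) of a cited reference
YEARS = 15                # simulated horizon


def simulate_jif():
    """Toy field-level simulation: every paper cites earlier papers, and the
    two-year JIF of year y counts citations made in y to items from y-1 and y-2."""
    window_citations = np.zeros(YEARS)
    for submit_year in range(2, YEARS):
        for _ in range(N_PAPERS_PER_YEAR):
            pub_year = submit_year + rng.exponential(MEAN_REVIEW_TIME)
            if pub_year >= YEARS:
                continue  # published after the simulated horizon
            n_refs = rng.poisson(MEAN_REFERENCES)
            cited_years = np.floor(pub_year - rng.exponential(AGING_SCALE, size=n_refs))
            in_window = (cited_years == int(pub_year) - 1) | (cited_years == int(pub_year) - 2)
            window_citations[int(pub_year)] += in_window.sum()
    # denominator: citable items published in the two preceding years
    return window_citations / (2 * N_PAPERS_PER_YEAR)


print(simulate_jif()[5:])  # skip the warm-up years
```

Lengthening the review delay or the citation-aging scale moves more citations out of the two-year window, illustrating how these parameters can shift the simulated JIF.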

2021
pp. 1-22
Author(s): Metin Orbay, Orhan Karamustafaoğlu, Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, which differs from other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates than those in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper because of the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased markedly, from 0.908 to 1.638, which is explained by the growth of the field but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.
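
The sketch below shows how journal-level statistics of this kind (Pearson correlations and citation skewness) can be computed; the arrays are invented placeholders, not the E&ER data.

```python
import numpy as np
from scipy import stats

# Placeholder journal-level values (invented for illustration only)
jif = np.array([0.6, 1.1, 1.7, 2.4, 3.2])              # journal impact factors
median_paper_cites = np.array([0, 1, 2, 3, 5])         # citedness of the median paper
self_citation_rate = np.array([0.12, 0.10, 0.08, 0.05, 0.04])

r_median, _ = stats.pearsonr(jif, median_paper_cites)
r_self, _ = stats.pearsonr(jif, self_citation_rate)
print(f"JIF vs median-paper citedness: r = {r_median:.3f}")
print(f"JIF vs self-citation rate:     r = {r_self:.3f}")

# Skewness of one journal's citation distribution (a single outlier dominates)
paper_citations = np.array([0, 0, 1, 1, 2, 2, 3, 5, 8, 40])
print(f"citation skewness: {stats.skew(paper_citations):.2f}")
```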


2019
Vol 124 (12)
pp. 1718-1724
Author(s): Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the category Cardiac & Cardiovascular Systems of the Web of Science is extremely skewed. The skewness is to the right, meaning there is a long tail of papers that are cited much more frequently than the other papers in the same journal. The consequence is a large difference between the mean and the median citation of the papers published by these journals. I further found that there are no differences between the citation distributions of the top 4 journals: European Heart Journal, Circulation, Journal of the American College of Cardiology, and Circulation Research. Despite the fact that the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation of their articles plus reviews (IF Median) was 10 for all 4 journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may suggest a nonexistent difference. It is underscored that the IF Median is substantially lower than the journal impact factor for all 30 journals under consideration in this article. Finally, the IF Median has the additional advantage that there is no artificial ranking of 128 journals in the category but rather an attribution of journals to a limited number of classes with comparable impact.
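
The mean-versus-median gap described above is easy to reproduce; the sketch below uses a lognormal draw as a stand-in for a right-skewed citation distribution (the distribution and its parameters are assumptions, not the cardiovascular journal data).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical right-skewed citation counts for one journal's articles and reviews
citations = rng.lognormal(mean=2.3, sigma=1.0, size=500).astype(int)

classical_if = citations.mean()      # mean-based, like the journal impact factor
if_median = np.median(citations)     # the IF Median discussed in the article

print(f"mean citations (IF-like):     {classical_if:.2f}")
print(f"median citations (IF Median): {if_median:.1f}")
# The long right tail pulls the mean well above the median, so two journals with
# identical medians can still report noticeably different impact factors.
```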


2021
pp. 082957352110455
Author(s): Randy G. Floyd, Emily K. Lewis, Kelsey A. Walker, Patrick J. McNicholas, Kerry L. Jones

School psychology journals yield hundreds of articles each year. As these journals are often evaluated on the basis of the impact factors they receive, the aim of this study was to provide a historically complete record of five impact factor metrics for the generalist school psychology journals that receive them. This study identified impact factors beginning in 1977, 20 years earlier than previously reported, and ending in 2019. Across all years and journals, the average Journal Impact Factor (JIF) was about 1.0, the average Immediacy Index was less than 0.4, the average 5-year Impact Factor was about 2.3, the average original CiteScore was 1.8, and the average new CiteScore was about 3.0. Increases in values were evident across time, and the highest recorded values across journals are held by the Journal of School Psychology (for the JIF, 5-year Impact Factor, and both CiteScore metrics) and School Psychology Review (for the Immediacy Index). Most impact factors, with the exception of the Immediacy Index, were moderately to highly correlated. The new CiteScore values were always the highest, and Immediacy Index values were always the lowest. School psychology has added journals to the list of those indexed by major databases, and these journals have increased their impact over time.
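
For reference, the sketch below spells out the standard definitions behind the five metrics compared in this study (two-year JIF, 5-year Impact Factor, Immediacy Index, and the original and new CiteScore); the toy citation counts are invented, and only the formulas follow the published definitions.

```python
# items[p]    = citable items published in year p
# cites[p][q] = citations received in year q by the items published in year p
# Toy numbers below are invented; real values come from Web of Science / Scopus.
CURVE = {0: 1, 1: 6, 2: 10, 3: 12, 4: 11, 5: 9, 6: 7}   # citations by item age
items = {p: 40 for p in range(2013, 2020)}
cites = {p: {q: CURVE.get(q - p, 0) for q in range(p, 2020)} for p in range(2013, 2020)}

def jif(y):
    """Two-year Journal Impact Factor for year y."""
    return (cites[y - 1][y] + cites[y - 2][y]) / (items[y - 1] + items[y - 2])

def five_year_if(y):
    """Five-year Impact Factor for year y."""
    return sum(cites[y - k][y] for k in range(1, 6)) / sum(items[y - k] for k in range(1, 6))

def immediacy_index(y):
    """Citations in year y to items published in year y, per item."""
    return cites[y][y] / items[y]

def citescore_original(y):
    """Original CiteScore: citations in year y to items from the three preceding years."""
    return sum(cites[y - k][y] for k in range(1, 4)) / sum(items[y - k] for k in range(1, 4))

def citescore_2020(y):
    """New CiteScore: a four-year citation window over the same four publication years."""
    years = range(y - 3, y + 1)
    c = sum(cites[p][q] for p in years for q in years if q >= p)
    return c / sum(items[p] for p in years)

for name, fn in [("JIF", jif), ("5-year IF", five_year_if), ("Immediacy", immediacy_index),
                 ("CiteScore (original)", citescore_original), ("CiteScore (new)", citescore_2020)]:
    print(f"{name:>20}: {fn(2019):.2f}")
```

Because the new CiteScore counts citations over a four-year window while the Immediacy Index counts only same-year citations, the ordering reported above (new CiteScore highest, Immediacy Index lowest) is what these window lengths would lead one to expect.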


Author(s): Susie Allard, Ali Andalibi, Patty Baskin, Marilyn Billings, Eric Brown, ...

Following up on recommendations from OSI 2016, this team will dig deeper into the question of developing and recommending new tools to repair or replace the journal impact factor (and/or how it is used), and propose actions the OSI community can take between now and the next meeting. What’s needed? What change is realistic and how will we get there from here?


2016
Vol 1
Author(s): J. Roberto F. Arruda, Robin Champieux, Colleen Cook, Mary Ellen K. Davis, Richard Gedye, ...

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016. It concentrated on the uses and misuses of the Journal Impact Factor (JIF), with particular attention to research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and that this retards moves toward open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers, and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals and makes recommendations on how this use should be improved.

OSI2016 Workshop Question: Impact Factors. Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2013
Vol 51 (1)
pp. 173-189
Author(s): David I Stern

Academic economists appear to be intensely interested in rankings of journals, institutions, and individuals. Yet there is little discussion of the uncertainty associated with these rankings. To illustrate the uncertainty associated with citation-based rankings, I compute the standard error of the impact factor for all economics journals with a five-year impact factor in the 2011 Journal Citation Reports. I use these to derive confidence intervals for the impact factors as well as ranges of possible rank for a subset of thirty journals. I find that the impact factors of the top two journals are well defined and set these journals apart in a clearly defined group. An elite group of 9–11 mainstream journals can also be fairly reliably distinguished. The four bottom-ranked journals are also fairly clearly set apart. For the remainder of the distribution, confidence intervals overlap and rankings are quite uncertain. (JEL A14)
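
A sketch of this kind of calculation follows: treating the impact factor as a sample mean of per-article citation counts, its standard error is the sample standard deviation divided by the square root of the number of citable items, and a normal-approximation 95% interval follows. The citation counts below are simulated placeholders, not journal data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-article citation counts for one journal (placeholder data)
article_citations = rng.negative_binomial(n=2, p=0.15, size=300)

impact_factor = article_citations.mean()
std_error = article_citations.std(ddof=1) / np.sqrt(article_citations.size)
ci_low, ci_high = impact_factor - 1.96 * std_error, impact_factor + 1.96 * std_error

print(f"IF = {impact_factor:.3f}, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
# Ranking uncertainty follows directly: journals whose intervals overlap
# cannot be reliably ordered by their impact factors.
```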


2019
Vol 58 (2)
pp. 282-300
Author(s): Felicitas Hesselmann, Cornelia Schendzielorz

This contribution seeks to provide a more detailed insight into the entanglement of value and measurement. Drawing on insights from semiotics and a Bourdieusian perspective on language as an economy of linguistic exchange, we develop the theoretical concept of value-measurement links and distinguish three processes – operationalisation, nomination, and indetermination – as forms in which these links can be constructed. We illustrate these three processes using (e)valuation practices in science, particularly the journal impact factor, as an empirical object of investigation. As this example illustrates, measured values can function as building blocks for further measurements and thus establish chains of evaluations, in which it becomes more and more obscure which values the measurements actually express. We conclude that in the case of measured values such as impact factors, these chains are driven by the interplay between the interpretative openness of language and the seeming tendency of numbers to fix meaning, thus continually re-creating, transforming, and modifying values.


2020
Vol 13 (5)
pp. 723-727
Author(s): Alberto Ortiz

The Clinical Kidney Journal (ckj) impact factor from Clarivate's Web of Science for 2019 was 3.388. This consolidates ckj among journals in the top 25% (first quartile, Q1) in the Urology and Nephrology field according to the journal impact factor. The manuscripts contributing the most to the impact factor focused on chronic kidney disease (CKD) epidemiology and evaluation, CKD complications and their management, cost-efficiency of renal replacement therapy, pathogenesis of CKD, familial kidney disease and the environment–genetics interface, onconephrology, technology, SGLT2 inhibitors and outcome prediction. We provide here an overview of the hottest and most impactful topics for 2017–19.


2019
Vol 26 (5)
pp. 734-742
Author(s): Rob Law, Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its published works, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary article discusses these questions and their answers thoroughly.


2020
Author(s): John Antonakis, Nicolas Bastardoz, Philippe Jacquart

The impact factor has been criticized on several fronts, including that the distribution of citations to journal articles is heavily skewed. We nuance these critiques and show that the number of citations an article receives is significantly predicted by the journal impact factor. Thus, the impact factor can be used as a reasonably good proxy for article quality.
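
A minimal sketch of the kind of prediction described above is given below: article citation counts are regressed on the journal impact factor. The simulated data and the strength of the association are assumptions for illustration, not the authors' sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated placeholder data: articles nested in journals with varying JIFs
jif = rng.uniform(0.5, 10.0, size=1000)           # journal impact factor per article
article_citations = rng.poisson(lam=2.0 * jif)    # citations loosely tied to the JIF

slope, intercept = np.polyfit(jif, article_citations, deg=1)
r = np.corrcoef(jif, article_citations)[0, 1]
print(f"citations ≈ {intercept:.2f} + {slope:.2f} * JIF  (r = {r:.2f})")
```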

