Scholarly journals metrics: Measuring importance

2012 ◽  
Vol 34 (1) ◽  
pp. 38-41
Author(s):  
Caroline Black

Bibliometrics is the term used to describe various approaches to analysing measures of the use of academic literature, in particular articles in peer-reviewed journals. More broadly, the topic addresses the validity or otherwise of these measures as indicators of the impact, influence or value of the research being reported. These measures, and in particular the journal Impact Factor, are used as evidence for the quality of research, to make decisions about appointments, to judge a journal editor's success, and (it is assumed) to make funding decisions. Until recently, bibliometrics was mainly about citations, but now it is increasingly common to measure online usage, and even tweets, blogging and user star-ratings when assessing the contribution of a published research article.

2020 ◽  
Author(s):  
Mir Ibrahim Sajid ◽  
Hafsa Khan Tareen ◽  
Samira Shabbir Balouch ◽  
Syed Muhammad Awais

The Journal Impact Factor is a metric developed for the Science Citation Index that counts the citations a journal's articles receive over a period of two years, and it serves as a surrogate marker for the quality of biomedical research. However, even though the calculation is a straightforward mathematical equation, multiple confounders artificially affect the score, such as citing behaviour, the region and language in which the journal is published, and the ‘tip of the iceberg’ phenomenon. Despite the growth of alternative metrics for gauging the prestige of research and researchers, such as the Eigenfactor Score, the Article Influence Score and Google PageRank, the impact factor remains an essential instrument in dictating a scientist's future in terms of job security, tenure extension, grant approval, and the acquisition of bonuses, both hierarchical and monetary.
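The two-year calculation described in this abstract can be sketched as follows. The function and the journal figures are hypothetical, invented purely for illustration, not real Clarivate data:

```python
# Minimal sketch of the two-year Journal Impact Factor calculation,
# under the standard definition: citations received in year Y to items
# published in years Y-1 and Y-2, divided by the citable items from
# those two years. Numbers below are made up for illustration.

def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """JIF for year Y = citations in Y to items from Y-1 and Y-2,
    divided by the count of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 450 citations in 2024 to its 2022-2023 papers,
# of which 150 were citable items.
jif = journal_impact_factor(450, 150)
print(jif)  # → 3.0
```

The simplicity of the division is exactly the abstract's point: the arithmetic is trivial, while the confounders live in what gets counted in the numerator and denominator.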


2017 ◽  
Author(s):  
Frieder Michel Paulus ◽  
Nicole Cruz ◽  
Sören Krach

Abstract The use of the journal impact factor (JIF) as a measure of the quality of individual manuscripts and the merits of scientists has faced significant criticism in recent years. We add to this criticism by arguing that such an application of the JIF in policy and decision making in academia is based on false beliefs and unwarranted inferences. To approach the problem, we use principles of deductive and inductive reasoning to illustrate the fallacies inherent in using journal-based metrics to evaluate the work of scientists. In doing so, we show that if we judge scientific quality based on the JIF or other journal-based metrics, we are either guided by invalid or weak arguments or are in fact assessing our uncertainty about the quality of the work, not the quality itself.


2012 ◽  
Vol 7 (3) ◽  
pp. 90
Author(s):  
Jason Martin

Objective – To determine which characteristics of a journal’s published articles can be used to predict the journal impact factor (JIF). Design – A retrospective cohort study. Setting – McMaster University, Hamilton, Ontario, Canada. Subjects – The sample consisted of 1,267 clinical research articles from 103 evidence-based and clinical journals published in 2005 and indexed in the McMaster University Premium LiteratUre Service (PLUS) database, together with those journals’ JIFs from 2007. Method – The articles were divided 60:40 into a derivation set (760 articles and 99 journals) and a validation set (507 articles and 88 journals). Ten variables that could influence the JIF were developed, and a multiple linear regression was run on the derivation set and then applied to the validation set. Main Results – The four variables found to be significant were the number of databases indexing the journal, the number of authors, the quality of the research, and the “newsworthiness” of the journal’s published articles. Conclusion – The quality of research and the newsworthiness of a journal’s articles at the time of publication can predict the journal impact factor with 60% accuracy.


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, unlike other areas. Papers published in Q1 journals receive more citations on average and have lower uncitedness rates than papers in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper because of the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased markedly from 0.908 to 1.638, which is explained by the field's growth but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016. It focused on the uses and misuses of the Journal Impact Factor (JIF), with particular attention to research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified by its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use should be improved.
OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 124 (12) ◽  
pp. 1718-1724 ◽  
Author(s):  
Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the Cardiac & Cardiovascular Systems category of the Web of Science is extremely skewed to the right: there is a long tail of papers that are cited much more frequently than the other papers in the same journal. The consequence is a large difference between the mean and the median citation counts of the papers a journal publishes. I further found no differences between the citation distributions of the top four journals: European Heart Journal, Circulation, Journal of the American College of Cardiology, and Circulation Research. Although the journal impact factor (IF) varied from 23.425 for Eur Heart J to 15.211 for Circ Res, with the other two journals in between, the median citation count of their articles plus reviews (IF Median) was 10 for all four journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may suggest a nonexistent difference. The IF Median is substantially lower than the journal impact factor for all 30 journals considered in this article. Finally, the IF Median has the additional advantage that it produces not an artificial ranking of the 128 journals in the category but an attribution of journals to a limited number of classes of comparable impact.
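The mean-versus-median gap the author describes for right-skewed citation distributions is easy to demonstrate with hypothetical citation counts (the numbers below are invented for illustration, not taken from the journals studied):

```python
from statistics import mean, median

# Hypothetical citation counts for one journal's papers: most papers
# cluster around 10 citations, while a few highly cited papers form a
# long right tail.
citations = [2, 4, 6, 8, 10, 10, 12, 14, 18, 25, 40, 90, 250]

print(mean(citations))    # the mean is pulled up by the long right tail
print(median(citations))  # the median stays near the typical paper
```

Because the impact factor is a mean, the handful of tail papers inflates it well above what a typical paper in the journal receives, which is why a median-based indicator such as IF Median comes out substantially lower.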


2020 ◽  
Vol 13 (5) ◽  
pp. 723-727
Author(s):  
Alberto Ortiz

Abstract The Clinical Kidney Journal (ckj) impact factor from Clarivate’s Web of Science for 2019 was 3.388. This consolidates ckj among journals in the top 25% (first quartile, Q1) in the Urology and Nephrology field according to the journal impact factor. The manuscripts contributing the most to the impact factor focused on chronic kidney disease (CKD) epidemiology and evaluation, CKD complications and their management, cost-efficiency of renal replacement therapy, pathogenesis of CKD, familial kidney disease and the environment–genetics interface, onconephrology, technology, SGLT2 inhibitors and outcome prediction. We provide here an overview of the hottest and most impactful topics for 2017–19.


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary discusses these questions and their answers thoroughly.


2017 ◽  
Vol 28 (22) ◽  
pp. 2941-2944 ◽  
Author(s):  
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that the Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or for hiring, promotion, or funding decisions. Since then, a heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers, and of assessing an individual's or institution's accomplishments by them, has led to changes in policy and to the design and application of best practices to more accurately assess the quality and impact of our research. Herein I summarize the considerable progress made and the remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.


2017 ◽  
Vol 402 (7) ◽  
pp. 1015-1022 ◽  
Author(s):  
Usama Ahmed Ali ◽  
Beata M. M. Reiber ◽  
Joren R. ten Hove ◽  
Pieter C. van der Sluis ◽  
Hein G. Gooszen ◽  
...  
