Using Journal Impact Factor to Assess Scholarly Records: Overcorrecting for the Potter Stewart Approach to Promotion and Tenure

The Forum ◽  
2019 ◽  
Vol 17 (2) ◽  
pp. 257-269
Author(s):  
Elizabeth A. Oldmixon ◽  
J. Tobin Grant

Abstract Promotion and tenure decisions frequently require an assessment of the quality of a candidate’s research record. Without carefully specifying what constitutes a tenurable and promotable record, departments frequently adopt the Potter Stewart approach – they know it when they see it. The benefit of such a system is that it allows for multiple paths to tenure and promotion and encourages holistic review, but the drawback is that it allows for the promotion and tenure process to be more easily manipulated by favoritism and bias. Incorporating transparent metrics such as journal impact factor (JIF) would seem like a good way to standardize the process. We argue, however, that when JIF becomes determinative, conceptual disadvantages and systematic biases are introduced into the process. JIF indicates the visibility or utility of a journal; it does not and cannot tell us about individual articles published in that journal. Moreover, it creates inequitable paths to tenure on the basis of gender and subfield, given gendered patterns of publications and the variation in journal economies by subfield.
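For reference, the standard two-year impact factor is a journal-level ratio, which is why it cannot speak to any single article. Using notation introduced here purely for illustration (the symbols C and N are not taken from the abstract above), the JIF of journal J in year Y can be written in LaTeX as:

\mathrm{JIF}_{Y}(J) = \frac{C_{Y}(J;\, Y-1,\, Y-2)}{N_{Y-1}(J) + N_{Y-2}(J)}

Here C_Y(J; Y-1, Y-2) is the number of citations received in year Y by items the journal published in years Y-1 and Y-2, and N_{Y-1}(J) and N_{Y-2}(J) are the counts of citable items published in those two years. Because numerator and denominator are both journal-level aggregates, the ratio says nothing about where any individual article falls within the journal's typically skewed citation distribution.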

2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use should be improved.

OSI2016 Workshop Question: Impact Factors

Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged the work it publishes, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because it has a lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary article discusses these questions and their answers in depth.


2017 ◽  
Vol 28 (22) ◽  
pp. 2941-2944 ◽  
Author(s):  
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that the Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or for hiring, promotion, or funding decisions. Since then, heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers, and as a means of assessing an individual's or institution's accomplishments, has led to changes in policy and to the design and application of best practices that more accurately assess the quality and impact of our research. Herein I summarize the considerable progress made and the remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.


2017 ◽  
Vol 402 (7) ◽  
pp. 1015-1022 ◽  
Author(s):  
Usama Ahmed Ali ◽  
Beata M. M. Reiber ◽  
Joren R. ten Hove ◽  
Pieter C. van der Sluis ◽  
Hein G. Gooszen ◽  
...  

2019 ◽  
Author(s):  
Erin C. McKiernan ◽  
Lesley A. Schimanski ◽  
Carol Muñoz Nieves ◽  
Lisa Matthias ◽  
Meredith T. Niles ◽  
...  

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master's, or comprehensive (M-type), institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type), institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric's use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF's use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes to avoid the potential misuse of metrics like the JIF.


NEMESIS ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. 18
Author(s):  
Aleksandra Hebda ◽  
Guillaume A Odri ◽  
Raphael Olszewski

Objective: to develop and test the inter-observer reproducibility of an instructions-for-authors quality rating (IAQR) tool that measures the quality of instructions for authors at the journal level, with a view to possible improvement of editorial guidelines. Material and methods: the instructions for authors of 75 dental and maxillofacial surgery journals were assessed by two independent observers using an assessment tool inspired by AGREE, with 16 questions and a 1-to-4-point scale per answer. The two observers evaluated the instructions for authors independently and blinded to the impact factor of a given journal. Scores obtained from our tool were compared with the 2013 journal impact factor. Results: the IAQR showed excellent inter-observer reproducibility (κ = 0.81) despite a difference in data distribution between observers. There was a weak positive correlation between the IAQR and the 2013 journal impact factor. Conclusions: the IAQR is a reproducible quality assessment tool at the journal level. The IAQR assesses the quality of instructions for authors and is a good starting point for possible improvements of the instructions for authors, especially with regard to their completeness. Nemesis relevance: 28% of dental and maxillofacial journals might revise their instructions for authors to provide a more up-to-date version.
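The abstract does not say how the agreement and correlation statistics were computed, so the short Python sketch below shows one plausible way to run that kind of analysis, assuming per-question ratings on the 1-4 scale from two observers and a list of 2013 impact factors. All variable names and example data are hypothetical, and the choice of a quadratically weighted Cohen's kappa is an assumption rather than a detail taken from the paper.

# Hypothetical sketch: inter-observer agreement and correlation with JIF.
# Example data is illustrative; ratings are ordinal values on a 1-4 scale.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
observer_a = rng.integers(1, 5, size=(10, 16))                      # 10 journals x 16 questions
observer_b = np.clip(observer_a + rng.integers(-1, 2, size=(10, 16)), 1, 4)

# Agreement on the pooled item-level ratings, weighted for the ordinal scale
# (the kappa variant used in the original study is not stated; this is assumed).
kappa = cohen_kappa_score(observer_a.ravel(), observer_b.ravel(), weights="quadratic")

# Per-journal IAQR totals (16-64) versus placeholder 2013 impact factors.
iaqr_total = observer_a.sum(axis=1)
jif_2013 = rng.uniform(0.2, 4.0, size=10)
rho, p_value = spearmanr(iaqr_total, jif_2013)

print(f"weighted kappa = {kappa:.2f}, Spearman rho = {rho:.2f} (p = {p_value:.3f})")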


2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

'If there is one thing every bibliometrician agrees on, it is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin.' Few sentences define the uses and misuses of the Journal Impact Factor more precisely than Anthony van Raan's. This manuscript presents a critical overview of the international use, by governments and institutions, of the JIF and/or journal indexing information for individual research quality assessment. Interviews given by Nobel Laureates speaking on this matter are partially reproduced in this work. Furthermore, the authors propose complementary and alternative versions of the journal impact factor, respectively named the Complementary (CIF) and Timeless (TIF) Impact Factors, aiming to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has simply been misused.


Stroke ◽  
2015 ◽  
Vol 46 (suppl_1) ◽  
Author(s):  
Steven Peters ◽  
Aaron Switzer ◽  
Shivanand Patil ◽  
Cheryl R McCreary ◽  
Martin Dichgans ◽  
...  

Introduction: The quality of reporting of neuroimaging methods for studies of cerebral small vessel disease is unknown. We systematically reviewed studies of MRI white matter hyperintensities (WMH) of vascular origin to determine the frequency of reporting of key aspects of neuroimaging methods, and whether reporting varied by sample size, study design or journal impact factor. Methods: Three raters independently reviewed 100 consecutive papers reporting WMH severity, either as a primary outcome or as a covariate, to abstract 50 study characteristics based on the published STRIVE standards (Wardlaw et al Lancet Neurol 2013). Final determinations were made by consensus. An aggregate quality score (range 0-11) was created by adding one point for the reporting of each of 11 key characteristics (Table). The Spearman correlation or chi-square test, as appropriate, was used to test associations with the quality score. Results: Papers were published between 2009 and 2013, with journal impact factors ranging from 0.56 to 15.3, and with cohort (79%) and case-control (21%) studies represented. Quantitative computational methods were used in 28 studies. MR field strength, MRI sequence types, type of WMH measurement method, blinding and number of raters were reported frequently, but reporting of other characteristics was inconsistent (Table). Study quality score was not correlated with journal impact factor, sample size or cohort study design. Conclusions: There is inconsistent reporting of neuroimaging methods in the small vessel disease imaging literature. Increased adherence to published reporting standards, such as the STRIVE criteria, may facilitate more objective peer review of submitted manuscripts and increase the reproducibility of published results. More work is needed to facilitate adoption of standards and checklists by authors, reviewers and editors.
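As a rough illustration of the scoring approach described above, the sketch below builds a 0-11 aggregate quality score by awarding one point per reported characteristic and tests its association with journal impact factor via a Spearman correlation, and with study design via a chi-square test on a dichotomized score. The example records, the cut-off of 8 points, and all names are hypothetical; the abstract does not describe the authors' actual implementation.

# Hypothetical sketch of the aggregate quality score and association tests.
from scipy.stats import spearmanr, chi2_contingency

# Each record: which of the 11 key characteristics were reported (True/False),
# plus journal impact factor and study design. Values are illustrative only.
studies = [
    {"reported": [True] * 9 + [False] * 2, "jif": 3.1, "design": "cohort"},
    {"reported": [True] * 6 + [False] * 5, "jif": 0.9, "design": "case-control"},
    {"reported": [True] * 11,              "jif": 7.4, "design": "cohort"},
    {"reported": [True] * 4 + [False] * 7, "jif": 1.6, "design": "cohort"},
]

scores = [sum(s["reported"]) for s in studies]        # aggregate score, 0-11
jifs = [s["jif"] for s in studies]

# Continuous covariate (impact factor): Spearman correlation with the score.
rho, p = spearmanr(scores, jifs)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Categorical covariate (study design): chi-square on a 2x2 table of design
# versus a dichotomized score (>= 8 points is an assumed cut-off).
table = [[0, 0], [0, 0]]
for s, score in zip(studies, scores):
    row = 0 if s["design"] == "cohort" else 1
    col = 0 if score >= 8 else 1
    table[row][col] += 1
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f} (p = {p_chi:.3f})")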


2018 ◽  
Vol XVI (2) ◽  
pp. 369-388 ◽  
Author(s):  
Aleksandar Racz ◽  
Suzana Marković

Technology-driven changes, with a consequent increase in the online availability and accessibility of journals and papers, are rapidly changing patterns of academic communication and publishing. The dissemination of important research findings through the academic and scientific community begins with publication in peer-reviewed journals. The aim of this article is to identify, critically evaluate and integrate the findings of relevant, high-quality individual studies addressing trends in the enhancement of the visibility and accessibility of academic publishing in the digital era. The number of citations a paper receives is often used as a measure of its impact and, by extension, of its quality. Many aberrations of citation practice have been reported in attempts to increase the impact of a paper through manipulation of self-citation, inter-citation and citation cartels. Authors' avenues for legitimately extending the visibility, awareness and accessibility of their research outputs, raising citations and amplifying measurable personal scientific impact, have been strongly enhanced by online communication tools such as networking (LinkedIn, ResearchGate, Academia.edu, Google Scholar), sharing (Facebook, blogs, Twitter, Google Plus), media sharing (SlideShare), data sharing (Dryad Digital Repository, Mendeley, PubMed, PubChem), code sharing, impact tracking, and publishing in Open Access journals. Many studies and review articles in the last decade have examined whether open access articles receive more citations than equivalent subscription (toll-access) articles, and most conclude that open access articles are highly likely to have a citation advantage over generally equivalent pay-for-access articles in many, if not most, disciplines. But it is still questionable whether never-cited papers are indeed "worth(less)" papers, and whether the journal impact factor and the number of citations should be considered the only suitable indicators for evaluating the quality of scientists. The phrase "publish or perish", usually used to describe the pressure in academia to publish academic work rapidly and continually in order to sustain or further one's career, can now, in the 21st century, be reformulated as "publish, be cited, and maybe you will not perish".


2012 ◽  
Vol 34 (1) ◽  
pp. 38-41
Author(s):  
Caroline Black

Bibliometrics is the term used to describe various approaches to analysing measures of the use of academic literature, in particular articles in peer-reviewed journals. More broadly, the topic addresses the validity or otherwise of these measures as indicators of the impact, influence or value of the research being reported. These measures, and in particular the journal Impact Factor, are used as evidence for the quality of research, to make decisions about appointments, to judge a journal editor's success, and (it is assumed) to make funding decisions. Until recently, bibliometrics was mainly about citations, but now it is increasingly common to measure online usage, and even tweets, blogging and user star-ratings when assessing the contribution of a published research article.

