Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Erin C McKiernan ◽  
Lesley A Schimanski ◽  
Carol Muñoz Nieves ◽  
Lisa Matthias ◽  
Meredith T Niles ◽  
...  

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF, or closely related terms. Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.

2019 ◽  
Author(s):  
Erin C. McKiernan ◽  
Lesley A. Schimanski ◽  
Carol Muñoz Nieves ◽  
Lisa Matthias ◽  
Meredith T. Niles ◽  
...  

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicates there is work to be done to improve evaluation processes to avoid the potential misuse of metrics like the JIF.
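For reference, the two-year JIF at the center of these findings is defined as the citations received in a given year to a journal's items from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch of that calculation, using hypothetical counts (not data from any journal discussed here):

```python
def two_year_jif(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year Journal Impact Factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    if citable_items_prior_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 460 citations in 2019 to the 200 citable items
# it published in 2017-2018.
print(two_year_jif(460, 200))  # 2.3
```

Note that this is a journal-level average: as the abstracts here stress, it says nothing about the citations any individual article in that journal will receive.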





The Forum ◽  
2019 ◽  
Vol 17 (2) ◽  
pp. 257-269
Author(s):  
Elizabeth A. Oldmixon ◽  
J. Tobin Grant

Promotion and tenure decisions frequently require an assessment of the quality of a candidate’s research record. Without carefully specifying what constitutes a tenurable and promotable record, departments frequently adopt the Potter Stewart approach: they know it when they see it. The benefit of such a system is that it allows for multiple paths to tenure and promotion and encourages holistic review, but the drawback is that it allows the promotion and tenure process to be more easily manipulated by favoritism and bias. Incorporating transparent metrics such as the journal impact factor (JIF) would seem like a good way to standardize the process. We argue, however, that when the JIF becomes determinative, conceptual disadvantages and systematic biases are introduced into the process. The JIF indicates the visibility or utility of a journal; it does not and cannot tell us about individual articles published in that journal. Moreover, it creates inequitable paths to tenure on the basis of gender and subfield, given gendered patterns of publication and the variation in journal economies by subfield.


2015 ◽  
Author(s):  
Howard I. Browman

Quantifying the relative performance of individual scholars, groups of scholars, departments, institutions, provinces/states/regions, and countries has become an integral part of decision-making over research policy, funding allocations, awarding of grants, faculty hiring, and claims for promotion and tenure. Bibliometric indices (based mainly upon citation counts), such as the h-index and the journal impact factor (JIF), are heavily relied upon in such assessments. There is a growing consensus, and a deep concern, that these indices, more and more often used as a replacement for the informed judgment of peers, are misunderstood and are therefore often misinterpreted and misused. Although much has been written about the JIF, some combination of its biases and limitations will be true of any citation-based metric. While it is not my contention that bibliometric indices have no value, they should not be applied as performance metrics without a thorough and insightful understanding of their (few?) strengths and (many?) weaknesses. I will present a range of analyses in support of this conclusion. Alternative approaches, tools, and metrics that will hopefully lead to a more balanced role for these instruments will also be presented.


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact (Q1) journals publish only slightly more papers than expected, which differs from other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates than those in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper, due to the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased substantially, from 0.908 to 1.638, which is explained by the field's growth but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, the use of impact factors and related indicators is a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.
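The correlations reported above are Pearson coefficients. A minimal sketch of how such a coefficient is computed, using hypothetical journal-level values rather than the study's actual data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance of the samples divided by the product of their standard
    deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: five journals' impact factors vs. the citedness
# of each journal's median paper.
jif = [0.5, 1.0, 1.6, 2.2, 3.1]
median_cites = [0.4, 0.9, 1.5, 2.0, 3.0]
print(pearson_r(jif, median_cites))
```

A value near 1 indicates the strong linear association the study reports between the impact factor and the median paper's citedness; a value near 0, like the study's self-citation result, indicates little linear association.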

