Peer Review Metrics and their influence on the Journal Impact

2021 ◽  
pp. 016555152110597
Author(s):  
Sumeer Gul ◽  
Aasif Ahmad Mir ◽  
Sheikh Shueb ◽  
Nahida Tun Nisa ◽  
Salma Nisar

The manuscript processing timeline, a necessary facet of the publishing process, varies from journal to journal, and its influence on journal impact needs to be studied. The current research examines the correlation between the ‘Peer Review Metrics’ (submission to first editorial decision; submission to first post-review decision; and submission to accept) and the ‘Journal Impact Data’ (2-year Impact Factor; 5-year Impact Factor; Immediacy Index; Eigenfactor Score; and Article Influence Score). Data for both sets of metrics were downloaded from the ‘Nature Research’ journal-metrics page (https://www.nature.com/nature-portfolio/about/journal-metrics), and correlations were drawn between the ‘Peer Review Metrics’ and the ‘Journal Impact Data’. As the time from ‘submission to first editorial decision’ decreases, the ‘Journal Impact Data’ increases, and vice versa; however, changes in this time do not affect the journal’s ‘Eigenfactor Score’. Changes in the time from ‘submission to first post-review decision’ do not affect any of the ‘Journal Impact Data’. As the time from ‘submission to acceptance’ increases, the ‘Journal Impact Data’ (2-year Impact Factor, 5-year Impact Factor, Immediacy Index and Article Influence Score) also increases, and as it decreases, so do these indicators; the ‘Eigenfactor Score’, again, is unaffected. The study will act as a ready-reference tool for scholars selecting the most appropriate platforms for their scholarly submissions. Furthermore, the performance and evaluative indicators responsible for a journal’s overall research performance can be understood from a micro-analytical view, helping researchers select appropriate journals for their future scholarly submissions. Lengthy publication timelines are a serious problem for researchers because they delay the credit researchers receive for their work. Since the study validates a relationship between the ‘Peer Review Metrics’ and the ‘Journal Impact Data’, the findings will be of great help in making an appropriate choice of journal. The study can also be an eye-opener for journal administrators who advocate speeding up the publication process by improving particular stages of the publication timeline. The study is the first of its kind to correlate journals’ ‘Peer Review Metrics’ with their ‘Journal Impact Data’. Its findings are limited to data retrieved from the ‘Nature Research’ journals and cannot be generalised to the full spectrum of journals; the study could be extended across other publishers to generalise the findings. The relationship between articles’ early-access availability, the ‘Peer Review Metrics’ and the ‘Journal Impact Data’ could also be studied.
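For readers who want to reproduce this kind of analysis, a minimal sketch follows. The file and column names are hypothetical stand-ins for the journal-metrics table published at the Nature URL above, and the study’s actual correlation method may differ (rank correlation is used here purely for illustration).

```python
# Minimal sketch: correlating peer-review timelines with impact metrics.
# File and column names are hypothetical; adapt them to the table at
# https://www.nature.com/nature-portfolio/about/journal-metrics
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("nature_journal_metrics.csv")  # hypothetical export

review_metrics = ["days_to_first_editorial_decision",
                  "days_to_first_post_review_decision",
                  "days_to_accept"]
impact_metrics = ["impact_factor_2y", "impact_factor_5y",
                  "immediacy_index", "eigenfactor_score",
                  "article_influence_score"]

for rm in review_metrics:
    for im in impact_metrics:
        rho, p = spearmanr(df[rm], df[im], nan_policy="omit")
        print(f"{rm} vs {im}: rho = {rho:.2f} (p = {p:.3f})")
```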

2013 ◽  
Vol 08 (01) ◽  
pp. 1350005 ◽  
Author(s):  
CHIA-LIN CHANG ◽  
MICHAEL MCALEER

Experts possess knowledge and information that are not publicly available. The paper is concerned with forecasting academic journal quality and research impact using a survey of international experts from a national project on ranking academic finance journals in Taiwan. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI) for the Business–Finance (hereafter Finance) category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences between the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al., 2011a,b,c). The RAMs include the most widely used RAM, namely the classic 2-year impact factor including journal self-citations (2YIF), as well as the 2-year impact factor excluding journal self-citations (2YIF*), 5-year impact factor including journal self-citations (5YIF), Immediacy (or zero-year impact factor, 0YIF), Eigenfactor, Article Influence, C3PO (Citation Performance per Paper Online), h-index, PI-BETA (Papers Ignored — By even the Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly-cited journals in Finance. The harmonic mean of the ranks of the 10 RAMs for the 34 highly-cited journals is also presented. It is shown that emphasising the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A linear regression model is used to forecast expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high-quality papers, and quantitative information about a journal. The robustness of the rankings is also analysed.
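The harmonic mean of ranks mentioned above is simple to compute. The sketch below is illustrative only (not the authors’ code): each journal is assumed to have an integer rank, 1 being best, under each RAM.

```python
# Harmonic mean of a journal's ranks across RAMs:
#   HM = n / sum(1/r_i)
# Because small (good) ranks dominate the sum, HM rewards a journal's
# best performances and is less distorted by a single poor rank than
# the arithmetic mean would be.
def harmonic_mean_rank(ranks):
    return len(ranks) / sum(1.0 / r for r in ranks)

# Hypothetical ranks of one journal under 10 RAMs (1 = best).
ranks = [1, 3, 2, 5, 1, 4, 2, 6, 3, 2]
print(round(harmonic_mean_rank(ranks), 2))  # 2.09
```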


2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

‘If there is one thing every bibliometrician agrees, is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin’. Few sentences define the uses and misuses of the Journal Impact Factor more precisely than Anthony van Raan’s. This manuscript presents a critical overview of the international use, by governments and institutions, of the JIF and/or journal-indexing information for assessing the quality of individual research. Excerpts from interviews in which Nobel Laureates speak on this matter are also presented. Furthermore, the authors propose complementary and alternative versions of the journal impact factor, named the Complementary (CIF) and Timeless (TIF) Impact Factors respectively, aiming to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has just been misused.
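For context, the classic 2-year JIF that the authors critique is just a citation ratio; the sketch below shows its standard definition. (The paper’s CIF and TIF variants modify this formula; their exact definitions are not reproduced here.)

```python
# Classic 2-year Journal Impact Factor for year Y:
#   JIF(Y) = citations received in year Y to items published in Y-1, Y-2
#            divided by citable items (articles, reviews) from Y-1, Y-2.
def jif_2y(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# Example: 1200 citations in 2020 to papers from 2018-2019, and
# 400 citable items published in 2018-2019, gives JIF = 3.0.
print(jif_2y(1200, 400))  # 3.0
```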


2021 ◽  
Vol 5 ◽  
pp. 239821282110065
Author(s):  
Joseph Clift ◽  
Anne Cooke ◽  
Anthony R. Isles ◽  
Jeffrey W. Dalley ◽  
Richard N. Henson

Brain and Neuroscience Advances has grown in tandem with the British Neuroscience Association’s campaign to build Credibility in Neuroscience, which encourages actions and initiatives aimed at improving reproducibility, reliability and openness. This commitment to credibility shapes not only what the Journal publishes, but also how it operates. With that in mind, the Editorial Board sought the views of the neuroscience community on the peer-review process, and on how the Journal should respond to the Journal Impact Factor that will be assigned to Brain and Neuroscience Advances. In this editorial, we present the results of a survey of neuroscience researchers conducted in the autumn of 2020 and discuss the broader implications of our findings for the Journal and the neuroscience community.


2017 ◽  
Vol 15 (3-4) ◽  
pp. 1-11
Author(s):  
Jaime A. Teixeira da Silva ◽  
Aceil Al-Khatib

Without peer reviewers, the entire scholarly publishing system as we currently know it would collapse. However, as it currently stands, publishing is an extremely exploitative system relative to other business models: trained and specialized labour, in the form of editors and peer reviewers, is exploited, primarily by for-profit publishers, in return for a pat on the back and a public nod of thanks. This is the “standardized” and “accepted” way of producing the mainstream peer-reviewed literature. Except for open peer review, where reports are open and identities are known, traditional peer review is closed, and the content of peer reports is known only to the authors and editors involved. Publons launched in 2012 as a platform offering recognition to peer reviewers for their work. In 2016, Publons rewarded the most productive reviewers with a “Sentinels of Science” award, accompanied by a dismal monetary reward (38 US cents/review) for their efforts. As a site for registering pre- and post-publication peer efforts, Publons was perceived as a positive step towards a more transparent peer-review system. However, the continued presence of fake peer reviews and a spike in retractions, even among publishers that were Publons sponsors, suggests that peers may be exploiting Publons to gain recognition for superficial or poor peer review. Since not all reviews are public, their content and quality cannot be verified. On 1 June 2017, Publons was purchased by Clarivate™ Analytics, which owns the journal impact factor, a measure of the number of citations of papers in journals and most likely the most gamed non-academic factor in academic publishing. Many of those journals are published by the same for-profit publishers (including Publons sponsors) that “employ” free peer reviewers to quality-check the literature they then sell for profit. Although the purchase was touted as a way to increase transparency and stamp out fake peer review, some who had supported Publons felt betrayed, even cancelling their Publons accounts immediately upon learning of the purchase. Their concerns included the possible “gaming” of peer review, as had taken place with the journal impact factor. This commentary examines possible positive and negative aspects of this business transaction, and what it might mean to academics and publishers.


2021 ◽  
pp. 1-35
Author(s):  
Teresa Schultz

The goal of the open access (OA) movement is to help everyone, not just those who can afford it, access scholarly research. However, most studies examining whether OA has met this goal have focused on whether other scholars make use of OA research; few have considered how the broader public, including the news media, uses it. This study asked whether the news media mentions OA articles more or less than paywalled articles, examining articles published from 2010 through 2018 in journals across all four quartiles of the Journal Impact Factor, using data obtained from Altmetric.com and the Web of Science. Gold, green and hybrid OA articles all showed a positive correlation with the number of news mentions received. News mentions for OA articles dipped in 2018, although they remained higher than those for paywalled articles.
Peer Review: https://publons.com/publon/10.1162/qss_a_00139
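The core comparison in this study can be sketched in a few lines; the file and column names below are hypothetical stand-ins for the Altmetric.com and Web of Science data the author actually used.

```python
# Compare news mentions for OA vs paywalled articles, per year.
# Hypothetical columns: 'year', 'oa_status' (gold/green/hybrid/closed),
# 'news_mentions' (Altmetric.com count).
import pandas as pd

df = pd.read_csv("articles_2010_2018.csv")  # hypothetical dataset
df["is_oa"] = df["oa_status"].isin(["gold", "green", "hybrid"])

# Median news mentions per year, paywalled (False) vs OA (True).
summary = (df.groupby(["year", "is_oa"])["news_mentions"]
             .median()
             .unstack())
print(summary)
```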

