“Worth(less) papers” – are journal impact factor and number of citations suitable indicators to evaluate quality of scientists?

2018 ◽  
Vol XVI (2) ◽  
pp. 369-388 ◽  
Author(s):  
Aleksandar Racz ◽  
Suzana Marković

Technology-driven changes, with a consecutive increase in the online availability and accessibility of journals and papers, are rapidly changing patterns of academic communication and publishing. The dissemination of important research findings through the academic and scientific community begins with publication in peer-reviewed journals. The aim of this article is to identify, critically evaluate and integrate the findings of relevant, high-quality individual studies addressing the trends of enhancement of visibility and accessibility of academic publishing in the digital era. The number of citations a paper receives is often used as a measure of its impact and, by extension, of its quality. Many aberrations of citation practice have been reported in attempts to increase the impact of one’s papers through manipulation of self-citation, inter-citation and citation cartels. Authors’ efforts to legally extend the visibility, awareness and accessibility of their research outputs, raising citations and amplifying measurable personal scientific impact, have been strongly enhanced by online communication tools: networking (LinkedIn, ResearchGate, Academia.edu, Google Scholar), sharing (Facebook, blogs, Twitter, Google Plus), media sharing (SlideShare), data sharing (Dryad Digital Repository, Mendeley database, PubMed, PubChem), code sharing, impact tracking, and publishing in open access journals. Many studies and review articles in the last decade have examined whether open access articles receive more citations than equivalent subscription (toll access) articles, and most of them conclude that open access articles quite probably enjoy a citation advantage over generally equivalent pay-for-access articles in many, if not most, disciplines. But it remains questionable whether never-cited papers are indeed “worth(less) papers”, and whether journal impact factor and number of citations should be considered the only suitable indicators to evaluate the quality of scientists. The phrase “publish or perish”, usually used to describe the pressure in academia to rapidly and continually publish academic work to sustain or further one’s career, can now, in the 21st century, be reformulated as “publish, be cited, and maybe you will not perish”.

2019 ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

BACKGROUND The most common way to assess the impact of an article is the number of citations it receives. However, the number of citations does not precisely reflect whether the message of the paper is reaching a wider audience. Currently, social media is used to disseminate the contents of scientific articles. To measure this type of impact, a new tool named Altmetric was created; it aims to quantify the impact of each article across online media. OBJECTIVE This overview of methodological reviews aims to describe the associations of publishing-journal and published-article variables with Altmetric scores. METHODS Searches of MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library, from inception until July 2018, were conducted. We extracted data on trial and journal variables associated with Altmetric scores. RESULTS A total of 11 studies were considered eligible, summarizing a total of 565,352 articles. Citation counts, journal impact factor, access counts (the sum of HTML views and PDF downloads), publication as open access and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. CONCLUSIONS Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open access journals and the use of press releases, are also likely to influence online media attention. CLINICALTRIAL N/A
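The reviews summarized here report correlation magnitudes, and rank correlations suit skewed count data such as citations, downloads and Altmetric scores. Below is a minimal sketch of that kind of analysis, assuming a hypothetical per-article CSV; the file and column names are illustrative, not taken from the included studies.

```python
# A minimal sketch, not the procedure of any included review: Spearman rank
# correlations between Altmetric scores and the article-level variables the
# abstract names. File and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("articles.csv")  # hypothetical per-article dataset

for var in ["citation_count", "journal_impact_factor", "access_count"]:
    rho, p = spearmanr(df[var], df["altmetric_score"])
    # Rough reading convention: |rho| < 0.4 weak, 0.4-0.6 moderate
    print(f"{var}: rho = {rho:.2f} (p = {p:.4f})")
```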


2020 ◽  
Vol 1 (1) ◽  
pp. 28-59 ◽  
Author(s):  
Kyle Siler ◽  
Koen Frenken

Open access (OA) publishing has created new academic and economic niches in contemporary science. OA journals offer numerous publication outlets with varying editorial philosophies and business models. This article analyzes the Directory of Open Access Journals (DOAJ) (n = 12,127) to identify characteristics of OA academic journals related to the adoption of article processing charge (APC)-based business models, as well as the price points of journals that charge APCs. Journal impact factor (JIF), language, publisher mission, DOAJ Seal, economic and geographic regions of publishers, peer review duration, and journal discipline are all significantly related to the adoption and pricing of journal APCs. Even after accounting for other journal characteristics (prestige, discipline, publisher country), journals published by for-profit publishers charge the highest APCs. Journals with status endowments (JIF, DOAJ Seal), written in English, published in wealthier regions, and in medical or science-based disciplines also tend to be costlier. The OA publishing market reveals insights into forces that create economic and academic value in contemporary science. Political and institutional inequalities manifest in the varying niches occupied by different OA journals and publishers.
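The pricing analysis described above lends itself to a regression of APC price on journal characteristics. The sketch below is an assumption about how such a model could look, not Siler and Frenken's actual code; the data file and column names are hypothetical.

```python
# A minimal sketch, assuming a hypothetical export of DOAJ journal metadata.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

doaj = pd.read_csv("doaj_journals.csv")    # hypothetical input
apc = doaj[doaj["apc_usd"] > 0].copy()     # price model: APC-charging journals only

model = smf.ols(
    "np.log(apc_usd) ~ jif + has_doaj_seal + C(language) "
    "+ C(publisher_type) + C(region) + C(discipline)",
    data=apc,
).fit()
print(model.summary())  # per the abstract, for-profit publishers and JIF should raise APCs
```

Logging the APC is a common modeling choice because charges span orders of magnitude, from tens to thousands of US dollars.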


2017 ◽  
Vol 15 (3-4) ◽  
pp. 1-11
Author(s):  
Jaime A. Teixeira da Silva ◽  
Aceil Al-Khatib

Without peer reviewers, the entire scholarly publishing system as we currently know it would collapse. However, as it currently stands, publishing is an extremely exploitative system relative to other business models: the trained, specialized labor of editors and peer reviewers is exploited, primarily by for-profit publishers, in return for a pat on the back and a public nod of thanks. This is the “standardized” and “accepted” way of producing the mainstream peer-reviewed literature. Except for open peer review, where reports are open and identities are known, traditional peer review is closed, and the content of peer reports is known only to the authors and editors involved. Publons launched in 2012 as a platform that would offer recognition to peer reviewers for their work. In 2016, Publons rewarded the most productive reviewers with a “Sentinels of Science” award, accompanied by a dismal monetary reward (38 US cents per review) for their efforts. As a site aimed at registering pre- and post-publication peer review efforts, Publons was perceived as a positive step towards a more transparent peer review system. However, the continued presence of fake peer reviews and a spike in retractions, even among publishers that were Publons sponsors, suggest that peers may be exploiting Publons to gain recognition for superficial or poor peer review. Since not all reviews are public, their content and quality cannot be verified. On 1 June 2017, Clarivate™ Analytics purchased Publons. Clarivate owns the journal impact factor, a measure of the citation rate of papers in journals and most likely the most gamed non-academic factor in academic publishing; many of those journals are published by for-profit publishers, including Publons sponsors, that “employ” free peer reviewers to quality-check the literature they then sell for profit. Although the purchase was touted as a way to increase transparency and stamp out fake peer review, some who had supported Publons felt betrayed, even cancelling their Publons accounts immediately upon learning of the purchase. Their concerns included the possible “gaming” of peer review, as had taken place with the journal impact factor. This commentary examines possible positive and negative aspects of this business transaction, and what it might mean to academics and publishers.
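For reference, the metric at the center of this transaction has a simple arithmetic definition (the standard one, not specific to this article). For a journal in year y:

```latex
\mathrm{JIF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
```

where C_y(t) is the number of citations received in year y by items the journal published in year t, and N_t is the number of citable items it published in year t. Because the numerator sums over a heavily skewed citation distribution, a handful of highly cited papers can dominate the metric, which is part of what makes it gameable.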


2020 ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

Abstract Background: Currently, social media is used to disseminate the contents of scientific articles. To measure this type of impact, a new tool named Altmetric was created; it aims to quantify the impact of each article across online media. This overview of methodological reviews aims to describe the associations of publishing-journal and published-article variables with Altmetric scores. Methods: Search strategies were run on MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library. We extracted data on trial and journal variables associated with Altmetric scores. Results: A total of 11 studies were considered eligible, summarizing a total of 565,352 articles. Citation counts, journal impact factor, access counts, publication as open access and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. Conclusion: Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open access journals and the use of press releases, are also likely to influence online media attention. Systematic review registrations: Not applicable.


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that instead actions should be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this should be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason—less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 40 (10) ◽  
pp. 1136-1142 ◽  
Author(s):  
Malke Asaad ◽  
Austin Paul Kallarackal ◽  
Jesse Meaike ◽  
Aashish Rajesh ◽  
Rafael U de Azevedo ◽  
...  

Abstract Background: Citation skew refers to the unequal distribution of citations to articles published in a particular journal. Objectives: We aimed to assess whether citation skew exists within plastic surgery journals and to determine whether the journal impact factor (JIF) is an accurate indicator of the citation rates of individual articles. Methods: We used Journal Citation Reports to identify all journals within the field of plastic and reconstructive surgery. The number of citations in 2018 for all individual articles published in 2016 and 2017 was abstracted. Results: Thirty-three plastic surgery journals were identified, publishing 9823 articles. The citation distribution showed right skew, with the majority of articles having either 0 or 1 citation (40% and 25%, respectively). A total of 3374 (34%) articles achieved citation rates similar to or higher than their journal’s JIF, whereas 66% of articles failed to achieve a citation rate equal to the JIF. Review articles achieved higher citation rates (median, 2) than original articles (median, 1) (P < 0.0001). Overall, 50% of articles contributed 93.7% of citations, and 12.6% of articles contributed 50% of citations. A weak positive correlation was found between the number of citations and the JIF (r = 0.327, P < 0.0001). Conclusions: Citation skew exists within plastic surgery journals, as in other fields of biomedical science. Most articles did not achieve citation rates equal to the JIF, with a small percentage of articles having a disproportionate influence on citations and the JIF. Therefore, the JIF should not be used to assess the quality and impact of individual scientific work.
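The skew statistics reported above are straightforward to compute from per-article citation counts. The following is a minimal sketch, not the authors' code; the data file and column names are hypothetical.

```python
# A minimal sketch of the skew statistics in the abstract, computed from a
# hypothetical table of per-article 2018 citation counts and journal JIFs.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("plastic_surgery_articles.csv")         # hypothetical input
cites = np.sort(df["citations_2018"].to_numpy())[::-1]   # descending order

share_zero_or_one = np.mean(cites <= 1)                  # abstract: ~65% of articles
cum = np.cumsum(cites) / cites.sum()
half_idx = int(np.argmax(cum >= 0.5)) + 1                # articles supplying half of all citations
top_share_for_half = half_idx / len(cites)               # abstract: ~12.6%

r, p = pearsonr(df["citations_2018"], df["jif"])         # abstract: r = 0.327
print(f"0 or 1 citation: {share_zero_or_one:.0%}; "
      f"top {top_share_for_half:.1%} of articles supply 50% of citations; "
      f"r = {r:.3f} (p = {p:.4f})")
```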


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to its discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because its impact factor is lower? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary article discusses these questions and their answers thoroughly.


2017 ◽  
Vol 28 (22) ◽  
pp. 2941-2944 ◽  
Author(s):  
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that the Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or for hiring, promotion, or funding decisions. Since then, heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers and as a gauge of an individual’s or institution’s accomplishments has led to changes in policy and to the design and application of best practices to more accurately assess the quality and impact of our research. Herein I summarize the considerable progress made and the remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.


2017 ◽  
Vol 402 (7) ◽  
pp. 1015-1022 ◽  
Author(s):  
Usama Ahmed Ali ◽  
Beata M. M. Reiber ◽  
Joren R. ten Hove ◽  
Pieter C. van der Sluis ◽  
Hein G. Gooszen ◽  
...  

2020 ◽  
Author(s):  
John Antonakis ◽  
Nicolas Bastardoz ◽  
Philippe Jacquart

The impact factor has been criticized on several fronts, including that the distribution of citations to journal articles is heavily skewed. We nuance these critiques and show that the number of citations an article receives is significantly predicted by journal impact factor. Thus, impact factor can be used as a reasonably good proxy of article quality.
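As a sketch of how such a prediction can be tested (an assumed method, not necessarily the authors' actual model): because citation counts are overdispersed, a negative binomial regression of article-level citations on JIF is one standard approach.

```python
# A minimal sketch, assuming a hypothetical article-level dataset; not the
# authors' model. Negative binomial regression accommodates the heavy right
# skew of citation counts that the abstract acknowledges.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("articles_with_jif.csv")  # hypothetical input
nb = smf.negativebinomial("citations ~ jif", data=df).fit()
print(nb.summary())  # a positive, significant jif coefficient supports the claim
```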

