Academic sell-out: How an obsession with metrics and rankings is damaging academia

2016 ◽  
pp. 161-172
Author(s):  
Thorsten Gruber

Increasingly, academics have to demonstrate that their research has academic impact. Universities normally use journal rankings and journal impact factors to assess the research impact of individual academics. More recently, citation counts for individual articles and the h-index have also been used to measure academic impact. There are, however, several serious problems with relying on journal rankings, journal impact factors and citation counts. For example, articles without any impact may be published in highly ranked journals or journals with high impact factors, whereas high-impact articles may appear in lower-ranked journals or journals with low impact factors. Citation counts can also be easily gamed and manipulated, and the h-index disadvantages early-career academics. This paper discusses these and several other problems and suggests alternatives such as post-publication peer review and open-access journals.

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jingda Ding ◽  
Ruixia Xie ◽  
Chao Liu ◽  
Yiqing Yuan

Purpose – This study distinguishes the academic influence of different papers published in journals of the same subject or field, based on a modification of the journal impact factor.

Design/methodology/approach – Taking SSCI journals in library and information science (LIS) as the research object, the authors first explore the degree of skewness in the citation distribution of journal articles. They then define the paper citation ratio as the weight used to modify the journal impact factor for the evaluation of papers, namely the weighted impact factor, and explore its feasibility for evaluating papers.

Findings – The results show that different degrees of skewness exist in the citation distributions of journal papers. In particular, 94% of journals' citation distributions are highly skewed, while the rest are moderately skewed. The weighted impact factor correlates more closely with the citation frequency of papers than the journal impact factor does. It resolves the problem that the journal impact factor tends to exaggerate the influence of low-cited papers in journals with high impact factors and to understate the influence of highly cited papers in journals with low impact factors.

Originality/value – The weighted impact factor is constructed from the skewness of the citation distribution of journal articles. It provides a new method for distinguishing the academic influence of different papers published in journals of the same subject or field, and thus avoids treating all papers published in the same journal as having the same academic impact.
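The abstract does not give the exact formula for the weighted impact factor, but a natural reading is that each paper's score scales the journal impact factor by the paper's citation ratio. The sketch below implements one such reading in Python; the ratio definition (paper citations relative to the journal's mean) and all names are assumptions for illustration, not the authors' published method.

```python
# A minimal sketch, assuming the paper citation ratio is the paper's
# citations relative to the mean citations of papers in the same journal.
# This reading and all names are illustrative, not the authors' formula.
from typing import Sequence

def weighted_impact_factor(paper_citations: int,
                           journal_citations: Sequence[int],
                           journal_if: float) -> float:
    """Scale the journal impact factor by a paper's citation ratio,
    so an average paper keeps the journal IF, a highly cited paper
    scores higher, and an uncited paper scores zero."""
    mean_citations = sum(journal_citations) / len(journal_citations)
    if mean_citations == 0:
        return 0.0
    return journal_if * (paper_citations / mean_citations)

# A highly skewed journal: one paper attracts most of the citations.
citations = [0, 1, 1, 2, 3, 40]
print(weighted_impact_factor(40, citations, journal_if=3.2))  # ~16.3
print(weighted_impact_factor(1, citations, journal_if=3.2))   # ~0.41
```

Under any such reading, papers in the same journal no longer share one score: the highly cited tail is rewarded and the rarely cited bulk is not, which is the situation the authors set out to avoid.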


2019 ◽  
Author(s):  
Amanda Costa Araujo Sr ◽  
Adriane Aver Vanin Sr ◽  
Dafne Port Nascimento Sr ◽  
Gabrielle Zoldan Gonzalez Sr ◽  
Leonardo Oliveira Pena Costa Sr

BACKGROUND The most common way to assess the impact of an article is the number of citations it receives. However, citation counts do not precisely reflect whether the message of a paper is reaching a wider audience. Social media are increasingly used to disseminate the content of scientific articles, and a tool named Altmetric was created to measure this type of impact: it aims to quantify the attention each article receives in online media. OBJECTIVE This overview of methodological reviews aims to describe the associations of journal-level and article-level variables with Altmetric scores. METHODS Searches of MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library, covering publications from inception to July 2018, were conducted. We extracted data on the published articles and the publishing journals associated with Altmetric scores. RESULTS A total of 11 studies were considered eligible, together summarizing 565,352 articles. Citation counts, journal impact factor, access counts (i.e. the sum of HTML views and PDF downloads), publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. CONCLUSIONS Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open-access journals and the use of press releases, are also likely to influence online media attention. CLINICALTRIAL N/A
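The reported associations are correlations between article or journal variables and Altmetric scores. As an illustration of how such a correlation is typically quantified, the sketch below computes a Spearman rank correlation on invented data; the values, and the choice of Spearman's rho, are illustrative and not taken from the review.

```python
# Illustrative only: invented data, not figures from the review.
from scipy.stats import spearmanr

citation_counts  = [2, 5, 8, 12, 20, 35, 50, 80]
altmetric_scores = [4, 1, 30, 2, 12, 5, 40, 9]

rho, p_value = spearmanr(citation_counts, altmetric_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho comes out around 0.5 here: a moderate association, in line
# with the weak-to-moderate magnitudes the review describes.
```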


2020 ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

Abstract Background: Social media are increasingly used to disseminate the content of scientific articles. To measure this type of impact, a tool named Altmetric was created; it aims to quantify the attention each article receives in online media. This overview of methodological reviews aims to describe the associations of journal-level and article-level variables with Altmetric scores. Methods: Searches were conducted on MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library. We extracted data on the published articles and the publishing journals associated with Altmetric scores. Results: A total of 11 studies were considered eligible, together summarizing 565,352 articles. Citation counts, journal impact factor, access counts, publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. Conclusion: Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open-access journals and the use of press releases, are also likely to influence online media attention. Systematic review registrations: Not applicable


2019 ◽  
Vol 3 ◽  
pp. 13 ◽  
Author(s):  
Vishnu Chandra ◽  
Neil Jain ◽  
Pratik Shukla ◽  
Ethan Wajswol ◽  
Sohail Contractor ◽  
...  

Objectives: The integrated interventional radiology (IR) residency was established relatively recently compared with other specialties. Although some preliminary information is available from survey data, no comprehensive bibliometric analysis documenting the importance of the quantity and quality of research in applying to an integrated IR program currently exists. As the first bibliometric analysis of matched IR residents, this study fills a gap in the literature. Materials and Methods: A list of matched residents from the 2018 integrated IR match was compiled by contacting program directors. The Scopus database was used to search for resident research information, including total publications, first-author publications, radiology-related publications, and h-indices. Each matriculating program was categorized into one of five tiers based on the average faculty Hirsch index (h-index). Results: Sixty-three programs and 117 matched residents were identified and reviewed in the Scopus database. For the 2018 cycle, 274 total publications were produced by matched applicants, with a mean of 2.34 ± 0.41 publications per matched applicant. The average h-index for matched applicants was 0.96 ± 0.13. On univariate analysis, the number of radiology-related publications, highest journal impact factor, and h-index were all associated with an increased likelihood of matching into a higher-tier program (P < 0.05). Other research variables showed no statistically significant association. All applicants with PhDs matched into tier-one programs. Conclusions: Research is an important element in successfully matching into an integrated IR residency. The h-index, the number of radiology-related manuscripts, and the highest journal impact factor are all positively associated with matching into a higher-tier program.
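For reference, the h-index used here to rank both applicants and faculty is the largest h such that h of a person's papers have at least h citations each. A minimal sketch of that standard definition (not code from the study):

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have >= h citations each."""
    h = 0
    # Walk the citation counts from most to least cited.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A matched applicant with three papers cited 4, 2 and 0 times:
print(h_index([4, 2, 0]))  # 2
```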


Author(s):  
Gianfranco Pacchioni

This chapter discusses how performance is measured in science, notably through citation metrics. It then weighs the pros and cons of bibliometric indexes and of the 'impact factor', which was introduced by Eugene Garfield in 1955 but not widely used until twenty years later. The various ways that journals attempt to improve their impact factors, and how this affects science, are also examined. Beyond the impact factor, the chapter explores the role of other indicators used to evaluate scientists, such as the more recently introduced h-index. Finally, fashions and trends in science are touched upon, illustrated with personal anecdotes from the author.


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, which differs from other areas. Papers published in Q1 journals receive more citations on average and have lower uncitedness rates than papers in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper because of the high skewness of citation distributions. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased substantially, from 0.908 to 1.638, which is explained by the field's growth but also by the increase in international collaboration and in the share of papers published open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools into the objective and consistent assessment of researchers.
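The two quantities doing the work in this analysis are the skewness of each journal's citation distribution and Pearson correlations between journal-level indicators. A minimal sketch of both computations with SciPy, on invented numbers rather than the study's SSCI data:

```python
# Illustrative only: invented numbers, not the study's SSCI dataset.
from scipy.stats import pearsonr, skew

# Citations to the papers of one hypothetical journal.
journal_citations = [0, 0, 1, 1, 2, 2, 3, 5, 9, 40]
print(f"skewness = {skew(journal_citations):.2f}")  # strongly right-skewed

# A journal-level correlation, e.g. impact factor vs. median-paper citedness.
impact_factor    = [0.9, 1.2, 1.6, 2.1, 2.8, 3.5]
median_citedness = [1, 1, 2, 3, 4, 6]
r, p = pearsonr(impact_factor, median_citedness)
print(f"r = {r:.3f} (p = {p:.4f})")
```

The more right-skewed a journal's citations are, the less its impact factor (a mean) says about any single paper, which is exactly why the study pairs the correlation analysis with the skewness measurements.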


Author(s):  
Susie Allard ◽  
Ali Andalibi ◽  
Patty Baskin ◽  
Marilyn Billings ◽  
Eric Brown ◽  
...  

Following up on recommendations from OSI 2016, this team will dig deeper into the question of developing and recommending new tools to repair or replace the journal impact factor (and/or the way it is used), and will propose actions the OSI community can take between now and the next meeting. What is needed? What change is realistic, and how do we get there from here?


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves toward open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this could be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2013 ◽  
Vol 51 (1) ◽  
pp. 173-189 ◽  
Author(s):  
David I Stern

Academic economists appear to be intensely interested in rankings of journals, institutions, and individuals. Yet there is little discussion of the uncertainty associated with these rankings. To illustrate the uncertainty associated with citation-based rankings, I compute the standard error of the impact factor for all economics journals with a five-year impact factor in the 2011 Journal Citation Reports. I use these to derive confidence intervals for the impact factors, as well as ranges of possible rank for a subset of thirty journals. I find that the impact factors of the top two journals are well defined and set these journals apart in a clearly defined group. An elite group of 9–11 mainstream journals can also be fairly reliably distinguished. The four bottom-ranked journals are also fairly clearly set apart. For the remainder of the distribution, confidence intervals overlap and rankings are quite uncertain. (JEL A14)
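Stern's calculation treats the impact factor as the mean number of citations per article, so its standard error is the usual s/√n and the 95% confidence interval is IF ± 1.96·SE. A minimal sketch of that logic on invented citation counts (not the 2011 JCR data):

```python
# Illustrative only: invented citation counts, not the 2011 JCR data.
import math
import statistics

def impact_factor_ci(citations: list[int], z: float = 1.96):
    """Impact factor as mean citations per article, with a
    normal-approximation confidence interval."""
    n = len(citations)
    mean = statistics.mean(citations)             # the impact factor
    se = statistics.stdev(citations) / math.sqrt(n)
    return mean, (mean - z * se, mean + z * se)

citations = [0, 0, 1, 1, 2, 3, 3, 5, 8, 22]
if_value, (low, high) = impact_factor_ci(citations)
print(f"IF = {if_value:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

When intervals like these overlap across journals, their ranks cannot be reliably ordered, which is the source of the mid-table uncertainty the paper reports.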

