Five years post-DORA: promoting best practices for research assessment

2017 ◽  
Vol 28 (22) ◽  
pp. 2941-2944 ◽  
Author(s):  
Sandra L. Schmid

The San Francisco Declaration on Research Assessment (DORA) was penned 5 years ago to articulate best practices for how we communicate and judge our scientific contributions. In particular, it adamantly declared that the Journal Impact Factor (JIF) should never be used as a surrogate measure of the quality of individual research contributions, or in hiring, promotion, or funding decisions. Since then, heightened awareness of the damaging practice of using JIFs as a proxy for the quality of individual papers and as a measure of an individual's or institution's accomplishments has led to changes in policy, and to the design and application of best practices that more accurately assess the quality and impact of our research. Herein I summarize the considerable progress made and the remaining challenges that must be met to ensure a fair and meritocratic approach to research assessment and the advancement of research.

2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that instead actions should be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations for how this should be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason—less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this use of the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations, in which the skewness of journal citation distributions typically plays a central role. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
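The simulation argument can be sketched with a toy model (my own minimal construction under stated assumptions, not the authors' code): assume each article has an underlying value, citations measure that value with noise, and a journal-level average (a stand-in for the impact factor) competes with an article's own citation count as an estimator of that value. When citation noise is large relative to within-journal variation in value, averaging over many papers wins.

```python
import random
import statistics

random.seed(42)

def simulate(noise_sd, n_journals=50, papers_per_journal=200):
    """Toy model: each journal has a typical article value; an article's
    citation count is its true value plus Gaussian noise (floored at 0).
    Compare two estimators of an article's value: (a) its own citation
    count, (b) its journal's mean citation count (an impact-factor proxy).
    Returns the mean squared error of each estimator."""
    err_citations, err_jif = [], []
    for _ in range(n_journals):
        journal_level = random.gauss(10, 3)  # journal's typical article value
        values = [max(0.0, random.gauss(journal_level, 1))
                  for _ in range(papers_per_journal)]
        citations = [max(0.0, v + random.gauss(0, noise_sd)) for v in values]
        jif = statistics.mean(citations)     # journal-level indicator
        for v, c in zip(values, citations):
            err_citations.append((c - v) ** 2)
            err_jif.append((jif - v) ** 2)
    return statistics.mean(err_citations), statistics.mean(err_jif)

# Low citation noise: an article's own citations track its value better.
print(simulate(noise_sd=0.5))
# High citation noise: the journal-level average is the better estimator.
print(simulate(noise_sd=10))
```

The crossover depends on how within-journal variation in article value compares with citation noise, which mirrors the abstract's point that neither indicator dominates under all conditions.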


2016 ◽  
Author(s):  
Vincent Larivière ◽  
Véronique Kiermer ◽  
Catriona J. MacCallum ◽  
Marcia McNutt ◽  
Mark Patterson ◽  
...  

Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of these distributions and the variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
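The skew the authors describe follows from the JIF being, in essence, a mean: total citations received in a year to items a journal published in the two preceding years, divided by the number of citable items. A short sketch with invented citation counts (illustrative numbers only, not data from any real journal) shows how one highly cited paper pulls the mean far above what a typical paper receives:

```python
import statistics

# Hypothetical citation counts for one journal's papers published in the
# two preceding years (made-up, deliberately skewed distribution).
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 95]

# JIF-style calculation: total citations divided by citable items.
jif = sum(citations) / len(citations)
median = statistics.median(citations)
share_below_mean = sum(c < jif for c in citations) / len(citations)

print(f"JIF-style mean: {jif:.1f}")   # inflated by the one highly cited paper
print(f"median paper:   {median}")    # what a typical paper actually receives
print(f"papers below the mean: {share_below_mean:.0%}")
```

In this toy distribution most papers sit well below the journal-level average, which is why the citation performance of an individual paper cannot be read off the JIF.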


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this use of the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations, in which the skewness of journal citation distributions typically plays a central role. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.


eLife ◽  
2013 ◽  
Vol 2 ◽  
Author(s):  
Randy Schekman ◽  
Mark Patterson

It is time for the research community to rethink how the outputs of scientific research are evaluated and, as the San Francisco Declaration on Research Assessment makes clear, this should involve replacing the journal impact factor with a broad range of more meaningful approaches.


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to its discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary article discusses these questions and their answers thoroughly.


2017 ◽  
Vol 402 (7) ◽  
pp. 1015-1022 ◽  
Author(s):  
Usama Ahmed Ali ◽  
Beata M. M. Reiber ◽  
Joren R. ten Hove ◽  
Pieter C. van der Sluis ◽  
Hein G. Gooszen ◽  
...  

NEMESIS ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. 18
Author(s):  
Aleksandra Hebda ◽  
Guillaume A Odri ◽  
Raphael Olszewski

Objective: to develop and test the inter-observer reproducibility of an instructions-for-authors quality rating (IAQR) tool that measures the quality of instructions for authors at the journal level, with a view to improving editorial guidelines. Material and methods: the instructions for authors of 75 dental and maxillofacial surgery journals were assessed by two independent observers using an assessment tool inspired by AGREE, with 16 questions and a 1-to-4-point scale per answer. The two observers evaluated the instructions for authors independently and blind to the impact factor of each journal. Scores obtained from our tool were compared with the "journal impact factor 2013". Results: the IAQR showed excellent interobserver reproducibility (κ = 0.81) despite a difference in data distribution between observers. There was a weak positive correlation between the IAQR and the "journal impact factor 2013". Conclusions: the IAQR is a reproducible quality assessment tool at the journal level. It assesses the quality of instructions for authors and is a good starting point for possible improvements of the instructions for authors, especially with respect to their completeness. Nemesis relevance: 28% of dental and maxillofacial journals might revise their instructions for authors to provide a more up-to-date version.


2019 ◽  
Author(s):  
Miguel Abambres ◽  
Tiago Ribeiro ◽  
Ana Sousa ◽  
Eva Olivia Leontien Lantsoght

‘If there is one thing every bibliometrician agrees on, it is that you should never use the journal impact factor (JIF) to evaluate research performance for an article or an individual – that is a mortal sin.’ Few sentences define the uses and misuses of the Journal Impact Factor (JIF) as precisely as Anthony van Raan's. This manuscript presents a critical overview of the international use, by governments and institutions, of the JIF and/or journal indexing information for individual research quality assessment. Interviews given by Nobel Laureates speaking on this matter are partially illustrated in this work. Furthermore, the authors propose complementary and alternative versions of the journal impact factor, respectively named the Complementary (CIF) and Timeless (TIF) Impact Factors, aiming to better assess the average quality of a journal – never of a paper or an author. The idea behind impact factors is not useless; it has just been misused.


Stroke ◽  
2015 ◽  
Vol 46 (suppl_1) ◽  
Author(s):  
Steven Peters ◽  
Aaron Switzer ◽  
Shivanand Patil ◽  
Cheryl R McCreary ◽  
Martin Dichgans ◽  
...  

Introduction: The quality of reporting of neuroimaging methods for studies of cerebral small vessel disease is unknown. We systematically reviewed studies of MRI white matter hyperintensities (WMH) of vascular origin to determine the frequency of reporting of key aspects of neuroimaging methods, and whether reporting varied by sample size, study design or journal impact factor. Methods: Three raters independently reviewed 100 consecutive papers reporting WMH severity, either as a primary outcome or covariate, to abstract 50 study characteristics based on the published STRIVE standards (Wardlaw et al Lancet Neurol 2013). Final determinations were made by consensus. An aggregate quality score (range 0-11) was created by adding one point for reporting of each of 11 key characteristics (Table). Spearman correlation or the chi-square test, as appropriate, was used to test associations with the quality score. Results: Papers were published between 2009 and 2013 in journals with impact factors ranging from 0.56 to 15.3, with cohort (79%) and case-control (21%) studies represented. Quantitative computational methods were used in 28 studies. MR field strength, MRI sequence types, type of WMH measurement method, blinding and number of raters were reported frequently, but reporting of other characteristics was inconsistent (Table). The study quality score was not correlated with journal impact factor, sample size or cohort study design. Conclusions: There is inconsistent reporting of neuroimaging methods in the small vessel disease imaging literature. Increased adherence to published reporting standards, such as the STRIVE criteria, may facilitate more objective peer review of submitted manuscripts and increase the reproducibility of published results. More work is needed to facilitate adoption of standards and checklists by authors, reviewers and editors.

