Editorial Board Position Statement Regarding the Declaration on Research Assessment (DORA) Recommendations With Respect to Journal Impact Factors

2014
Vol 85 (4)
pp. 429-430
Author(s):
Gregory Welk
Mark G. Fischman
Christy Greenleaf
Louis Harrison
Lynda Ransdell
...  
2016
Vol 1
Author(s):
J. Roberto F. Arruda
Robin Champieux
Colleen Cook
Mary Ellen K. Davis
Richard Gedye
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), with a particular focus on research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers, and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use could be improved.

OSI2016 Workshop Question: Impact Factors

Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean?
What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019
Vol 23 (2)
pp. 47-51
Author(s):
Morwenna Senior
Seena Fazel

Metrics that quantify the impact of a scientist are increasingly incorporated into decisions about how to rate and fund individuals and institutions. Several commonly used metrics, based on journal impact factors and citation counts, have been criticised because they do not reliably predict real-world impact, are highly variable between fields, and are vulnerable to gaming. Bibliometrics have been incorporated into systems of research assessment, but these may create flawed incentives, failing to reward research that is validated, reproducible, and has wider impact. A recent proposal for a new standardised citation metric based on a composite indicator of six measures has led to an online database of 100 000 of the most highly cited scientists in all fields. In this perspective article, we provide an overview and evaluation of this new citation metric as it applies to mental health research. We provide a summary of its findings for psychiatry and psychology, including clustering in certain countries and institutions, and outline some implications for mental health research. We discuss strengths and limitations of this new metric, and how further refinements could align impact metrics more closely with the wider goals of scientific research.


2009
Vol 24 (S1)
pp. 1-1
Author(s):
P.N. van Harten
H.W. Hoek

The European portal www.psychiatrynet.eu offers a selection of high-quality links for the practising psychiatrist, but also for researchers in psychiatry. The editorial board has selected websites for 16 categories of psychiatric disorders, following the DSM system. There are also links related to treatment, specific areas of interest, and more general categories. For example, if you are looking for guidelines for the treatment of schizophrenia, you will find a brief (3-6 lines) description of the NICE and APA guidelines and a link to these guidelines (in PDF) in the category of psychotic disorders. You can search a wide variety of relevant, vetted information, such as the international organisations on eating disorders or on specific substance abuse. If you want to know the names of the members of the DSM-V workgroups or the 2007 journal impact factors, you can find them under the general categories or by using the search function of www.psychiatrynet.eu.


2019
Vol 3
pp. 13
Author(s):
Vishnu Chandra
Neil Jain
Pratik Shukla
Ethan Wajswol
Sohail Contractor
...  

Objectives: The integrated interventional radiology (IR) residency was established relatively recently compared to other specialties. Although some preliminary information is available from survey data, no comprehensive bibliometric analysis documenting the importance of the quantity and quality of research in applying to an integrated-IR program currently exists. As the first bibliometric analysis of matched IR residents, this study fills a gap in the literature. Materials and Methods: A list of matched residents from the 2018 integrated-IR match was compiled by contacting program directors. The Scopus database was used to search for resident research information, including total publications, first-author publications, radiology-related publications, and h-indices. Each matriculating program was categorized into one of five tiers based on the average faculty Hirsch index (h-index). Results: Sixty-three programs and 117 matched residents were identified and reviewed in the Scopus database. For the 2018 cycle, matched applicants produced 274 total publications, with a mean of 2.34 ± 0.41 publications per matched applicant. The average h-index for matched applicants was 0.96 ± 0.13. On univariate analysis, the number of radiology-related publications, highest journal impact factor, and h-index were all associated with an increased likelihood of matching into a higher-tier program (P < 0.05). Other research variables showed no statistically significant association. All applicants with PhDs matched into tier-one programs. Conclusions: Research is an important element in successfully matching into an integrated-IR residency. The h-index, number of radiology-related manuscripts, and highest journal impact factor are all positively associated with matching into a higher-tier program.
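The h-index used in this analysis has a simple operational definition: the largest h such that an author has h papers each cited at least h times. A minimal sketch of that definition in Python (illustrative only, not the study's code):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break
    return h

# A resident with five papers cited [10, 8, 5, 4, 3] times has h = 4:
# four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```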


Author(s):  
Brendan Luyt

This paper argues that the rise of the JIF is a result of the perceived value of quantification measures in modern society and the restructuring of capitalism. Two key implications of this acceptance are explored: an increase in global academic dependency and a lessening of autonomy in the scientific field.


2018
Vol 50 (1)
pp. 26-36
Author(s):
Igor Fischer
Hans-Jakob Steiger

2021
pp. 1-22
Author(s):
Metin Orbay
Orhan Karamustafaoğlu
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, unlike in other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates compared to other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper because of the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013-2018, the average journal impact factor in E&ER increased substantially, from 0.908 to 1.638, which is explained by the field's growth but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, the use of impact factors and related indicators is a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.


2001
Vol 51 (2)
pp. 111-117
Author(s):
A. Rostami-Hodjegan
G.T. Tucker
