On Impact Factors and Research Assessment. At the Start of Volume 31 of Telematics and Informatics

2014 ◽  
Vol 31 (1) ◽  
pp. 1-2 ◽  
Author(s):  
Jan Servaes
2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016. It focused on the uses and misuses of the Journal Impact Factor (JIF), with particular attention to research assessment. The group’s report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified by its validity for those purposes, and that this retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers, and learned societies. They also include the creation of an international “metrics lab” to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations for how this use should be improved.

OSI2016 Workshop Question: Impact Factors
Tracking the metrics of a more open publishing world will be key to selling “open” and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 23 (2) ◽  
pp. 47-51
Author(s):  
Morwenna Senior ◽  
Seena Fazel

Metrics which quantify the impact of a scientist are increasingly incorporated into decisions about how to rate and fund individuals and institutions. Several commonly used metrics, based on journal impact factors and citation counts, have been criticised because they do not reliably predict real-world impact, are highly variable between fields, and are vulnerable to gaming. Bibliometrics have been incorporated into systems of research assessment, but these may create flawed incentives, failing to reward research that is validated and reproducible or that has wider impact. A recent proposal for a new standardised citation metric, based on a composite indicator of six measures, has led to an online database of 100,000 of the most highly cited scientists across all fields. In this perspective article, we provide an overview and evaluation of this new citation metric as it applies to mental health research. We summarise its findings for psychiatry and psychology, including clustering in certain countries and institutions, and outline some implications for mental health research. We discuss the strengths and limitations of this new metric, and how further refinements could align impact metrics more closely with the wider goals of scientific research.
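A composite indicator of this kind typically combines several citation measures after scaling each one so that no single measure dominates. The sketch below is a hypothetical illustration of that idea, assuming six log-scaled measures each normalized by the field maximum; the exact published formula may differ, and every number and key name here is invented for the example:

```python
from math import log

def composite_score(metrics, max_metrics):
    """Combine several citation measures into one composite indicator.

    Each measure is log-transformed and divided by the log-transformed
    maximum observed value for that measure, so each term lies in (0, 1];
    the scaled terms are then summed.
    """
    return sum(
        log(1 + metrics[k]) / log(1 + max_metrics[k])
        for k in metrics
    )

# Hypothetical researcher profile (measure names and values are
# assumptions made for illustration, not real data):
researcher = {
    "total_citations": 5200,
    "h_index": 35,
    "hm_index": 21.4,              # h-index adjusted for co-authorship
    "cites_single": 300,           # citations to single-authored papers
    "cites_single_first": 900,
    "cites_single_first_last": 2100,
}
field_max = {
    "total_citations": 120000,
    "h_index": 110,
    "hm_index": 60.0,
    "cites_single": 15000,
    "cites_single_first": 40000,
    "cites_single_first_last": 80000,
}

score = composite_score(researcher, field_max)   # between 0 and 6
```

With six measures the score is bounded by 6, reached only by a researcher who tops the field on every measure; the log transform damps the long right tail of raw citation counts.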


2014 ◽  
Vol 85 (4) ◽  
pp. 429-430 ◽  
Author(s):  
Gregory Welk ◽  
Mark G. Fischman ◽  
Christy Greenleaf ◽  
Louis Harrison ◽  
Lynda Ransdell ◽  
...  

2014 ◽  
Vol 9 (1) ◽  
pp. 56 ◽  
Author(s):  
Mathew Lee Stone

Objective – To quantify the value of librarianship and information science (LIS) knowledge exports to other subject disciplines.
Design – Bibliometric study.
Setting – LIS departments in U.K. universities.
Subjects – 232 LIS research articles published between 2001 and 2007.
Methods – Data from the 2008 U.K. Research Assessment Exercise were checked to identify the 405 research articles submitted by 10 selected university departments (out of the 21 in total that submitted research in the LIS category). The Web of Science database was then searched to determine how many of these articles had been cited by other articles (n = 232). If a citing article was published in a non-LIS journal, it was considered a knowledge export. Journals were defined as non-LIS if they had not been assigned the subject category Information Science & Library Science by Journal Citation Reports. The journal Impact Factors (IFs) of citing journals were then normalized to measure the value of individual knowledge exports to their respective subject disciplines: a citing journal’s IF was compared with the median journal IF within its subject category, and if the citing journal’s IF was above this median, the citation was considered a valuable knowledge export.
Main Results – The sample of LIS research articles produced a total of 1,061 knowledge exports in 444 unique non-LIS journals. These non-LIS journals covered 146 unique subject categories, of which those related to computer science and chemistry/pharmacology cited LIS research most frequently. Just over three-quarters (n = 798) of these citations were considered valuable knowledge exports. A sub-analysis showed that LIS articles published in non-LIS journals generated significantly more valuable knowledge exports than those published in LIS journals.
Conclusion – The validity of bibliometric studies can be improved by adopting the two methodological innovations presented in this study. The first is to avoid over-estimating the number of knowledge exports by discounting “part exports” (i.e., citations where the citing journal is assigned to multiple subject categories, one of which is the same as that of the cited reference). The second is to add an extra dimension to the analysis by measuring the value of each knowledge export, taking into account the “normalized” impact factor of citing journals.
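The two innovations described above reduce to two simple tests per citation. A minimal sketch, assuming the subject categories and Impact Factor values below (all figures are invented illustrations, not real Journal Citation Reports data):

```python
from statistics import median

def is_knowledge_export(citing_categories, cited_category):
    """Innovation 1: discount "part exports". A citation counts as a
    knowledge export only if none of the citing journal's subject
    categories matches the cited article's own category."""
    return cited_category not in citing_categories

def is_valuable_export(citing_if, category_ifs):
    """Innovation 2: a knowledge export is "valuable" when the citing
    journal's Impact Factor exceeds the median IF of the journals in
    its subject category (normalizing IFs across fields)."""
    return citing_if > median(category_ifs)

# Invented example: a computer-science journal citing an LIS article.
cs_category_ifs = [0.8, 1.2, 1.9, 2.4, 3.6]   # IFs in the citing category
export = is_knowledge_export(
    {"Computer Science"}, "Information Science & Library Science"
)
valuable = export and is_valuable_export(2.0, cs_category_ifs)
```

Comparing against the category median rather than the raw IF is what makes exports comparable across disciplines, since typical IFs differ by an order of magnitude between, say, chemistry and LIS.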


2007 ◽  
Vol 89 (3) ◽  
pp. 292-297 ◽  
Author(s):  
N Bhasin ◽  
DJA Scott

BACKGROUND – The Vascular Society of Great Britain and Ireland (VSGBI) annual meeting is a major international vascular surgery conference. Studies suggest that the percentage of presentations that result in full-text publications is a measure of the quality of a meeting. We investigated the publication outcome of abstracts presented to the VSGBI in 2001 and 2002.
MATERIALS AND METHODS – We retrospectively identified abstracts from the conference programmes and conducted a detailed electronic Medline and PubMed search to determine publication. We collected data on study design, subject matter, publishing journal, time to publication, institution of origin, impact factors, and Research Assessment Exercise (RAE) levels.
RESULTS – There were 63 publications from 106 abstracts (59.4%), with a median impact factor of 3.507. Prospective observational studies accounted for 20.6% of publications, with abdominal aortic aneurysms being the commonest subject matter (34.9%). The median time to publication was 12 months, with the European Journal of Vascular and Endovascular Surgery publishing 33.3% of the articles. Leicester achieved the highest number of publications, and the majority of the work came from centres with RAE scores of 4; university centres accounted for 74.6% of publications.
CONCLUSIONS – We conclude that, compared with equivalent meetings in other specialties and geographical regions, the annual meeting of the VSGBI is of the very highest quality.


2022 ◽  
Author(s):  
Olivier Pourret ◽  
Dasapta Erwin Irawan ◽  
Najmeh Shaghaei ◽  
Elenora M. van Rijsingen ◽  
Lonni Besançon

Measures of scientific success and impact are built on a system that prioritizes citations and impact factors. These measures are inaccurate, are biased against already under-represented groups, and fail to convey the range of individuals' significant scientific contributions, especially contributions to open science. We argue for a transition away from this out-of-date value system toward one that promotes science by promoting diversity, equity, and inclusion. Achieving systemic change will require a concerted effort led by academic leaders and administrators.


2012 ◽  
Vol 17 (3) ◽  
pp. 190-198 ◽  
Author(s):  
Günter Krampen ◽  
Thomas Huckert ◽  
Gabriel Schui

Taking five former German-language psychology journals as exemplary for other non-English-language psychology journals, this study analyzes the impact of their recent Anglicization on (1) authorship (nationality, i.e., native language, and number of authors, i.e., single or multiple authorship), (2) formal characteristics of the journals (number of articles per volume and length of articles), and (3) the number of citations of the articles in other journal articles, the language of the citing publications, and the journals’ impact factors (IFs). Scientometric data on these variables were gathered for all articles published in the four years before and the four years after each journal was anglicized. The results reveal rather quick changes: citations per year since an original article’s publication increase significantly, and the IFs of the journals go up markedly. After Anglicization, citations in German-language journals decrease while citations in English-language journals increase significantly, and there is a general trend of increasing citations in other languages as well. Side effects of anglicizing former German-language psychology journals include the publication of shorter papers, their availability to a more international authorship, and a slight but significant increase in multiple authorships.


1980 ◽  
Vol 35 (11) ◽  
pp. 1048-1049 ◽  
Author(s):  
Leighton E. Stamps ◽  
Lawrence A. Fehr
Keyword(s):  

IEE Review ◽  
1993 ◽  
Vol 39 (2) ◽  
pp. 75
Author(s):  
John E. Midwinter
Keyword(s):  

Phlebologie ◽  
1998 ◽  
Vol 27 (03) ◽  
pp. 77-83
Author(s):  
A. Finzen

Summary – Scientific achievements live on the originality of their authors; the attempt to quantify them appears to be a contradiction in terms. All the more irritating, then, is the triumphant rise of the so-called Impact Factor, a construct of the American Institute of Scientific Information (ISI) that claims to measure the standing of the scientists publishing in journals (those covered by the ISI) via the frequency with which those journals are cited. Since scientific and medical research institutions and faculties have begun to make the Impact Factor the basis for distributing research funds and a guillotine for scientific careers, it is poised to change international research culture. It is therefore time for the public to take note of this zeitgeist phenomenon and to confront its consequences.

