A Journal-Level Analysis of Progress in Transplantation

2017 ◽  
Vol 28 (1) ◽  
pp. 19-23
Author(s):  
Thomas Feeley ◽  
Seyoung Lee ◽  
Shin-Il Moon

Context: Citations to articles published in academic journals serve as a proxy for influence in bibliometrics. Objective: To measure the journal impact factor for Progress in Transplantation (PIT) over time and to identify related journals indexed in transplantation and surgery. Design: Data from Journal Citation Reports (ISI Web of Science) were used to rank Progress in Transplantation against peer journals using journal impact and journal relatedness measures. Social network analysis was used to measure relationships between pairs of journals in Progress in Transplantation's relatedness network. Main Outcome Measures: Journal impact factor and journal relatedness. Results: Data from 2010 through 2015 indicate the average journal article in PIT was cited 0.87 times (standard deviation [SD] = 0.12), and this estimate was stable over time. Progress in Transplantation most often cited American Journal of Transplantation, Transplantation, American Journal of Kidney Diseases, and Liver Transplantation. In terms of citations received, the journal was most often referenced by Clinical Transplantation, Transplant International, and Current Opinion in Organ Transplantation. Conclusion: The journal is listed in both the surgery and transplantation categories of Journal Citation Reports, and its impact factors over time compare more favorably with surgery journals than with transplantation journals. Network data using betweenness centrality indicate Progress in Transplantation links transplantation-focused journals with journals indexed in health sciences categories.
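
The betweenness-centrality measure cited in the conclusion can be reproduced with standard network-analysis tooling. Below is a minimal sketch, assuming a toy relatedness network built with the networkx library; the journal pairs are illustrative placeholders, not the study's actual relatedness data.

```python
# Minimal sketch of the betweenness-centrality analysis described above.
# Assumes networkx is installed; journals and edges are hypothetical placeholders,
# not the relatedness data reported in the study.
import networkx as nx

# Each edge links two journals that reference one another in the relatedness network.
edges = [
    ("Progress in Transplantation", "American Journal of Transplantation"),
    ("Progress in Transplantation", "Clinical Transplantation"),
    ("Progress in Transplantation", "Journal of Nursing Scholarship"),  # health-sciences side
    ("American Journal of Transplantation", "Transplantation"),
    ("Clinical Transplantation", "Transplant International"),
]

G = nx.Graph(edges)

# Betweenness centrality: the share of shortest paths between journal pairs
# that pass through a given journal; high values indicate a bridging role.
centrality = nx.betweenness_centrality(G, normalized=True)
for journal, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {journal}")
```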

2019 ◽  
Vol 24 (7) ◽  
pp. 1121-1123
Author(s):  
Zhi-Qiang Zhang

Journal impact factors for 2018 were recently announced by Clarivate Analytics in the June 2019 edition of Journal Citation Reports (JCR). In this editorial, I compared the impact factor of Systematic and Applied Acarology (SAA) with those of other main acarological journals, as I did in Zhang (2017). Following Zhang (2018a), I also highlighted the top 10 SAA papers from 2016/2017 with the highest numbers of citations in 2018 (according to the JCR June 2019 edition). In addition, I remarked on the increasing impact of developing countries and emerging markets in systematic and applied acarology, both in the number of publications and in citations, and included announcements of meetings on applied acarology.


Aquichan ◽  
2011 ◽  
Vol 11 (3) ◽  
pp. 245-255 ◽  
Author(s):  
Cayetano Fernández-Sola ◽  
José Granero-Molina ◽  
José Manuel Hernández-Padilla ◽  
Gabriel Aguilera-Manrique ◽  

This article summarizes the criticisms of using the impact factor (IF) as an indicator of the quality of publications and of researchers' output. These criticisms extend to authors who try to publish in journals with an IF, arguing that in doing so they surrender their own identity, placing their curriculum vitae above the usefulness of their research. Against these criticisms it is argued that demanding evaluation criteria serve as a stimulus for the internationalization of the scientific system. There is consensus in the academic community about the imperfections of the IF and about its acceptance as a valid and necessary resource for scientific evaluation, as well as about the fact that the identity debate contributes little to resolving the international invisibility of nursing research in Spanish. Proposals are outlined that build on existing strengths to increase and make visible this research, develop strategies to include and keep Spanish-language journals in the Journal Citation Reports (JCR), foster interdisciplinary training and cooperation, promote the publication of research carried out in graduate programs, and call on publishers to commit to indexing their journals in the JCR. It is concluded that, although difficult, it is possible to increase the visibility of Spanish-language nursing research output.


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, which differs from other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates than those in other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor remains far from a perfect indicator of the expected citations of a paper because of the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased considerably, from 0.908 to 1.638, which is explained by the growth of the field but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools into the objective and consistent assessment of researchers.
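
The correlations reported above (for example, r = -0.184 with journal self-citation and r = 0.864 with the citedness of the median paper) are ordinary Pearson coefficients, and the skewness of a citation distribution can be computed alongside them. A minimal sketch, assuming numpy/scipy and made-up per-journal figures rather than the SSCI data used in the study:

```python
# Minimal sketch of the correlation and skewness calculations discussed above.
# The arrays below are made-up illustrative values, not the SSCI E&ER data.
import numpy as np
from scipy import stats

impact_factor      = np.array([3.2, 2.1, 1.6, 1.1, 0.7])      # one value per journal
median_paper_cites = np.array([4,   3,   2,   1,   1  ])      # citedness of the median paper
self_citation_rate = np.array([0.05, 0.08, 0.10, 0.12, 0.15])

# Pearson correlations of the kind reported in the study.
r_median, _ = stats.pearsonr(impact_factor, median_paper_cites)
r_self, _   = stats.pearsonr(impact_factor, self_citation_rate)
print(f"IF vs median-paper citedness: r = {r_median:.3f}")
print(f"IF vs journal self-citation:  r = {r_self:.3f}")

# Skewness of one journal's citation distribution (most papers cited little,
# a few cited heavily), which limits the IF as a predictor for single papers.
citations_per_paper = np.array([0, 0, 0, 1, 1, 1, 2, 2, 3, 5, 8, 40])
print(f"citation skewness = {stats.skew(citations_per_paper):.2f}")
```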


Author(s):  
Susie Allard ◽  
Ali Andalibi ◽  
Patty Baskin ◽  
Marilyn Billings ◽  
Eric Brown ◽  
...  

Following up on recommendations from OSI 2016, this team will dig deeper into the question of developing and recommending new tools to repair or replace the journal impact factor (and/or how it is used), and propose actions the OSI community can take between now and the next meeting. What’s needed? What change is realistic and how will we get there from here?


2016 ◽  
Vol 1 ◽  
Author(s):  
J. Roberto F. Arruda ◽  
Robin Champieux ◽  
Colleen Cook ◽  
Mary Ellen K. Davis ◽  
Richard Gedye ◽  
...  

A small, self-selected discussion group was convened to consider issues surrounding impact factors at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016, and focused on the uses and misuses of the Journal Impact Factor (JIF), particularly in research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and retards moves to open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers, and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this should be improved.

OSI2016 Workshop Question: Impact Factors. Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?


2019 ◽  
Vol 40 (10) ◽  
pp. 1136-1142 ◽  
Author(s):  
Malke Asaad ◽  
Austin Paul Kallarackal ◽  
Jesse Meaike ◽  
Aashish Rajesh ◽  
Rafael U de Azevedo ◽  
...  

Background: Citation skew refers to the unequal distribution of citations to articles published in a particular journal. Objectives: We aimed to assess whether citation skew exists within plastic surgery journals and to determine whether the journal impact factor (JIF) is an accurate indicator of the citation rates of individual articles. Methods: We used Journal Citation Reports to identify all journals within the field of plastic and reconstructive surgery. The number of citations in 2018 for all individual articles published in 2016 and 2017 was abstracted. Results: Thirty-three plastic surgery journals were identified, publishing 9823 articles. The citation distribution showed right skew, with the majority of articles having either 0 or 1 citation (40% and 25%, respectively). A total of 3374 (34%) articles achieved citation rates similar to or higher than their journal's JIF, whereas 66% of articles failed to achieve a citation rate equal to the JIF. Review articles achieved higher citation rates (median, 2) than original articles (median, 1) (P < 0.0001). Overall, 50% of articles contributed 93.7% of citations, and 12.6% of articles contributed 50% of citations. A weak positive correlation was found between the number of citations and the JIF (r = 0.327, P < 0.0001). Conclusions: Citation skew exists within plastic surgery journals, as in other fields of biomedical science. Most articles did not achieve citation rates equal to the JIF, with a small percentage of articles having a disproportionate influence on citations and the JIF. Therefore, the JIF should not be used to assess the quality and impact of individual scientific work.
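
The journal-level figure each article is compared against here is a two-year ratio: citations received in 2018 by items published in 2016–2017, divided by the number of those items; the skew summaries (for example, 12.6% of articles contributing 50% of citations) follow from the sorted per-article counts. The sketch below illustrates both calculations on fabricated counts, not the plastic surgery journals analysed in the study.

```python
# Minimal sketch of the JIF-style ratio and the citation-skew summary used above.
# The citation counts are fabricated for illustration only.
import numpy as np

# 2018 citations to individual articles a journal published in 2016-2017.
cites_2018 = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 6, 9, 15, 35])

# Two-year impact factor: total citations divided by the number of citable items.
jif = cites_2018.sum() / cites_2018.size
print(f"JIF-style ratio: {jif:.2f}")

# Share of articles meeting or exceeding the journal-level figure.
share_at_or_above = (cites_2018 >= jif).mean()
print(f"articles at or above the JIF: {share_at_or_above:.0%}")

# Citation skew: what fraction of articles accounts for half of all citations?
sorted_cites = np.sort(cites_2018)[::-1]
cumulative = np.cumsum(sorted_cites) / cites_2018.sum()
n_for_half = int(np.searchsorted(cumulative, 0.5) + 1)
print(f"{n_for_half / cites_2018.size:.0%} of articles yield 50% of citations")
```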


2019 ◽  
Vol 124 (12) ◽  
pp. 1718-1724 ◽  
Author(s):  
Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the category Cardiac & Cardiovascular Systems of the Web of Science is extremely skewed. This skewness is to the right, which means that there is a long tail of papers that are cited much more frequently than the other papers of the same journal. The consequence is that there is a large difference between the mean and the median of the citations of the papers published by these journals. I further found that there are no differences between the citation distributions of the top 4 journals: European Heart Journal, Circulation, Journal of the American College of Cardiology, and Circulation Research. Despite the fact that the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation of their articles plus reviews (IF Median) was 10 for all 4 journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may suggest a nonexistent difference. The IF Median is substantially lower than the journal impact factor for all 30 journals under consideration in this article. Finally, the IF Median has the additional advantage that there is no artificial ranking of 128 journals in the category but rather an attribution of journals to a limited number of classes with comparable impact.
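
The case for an IF Median rests on the gap between the mean and the median of a right-skewed citation distribution. A minimal sketch on invented counts (not the Web of Science data for these journals) shows how far the two indicators can diverge:

```python
# Minimal sketch of the mean-versus-median gap for a right-skewed
# citation distribution; the counts are invented, not journal data.
import numpy as np

# Citations to articles plus reviews of a hypothetical journal in one JCR window.
citations = np.array([2, 3, 4, 5, 6, 8, 10, 10, 12, 14, 18, 25, 60, 150, 400])

mean_if   = citations.mean()       # analogous to the classical impact factor
if_median = np.median(citations)   # the IF Median proposed in the article

print(f"mean-based indicator: {mean_if:.1f}")    # pulled upward by the long right tail
print(f"IF Median:            {if_median:.1f}")  # robust to a few highly cited papers
```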


2019 ◽  
Vol 58 (2) ◽  
pp. 282-300
Author(s):  
Felicitas Hesselmann ◽  
Cornelia Schendzielorz

This contribution seeks to provide a more detailed insight into the entanglement of value and measurement. Drawing on insights from semiotics and a Bourdieusian perspective on language as an economy of linguistic exchange, we develop the theoretical concept of value-measurement links and distinguish three processes – operationalisation, nomination, and indetermination – as forms in which these links can be constructed. We illustrate these three processes using (e)valuation practices in science, particularly the journal impact factor, as an empirical object of investigation. As this example illustrates, measured values can function as building blocks for further measurements and thus establish chains of evaluations, in which it becomes more and more obscure which values the measurements actually express. We conclude that, in the case of measured values such as impact factors, these chains are driven by the interplay between the interpretative openness of language and the apparent tendency of numbers to fix meaning, thus continually re-creating, transforming, and modifying values.


2020 ◽  
pp. 104973152096377
Author(s):  
Monit Cheung ◽  
Patrick Leung

Purpose: With journal publishing being an important task for academicians, this article aims to help faculty and researchers increase their productivity by identifying journals with influential impacts on producing scientific knowledge. Method: Since 2004, the authors have compiled and updated a journal list annually for social work faculty to use. This list aims to help faculty and researchers, including doctoral students, identify journals with significant scholarly impacts in social work and related fields for national and international recognition. Results: A total of 221 journals are included in the study, covering 44 social work journals, with two indexes reported: the Journal Impact Factor® from the Journal Citation Reports® and the h-index. Discussion: This list aims to help scholars find appropriate journals for article submissions. The criteria the authors used to select journals for inclusion in the publication list are also discussed.
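
The h-index reported alongside the Journal Impact Factor in this list is the largest h such that at least h items have each been cited at least h times. A minimal sketch, assuming only a plain list of citation counts rather than the data behind the published journal list:

```python
# Minimal sketch of the h-index calculation referenced above.
# The citation counts are placeholders, not data from the published journal list.
def h_index(citations):
    """Largest h such that at least h items have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 18, 12, 7, 6, 3, 1, 0]))  # prints 5: five items with >= 5 citations each
```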


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary discusses these questions and their answers thoroughly.

