The Eigenfactor™ Score in Highly Specific Medical Fields

2012 ◽  
Vol 91 (4) ◽  
pp. 329-333 ◽  
Author(s):  
A. Sillet ◽  
S. Katsahian ◽  
H. Rangé ◽  
S. Czernichow ◽  
P. Bouchard

We sought to compare the Eigenfactor Score™ journal ranking with the journal Impact Factor over five years and to identify variables that may influence the ranking differences between the two metrics. Datasets were retrieved from the Thomson Reuters® and Eigenfactor Score™ websites. Dentistry was identified as the most specific medical specialty. Variables were retrieved from the selected journals for inclusion in a linear regression model. Among the 46 dental journals included in the analysis, striking variations in rank were observed depending on the metric used. The Bland-Altman plot showed poor agreement between the metrics. The multivariate analysis indicated that the number of original research articles, the number of reviews, self-citations, and citing time may explain the differences between ranks. The Eigenfactor Score™ appears to capture the prestige of a journal better than the Impact Factor. In medicine, bibliometric indicators should focus not only on the overall medical field but also on specialized disciplinary fields. Distinct measures are needed to better describe the scientific impact of specialized medical publications.
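As a rough illustration of the Bland-Altman comparison of two rankings described above, the sketch below compares two hypothetical rank lists for 46 journals; the rank values are invented for illustration and are not the study's data.

```python
# Minimal sketch of a Bland-Altman agreement plot for two journal rankings.
# The rank values are hypothetical, not the study's dataset.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
if_rank = np.arange(1, 47)                      # ranks under the Impact Factor
ef_rank = if_rank + rng.integers(-8, 9, 46)     # ranks under the Eigenfactor Score

mean_rank = (if_rank + ef_rank) / 2
diff_rank = if_rank - ef_rank
bias = diff_rank.mean()
loa = 1.96 * diff_rank.std(ddof=1)              # 95% limits of agreement

plt.scatter(mean_rank, diff_rank)
plt.axhline(bias, color="k")
plt.axhline(bias + loa, color="k", linestyle="--")
plt.axhline(bias - loa, color="k", linestyle="--")
plt.xlabel("Mean of the two ranks")
plt.ylabel("Rank difference (IF - Eigenfactor)")
plt.show()
```

Wide limits of agreement relative to the range of ranks would indicate the kind of poor agreement the abstract reports.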

2015 ◽  
Vol 134 (1) ◽  
pp. 74-78 ◽  
Author(s):  
Renan Moritz Varnier Rodrigues de Almeida ◽  
Fernanda Catelani ◽  
Aldo José Fontes-Pereira ◽  
Nárrima de Souza Gave

CONTEXT AND OBJECTIVE: An increased frequency of retractions has recently been observed, and retractions are important events that deserve scientific investigation. This study aimed to characterize cases of retraction within general and internal medicine in a high-profile database, with interest in the country of origin of the article and the impact factor (IF) of the journal in which the retraction was made. DESIGN AND SETTING: This study consisted of reviewing retraction notes in the Thomson Reuters Web of Knowledge (WoK) indexing database, within general and internal medicine. METHODS: The retractions were classified as plagiarism/duplication, error, fraud and authorship problems, and then aggregated into two categories: "plagiarism/duplication" and "others." The countries of origin of the articles were dichotomized according to the median of the indicator "citations per paper" (CPP), and the IF was dichotomized according to its median within general and internal medicine, also obtained from the WoK database. These variables were analyzed using contingency tables according to CPP (high versus low), IF (high versus low) and period (1992-2002 versus 2003-2014). The relative risk (RR) and 95% confidence interval (CI) were estimated for plagiarism/duplication. RESULTS: A total of 86 retraction notes were identified, and retraction reasons were found for 80 of them. The probability that plagiarism/duplication was the reason for retraction was more than three times higher for the low-CPP group (RR: 3.4; 95% CI: 1.9-6.2), and similar results were seen in the IF analysis. CONCLUSION: The study identified a greater incidence of plagiarism/duplication among retractions from countries with lower scientific impact.
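The relative risk and its 95% confidence interval follow from a standard 2x2 contingency-table calculation. The sketch below uses hypothetical counts chosen only to illustrate the arithmetic; the study itself reports RR = 3.4 (95% CI 1.9-6.2).

```python
# Minimal sketch of a relative-risk calculation from a 2x2 table.
# The counts are hypothetical, not the study's data.
import math

def relative_risk(a, b, c, d):
    """a/b: plagiarism vs. other reasons in the low-CPP group;
    c/d: plagiarism vs. other reasons in the high-CPP group."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

print(relative_risk(a=28, b=12, c=10, d=30))
```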


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using Web of Science (Social Sciences Citation Index, SSCI) as the data source. High-impact journals (Q1) publish only slightly more papers than expected, which differs from other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates than those in the other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = -0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper because of the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased substantially, from 0.908 to 1.638, which is explained by the growth of the field but also by the increase in international collaboration and in the share of papers published in open access. Despite their inherent limitations, impact factors and related indicators are a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.
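The correlation and skewness checks described above can be reproduced in outline as follows; the journal-level values are synthetic, not the SSCI E&ER dataset, and the variable names are placeholders.

```python
# Minimal sketch of journal-level correlation and skewness checks on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 250
self_citation_rate = rng.uniform(0, 0.3, n)
median_paper_citedness = rng.gamma(2.0, 1.0, n)
impact_factor = 0.9 * median_paper_citedness + rng.normal(0, 0.5, n)

print(stats.pearsonr(impact_factor, self_citation_rate))      # expected: weak
print(stats.pearsonr(impact_factor, median_paper_citedness))  # expected: strong

# Skewness of one journal's per-paper citation counts
citations = rng.negative_binomial(1, 0.2, 400)
print(stats.skew(citations))
```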


2012 ◽  
Vol 17 (6) ◽  
pp. 1629-1634 ◽  
Author(s):  
Adriana Luchs

In the last few years, bibliometric studies seeking to provide data on world research have proliferated. This study analyzes the profile of Brazilian scientific production in the influenza A (H1N1) field between 2009 and 2011. The search was conducted in the MEDLINE, SciELO and LILACS databases, selecting papers in which the terms "H1N1" and "Brazil" were defined as the main topics. The data were analyzed according to the Brazilian state and institution in which the articles were produced, the impact factor of the journal, and the language. The search retrieved 40 documents (27 from MEDLINE, 16 from SciELO and 24 from LILACS). The journal impact factor ranged from 0.0977 to 8.1230. Similar numbers of articles were written in English and in Portuguese, São Paulo was the most productive state in the country, and 95% of the Brazilian production originated from the Southern and Southeastern regions. The language data indicate that previous efforts to help Brazilian researchers bring their observations to a broader scientific audience have produced results. It is necessary to assess scientific studies, especially those conducted with public funds, to ensure that their results benefit society.


2016 ◽  
Vol 42 (4) ◽  
pp. 324-337 ◽  
Author(s):  
Chia-Lin Chang ◽  
Michael McAleer

Purpose – Both journal self-citations and exchanged citations have the effect of increasing a journal's impact factor, which may be deceptive. The purpose of this paper is to analyse academic journal quality and research impact using quality-weighted citations versus total citations, based on the widely used Thomson Reuters ISI Web of Science citations database (ISI). A new Index of Citations Quality (ICQ), based on quality-weighted citations, is presented. Design/methodology/approach – The new index is used to analyse the leading 500 journals in both the sciences and social sciences, as well as in finance and accounting, using quantifiable Research Assessment Measures (RAMs) that are based on alternative transformations of citations. Findings – ICQ is shown to be a useful addition to the 2-year impact factor (2YIF) and other well-known RAMs for evaluating the impact, quality and ranking of journals, as it contains information that has very low correlations with the information contained in the well-known RAMs for the sciences and social sciences, and for finance and accounting. Practical implications – Journals can, and do, inflate their number of citations through self-citation practices, which may be coercive. Another method for distorting journal impact is for a set of journals to agree to cite each other, that is, to exchange citations. This may be less coercive than self-citation, but it is nonetheless unprofessional and distortionary. Social implications – The premise underlying the use of citations data is that higher-quality journals generally have a higher number of citations. The impact of citations can be distorted in a number of ways, both consciously and unconsciously. Originality/value – Regardless of whether self-citations arise through collusive practices, the increase in citations will affect both the 2YIF and the 5-year impact factor (5YIF), though not the Eigenfactor or Article Influence. This leads to an ICQ in which a higher value would generally be preferred to a lower one. Unlike 5YIF, which is increased by journal self-citations and exchanged citations, and Eigenfactor and Article Influence, both of which are affected by quality-weighted exchanged citations, ICQ is less affected by exchanged citations. In the absence of any empirical evidence to the contrary, 5YIF and Article Influence are assumed to be affected similarly by exchanged citations.
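The abstract does not spell out the ICQ formula. The sketch below assumes, purely for illustration, that ICQ is a ratio of a quality-weighted citation measure (here taken to be Cited Article Influence) to the 5-year impact factor; the function name and inputs are placeholders, not the authors' definition.

```python
# Minimal sketch of a quality-weighted citation ratio (assumed ICQ form, for
# illustration only; not necessarily the paper's exact definition).
def icq(cited_article_influence, five_year_impact_factor):
    return cited_article_influence / five_year_impact_factor

# Hypothetical values: of two journals with the same 5YIF, the one whose
# citations come from higher-quality sources scores higher.
print(icq(2.4, 3.0))   # 0.80
print(icq(1.2, 3.0))   # 0.40
```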


2013 ◽  
Vol 51 (1) ◽  
pp. 173-189 ◽  
Author(s):  
David I Stern

Academic economists appear to be intensely interested in rankings of journals, institutions, and individuals, yet there is little discussion of the uncertainty associated with these rankings. To illustrate the uncertainty associated with citation-based rankings, I compute the standard error of the impact factor for all economics journals with a five-year impact factor in the 2011 Journal Citation Reports. I use these standard errors to derive confidence intervals for the impact factors, as well as ranges of possible rank for a subset of thirty journals. I find that the impact factors of the top two journals are well defined and set these journals apart in a clearly defined group. An elite group of 9–11 mainstream journals can also be fairly reliably distinguished, and the four bottom-ranked journals are fairly clearly set apart. For the remainder of the distribution, confidence intervals overlap and rankings are quite uncertain. (JEL A14)
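Since an impact factor is essentially a mean of per-article citation counts, a standard error and confidence interval can be attached to it in the usual way. The sketch below uses synthetic citation counts, not the 2011 Journal Citation Reports data.

```python
# Minimal sketch: impact factor as a mean of per-article citations, with a
# standard error and 95% confidence interval. Citation counts are synthetic.
import numpy as np

rng = np.random.default_rng(2)
citations = rng.negative_binomial(1, 0.25, 300)   # citations to one journal's articles

impact_factor = citations.mean()
se = citations.std(ddof=1) / np.sqrt(len(citations))
ci = (impact_factor - 1.96 * se, impact_factor + 1.96 * se)
print(impact_factor, se, ci)
```

Overlapping intervals of this kind across journals are what make the middle of the ranking so uncertain.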


2012 ◽  
Vol 92 (2) ◽  
pp. 395-401 ◽  
Author(s):  
David A. Pendlebury ◽  
Jonathan Adams

2019 ◽  
Vol 124 (12) ◽  
pp. 1718-1724 ◽  
Author(s):  
Tobias Opthof

In this article, I show that the distribution of citations to papers published by the top 30 journals in the Web of Science category Cardiac & Cardiovascular Systems is extremely right-skewed: there is a long tail of papers that are cited much more frequently than the other papers in the same journal. The consequence is a large difference between the mean and the median citation of the papers published by these journals. I further found no differences between the citation distributions of the top 4 journals, the European Heart Journal, Circulation, the Journal of the American College of Cardiology, and Circulation Research. Although the journal impact factor (IF) varied between 23.425 for Eur Heart J and 15.211 for Circ Res, with the other 2 journals in between, the median citation of their articles plus reviews (IF Median) was 10 for all 4 journals. Given that their citation distributions were similar, an indicator (IF Median) that reflects this similarity must be superior to the classical journal impact factor, which may suggest a nonexistent difference. The IF Median is substantially lower than the journal impact factor for all 30 journals considered in this article. Finally, the IF Median has the additional advantage that it does not impose an artificial ranking of the 128 journals in the category but instead attributes journals to a limited number of classes with comparable impact.
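The gap between the mean and the median under right-skew is easy to demonstrate. The sketch below draws synthetic per-article citation counts from a skewed distribution; it is not the Web of Science data, but it reproduces the qualitative point.

```python
# Minimal sketch: mean vs. median of a right-skewed citation distribution,
# using synthetic per-article citation counts.
import numpy as np

rng = np.random.default_rng(3)
citations = rng.lognormal(mean=2.0, sigma=1.0, size=1000).round()

print("mean   (IF-like):   ", citations.mean())     # pulled up by the long right tail
print("median (IF Median): ", np.median(citations))
```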


2020 ◽  
Vol 13 (5) ◽  
pp. 723-727
Author(s):  
Alberto Ortiz

The Clinical Kidney Journal (ckj) impact factor from Clarivate's Web of Science for 2019 was 3.388. This consolidates ckj among journals in the top 25% (first quartile, Q1) in the Urology and Nephrology field according to the journal impact factor. The manuscripts contributing the most to the impact factor focused on chronic kidney disease (CKD) epidemiology and evaluation, CKD complications and their management, cost-efficiency of renal replacement therapy, pathogenesis of CKD, familial kidney disease and the environment–genetics interface, onconephrology, technology, SGLT2 inhibitors and outcome prediction. We provide here an overview of the hottest and most impactful topics for 2017–19.


2013 ◽  
Vol 08 (01) ◽  
pp. 1350005 ◽  
Author(s):  
CHIA-LIN CHANG ◽  
MICHAEL MCALEER

Experts possess knowledge and information that are not publicly available. This paper is concerned with forecasting academic journal quality and research impact using a survey of international experts from a national project on ranking academic finance journals in Taiwan. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI), for the Business–Finance (hereafter Finance) category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences between the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al., 2011a,b,c). The RAMs include the most widely used measure, the classic 2-year impact factor including journal self-citations (2YIF), as well as the 2-year impact factor excluding journal self-citations (2YIF*), the 5-year impact factor including journal self-citations (5YIF), Immediacy (or zero-year impact factor, 0YIF), Eigenfactor, Article Influence, C3PO (Citation Performance per Paper Online), h-index, PI-BETA (Papers Ignored — By even the Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly cited journals in Finance. The harmonic mean of the ranks of the 10 RAMs for the 34 highly cited journals is also presented. It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A linear regression model is used to forecast expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high-quality papers, and quantitative information about a journal. The robustness of the rankings is also analysed.
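Aggregating several RAMs by the harmonic mean of a journal's ranks can be sketched as follows; the journal names and rank values are hypothetical and stand in for the paper's 10 RAMs.

```python
# Minimal sketch of a harmonic-mean-of-ranks aggregation across several RAMs.
# Journals and rank values are hypothetical.
from statistics import harmonic_mean

ranks_by_journal = {
    "Journal A": [1, 3, 2, 5],   # rank under each of four illustrative RAMs
    "Journal B": [2, 1, 4, 3],
    "Journal C": [3, 2, 1, 1],
}

hm = {journal: harmonic_mean(ranks) for journal, ranks in ranks_by_journal.items()}
for journal, score in sorted(hm.items(), key=lambda kv: kv[1]):
    print(journal, round(score, 2))
```

The harmonic mean rewards consistently good ranks and is less dominated by a single favourable metric than the arithmetic mean, which is why a 2YIF-only view can look distorted by comparison.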


2010 ◽  
Vol 106 (3) ◽  
pp. 891-900 ◽  
Author(s):  
Nick Haslam ◽  
Peter Koval

The citation impact of a comprehensive sample of articles published in social and personality psychology journals in 1998 was evaluated. Potential predictors of the 10-year citation impact of 1,580 articles from 37 journals were investigated using linear regression, including the number of authors, the number of references, the journal impact factor, author nationality, and article length. The impact factor of the journal in which an article appeared was the primary predictor of the citations it accrued, accounting for 30% of the total variance. Articles with greater length, more references, and more authors were cited relatively often, although the citation advantage of longer articles was not proportionate to their length. A citation advantage was also enjoyed by authors from the United States of America, Canada, and the United Kingdom. In total, the study variables accounted for 37% of the variance in the total number of citations.
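A regression of this kind can be outlined as below; the article-level data are synthetic (not the 1998 sample), the coefficients are arbitrary, and the code only illustrates fitting an OLS model and reading off the variance explained.

```python
# Minimal sketch of a linear regression predicting citation counts from
# article-level predictors, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1580
X = np.column_stack([
    rng.integers(1, 8, n),       # number of authors
    rng.integers(10, 120, n),    # number of references
    rng.gamma(2.0, 1.0, n),      # journal impact factor
    rng.integers(4, 40, n),      # article length (pages)
])
y = 2 * X[:, 2] + 0.05 * X[:, 1] + rng.normal(0, 3, n)   # 10-year citations (synthetic)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared)   # share of variance explained by the predictors
```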

