What are the variables associated with Altmetric scores? – An overview of methodological reviews (Preprint)

2019 ◽  
Author(s):  
Amanda Costa Araujo Sr ◽  
Adriane Aver Vanin Sr ◽  
Dafne Port Nascimento Sr ◽  
Gabrielle Zoldan Gonzalez Sr ◽  
Leonardo Oliveira Pena Costa Sr

BACKGROUND The most common way to assess the impact of an article is the number of citations it receives. However, citation counts do not precisely reflect whether the message of a paper is reaching a wider audience. Social media is now used to disseminate the contents of scientific articles, and a new tool named Altmetric was created to measure this type of impact. Altmetric aims to quantify the impact of each article through online media. OBJECTIVE This overview of methodological reviews aims to describe the associations of publishing-journal and published-article variables with Altmetric scores. METHODS Searches on MEDLINE, EMBASE, CINAHL, CENTRAL, and the Cochrane Library, from inception until July 2018, were conducted. We extracted data on the variables of the published trial and the publishing journal associated with Altmetric scores. RESULTS A total of 11 studies were considered eligible. These studies summarized a total of 565,352 articles. Citation counts, journal impact factor, access counts (i.e., the sum of HTML views and PDF downloads), publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. CONCLUSIONS Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open access journals, and the use of press releases, are also likely to influence online media attention. CLINICALTRIAL N/A

2020 ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

Abstract Background: Social media is now used to disseminate the contents of scientific articles. To measure this type of impact, a new tool named Altmetric was created. Altmetric aims to quantify the impact of each article through online media. This overview of methodological reviews aims to describe the associations of publishing-journal and published-article variables with Altmetric scores. Methods: Searches on MEDLINE, EMBASE, CINAHL, CENTRAL, and the Cochrane Library were conducted. We extracted data on the variables of the published trial and the publishing journal associated with Altmetric scores. Results: A total of 11 studies were considered eligible. These studies summarized a total of 565,352 articles. Citation counts, journal impact factor, access counts, publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. Conclusion: Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open access journals, and the use of press releases, are also likely to influence online media attention. Systematic review registration: Not applicable


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

Abstract Background Social media has been used to disseminate the contents of scientific articles. To measure the impact of this, a new tool called Altmetric was created. Altmetric aims to quantify the impact of each article through online media. This systematic review aims to describe the associations between the publishing journal and published article variables and Altmetric scores. Methods Searches on MEDLINE, EMBASE, CINAHL, CENTRAL, and Cochrane Library were conducted. We extracted data related to both the publishing article and the publishing journal associated with Altmetric scores. The methodological quality of included articles was analyzed by the Appraisal Tool for Cross-sectional Studies. Results A total of 19 articles were considered eligible. These articles summarized a total of 573,842 studies. Citation counts, journal impact factor, access counts, papers published as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitude of these associations ranged from weak to strong. Conclusion Citation counts and journal impact factor are the most common variables associated with Altmetric scores. Other variables such as access counts, papers published in open access journals, and the use of press releases are also likely to be associated with online media attention. Systematic review registration This review does not contain health-related outcomes. Therefore, it is not eligible for registration.


2018 ◽  
Vol XVI (2) ◽  
pp. 369-388 ◽  
Author(s):  
Aleksandar Racz ◽  
Suzana Marković

Technology-driven changes, with a consequent increase in the online availability and accessibility of journals and papers, are rapidly changing patterns of academic communication and publishing. The dissemination of important research findings through the academic and scientific community begins with publication in peer-reviewed journals. The aim of this article is to identify, critically evaluate, and integrate the findings of relevant, high-quality individual studies addressing trends in the enhancement of the visibility and accessibility of academic publishing in the digital era. The number of citations a paper receives is often used as a measure of its impact and, by extension, of its quality. Many aberrations of citation practice have been reported in attempts to increase the impact of one's papers through manipulation, including self-citation, inter-citation, and citation cartels. Authors' avenues for legitimately extending the visibility, awareness, and accessibility of their research outputs, raising citations and amplifying their measurable personal scientific impact, have been strongly enhanced by online communication tools: networking (LinkedIn, ResearchGate, Academia.edu, Google Scholar), sharing (Facebook, blogs, Twitter, Google Plus), media sharing (SlideShare), data sharing (Dryad Digital Repository, Mendeley, PubMed, PubChem), code sharing, impact tracking, and publishing in open access journals. Many studies and review articles in the last decade have examined whether open access articles receive more citations than equivalent subscription (toll access) articles, and most conclude that open access articles very likely enjoy a citation advantage over generally equivalent pay-for-access articles in many, if not most, disciplines.
But it remains questionable whether never-cited papers are indeed "worth(less) papers", and whether the journal impact factor and the number of citations should be considered the only suitable indicators for evaluating the quality of scientists. "Publish or perish", the phrase usually used to describe the pressure in academia to rapidly and continually publish academic work to sustain or further one's career, can now, in the 21st century, be reformulated as "Publish, be cited, and maybe you will not perish".


2020 ◽  
Author(s):  
John Antonakis ◽  
Nicolas Bastardoz ◽  
Philippe Jacquart

The impact factor has been criticized on several fronts, including that the distribution of citations to journal articles is heavily skewed. We nuance these critiques and show that the number of citations an article receives is significantly predicted by journal impact factor. Thus, impact factor can be used as a reasonably good proxy of article quality.


2020 ◽  
Vol 49 (5) ◽  
pp. 35-58
Author(s):  
Matthias Templ

This article is motivated by my work as editor-in-chief of the Austrian Journal of Statistics and contains detailed analyses of that journal's impact. The impact of a journal is typically expressed by journal metrics. One of the most important, the journal impact factor, is calculated from the Web of Science (WoS) database by Clarivate Analytics. Newly established journals, or journals not belonging to large publishers, often have difficulty being included in, e.g., the Science Citation Index (SCI), and thus do not receive a WoS journal impact factor, as is the case, for example, for the Austrian Journal of Statistics. In this study, a novel approach is pursued: modeling and predicting the WoS impact factor of journals using open or partly open databases, such as Google Scholar, ResearchGate, and Scopus. I hypothesize a linear functional dependency between citation counts in these databases and the journal impact factor. These functional relationships enable the development of a model that may allow estimating the impact factor for new, small, and independent journals not listed in the SCI. However, good results could only be achieved with robust linear regression and well-chosen models. In addition, this study demonstrates that the WoS impact factor of SCI-listed journals can be successfully estimated without using the Web of Science database, so the dependency of researchers and institutions on this popular database can be reduced. These results suggest that the statistical model developed here can be applied to predict the WoS impact factor from alternative open databases.
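The robust-regression idea in this abstract can be sketched as follows. The data below are synthetic, and the Huber-weighted iteratively reweighted least squares scheme is one common robust estimator, not necessarily the study's exact model:

```python
import numpy as np

def huber_irls(x, y, k=1.345, n_iter=50):
    """Robust simple linear regression (y ~ a + b*x) via iteratively
    reweighted least squares with Huber weights."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        if s == 0:
            break
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / u)  # downweight large residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Illustrative data: citation counts from an open database vs. known
# impact factors, with one heavily distorted observation.
cites = np.array([120.0, 300.0, 80.0, 560.0, 210.0, 45.0, 900.0, 150.0])
noise = np.array([0.02, -0.05, 0.01, 0.04, 3.0, 0.0, -0.03, -0.02])
jif = 0.3 + 0.004 * cites + noise
a, b = huber_irls(cites, jif)
print(a, b)  # recovers roughly intercept 0.3 and slope 0.004 despite the outlier
```

An ordinary least squares fit on the same data would be pulled toward the distorted point; the Huber weights bound its influence, which is why the abstract stresses that only robust regression gave good results.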


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
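The kind of simulation argument summarized above can be sketched as follows. The generative assumptions here (latent article values, lognormal citation noise, journals stratified by noisily measured value) are illustrative choices of mine, not the authors' exact model; they show only how the comparison can flip with the noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(noise_sd, n_journals=100, per_journal=200):
    """Articles carry a latent value; journals stratify articles by value
    measured with selection error; citations are a noisy lognormal signal."""
    value = rng.normal(0.0, 1.0, n_journals * per_journal)
    # Assign articles to journals by sorting on value plus selection noise.
    order = np.argsort(value + rng.normal(0.0, 0.5, value.size))
    journal = np.empty(value.size, int)
    journal[order] = np.arange(value.size) // per_journal
    citations = np.exp(value + rng.normal(0.0, noise_sd, value.size))
    jif = np.bincount(journal, citations) / per_journal  # mean citations per journal
    return value, citations, jif[journal]

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

for noise_sd in (0.2, 2.0):
    value, cites, jif = simulate(noise_sd)
    # Which indicator tracks latent value better depends on the citation noise:
    # with little noise, an article's own citations win; with heavy noise,
    # the journal-level average (the "impact factor") wins.
    print(noise_sd, corr(value, np.log(cites)), corr(value, np.log(jif)))
```

Averaging over a journal's articles cancels article-level citation noise but blurs article-level differences in value, which is exactly the trade-off the abstract describes.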


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.


SICOT-J ◽  
2021 ◽  
Vol 7 ◽  
pp. 64
Author(s):  
Robert Cooke ◽  
Neil Jain

Background: The internet has changed the way we access and publish Orthopaedic literature. Traditional subscription journals have been challenged by the open access method of publication, which permits the author to make their article available to all readers for free, often at a cost to the author. This model has also been adopted in part by traditional subscription journals, forming hybrid journals. One criticism of open access publication is that it provides the author with a "pay to publish" opportunity. We aimed to determine whether a journal's access model affects its influence. Methods: We selected the top 40 Trauma and Orthopaedic journals as ranked by SCImago. Each journal was assessed for quality, defined by the journal impact factor and SCImago rank; influence, defined by the number of citations of the journal's top 10 articles; and the cost of open access publication. Results: Of the top 40 journals, 10 were subscription, 10 were open access, and 20 were hybrid journals. Subscription journals had the highest mean impact factor and SCImago rank, with a significant difference in impact factor (p = 0.001) and SCImago rank (p = 0.021) observed between subscription and open access journals. No significant difference was seen between citation numbers of articles published in subscription and open access journals (p = 0.168). There was a positive correlation between the cost of publishing in an open access journal and the impact factor (r = 0.404) but a negative correlation between cost and the number of citations (r = 0.319). Conclusion: Open access journals have significantly lower quality measures in comparison to subscription journals. Despite this, we found no difference in the number of citations, suggesting no difference in the influence of these journals in spite of the observed difference in quality.


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e1887 ◽  
Author(s):  
Daniel R. Shanahan

Background. The Journal Citation Reports journal impact factors (JIFs) are widely used to rank and evaluate journals, standing as a proxy for the relative importance of a journal within its field. However, numerous criticisms have been made of the use of the JIF to evaluate importance. This problem is exacerbated when the use of JIFs is extended to evaluate not only the journals but also the papers therein. The purpose of this study was therefore to investigate the relationship between the number of citations and the journal impact factor for identical articles published simultaneously in multiple journals. Methods. Eligible articles were consensus research reporting statements listed on the EQUATOR Network website that were published simultaneously in three or more journals. The correlation between the citation count for each article and the median journal JIF over the published period, and between the citation count and the number of article accesses, was calculated for each reporting statement. Results. Nine research reporting statements were included in this analysis, representing 85 articles published across 58 journals in biomedicine. The number of citations was strongly correlated with the JIF for six of the nine reporting guidelines, with moderate correlation for the remaining three (median r = 0.66, 95% CI [0.45–0.90]). There was also a strong positive correlation between the number of citations and the number of article accesses (median r = 0.71, 95% CI [0.5–0.8]), although the number of data points for this analysis was limited. When adjusted for the individual reporting guidelines, each logarithm unit of JIF predicted a median increase of 0.8 logarithm units of citation counts (95% CI [−0.4–5.2]), and each logarithm unit of article accesses predicted a median increase of 0.1 logarithm units of citation counts (95% CI [−0.9–1.4]).
This model explained 26% of the variance in citations (median adjusted r² = 0.26, range 0.18–1.0). Conclusion. The impact factor of the journal in which a reporting statement was published was shown to influence the number of citations that the statement gathers over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and that the impact factor is, in fact, auto-correlated.
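The adjusted log-log model described above can be sketched on synthetic data. The slopes 0.8 (JIF) and 0.1 (accesses) are taken from the abstract and used as the data-generating truth; everything else (sample sizes, spreads, noise level) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data in the spirit of the analysis: 85 articles, citations
# modelled on a log scale against journal impact factor and article accesses.
n = 85
log_jif = rng.normal(1.0, 0.6, n)          # log of journal impact factor
log_acc = rng.normal(7.0, 1.0, n)          # log of article access counts
log_cit = 0.8 * log_jif + 0.1 * log_acc + rng.normal(0.0, 0.5, n)

# Ordinary least squares: log citations ~ intercept + log JIF + log accesses.
X = np.column_stack([np.ones(n), log_jif, log_acc])
coef, *_ = np.linalg.lstsq(X, log_cit, rcond=None)
intercept, b_jif, b_acc = coef
print(b_jif, b_acc)  # slopes near the assumed 0.8 and 0.1
```

On the log-log scale, each coefficient reads exactly as the abstract states it: a one-logarithm-unit increase in JIF predicts b_jif logarithm units more citations, holding accesses fixed.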


2021 ◽  
pp. 1-22
Author(s):  
Metin Orbay ◽  
Orhan Karamustafaoğlu ◽  
Ruben Miranda

This study analyzes the journal impact factor and related bibliometric indicators in the Education and Educational Research (E&ER) category, highlighting the main differences among journal quartiles, using the Web of Science (Social Sciences Citation Index, SSCI) as the data source. High impact journals (Q1) publish only slightly more papers than expected, in contrast to other areas. Papers published in Q1 journals have higher average citations and lower uncitedness rates than other quartiles, although the differences among quartiles are smaller than in other areas. The impact factor is only weakly negatively correlated (r = −0.184) with journal self-citation but strongly correlated with the citedness of the median journal paper (r = 0.864). Despite this strong correlation, the impact factor is still far from being a perfect indicator of the expected citations of a paper, due to the high skewness of the citation distribution. This skewness was moderately correlated with the citations received by the most cited paper of the journal (r = 0.649) and with the number of papers published by the journal (r = 0.484), but no important differences by journal quartile were observed. In the period 2013–2018, the average journal impact factor in E&ER increased substantially, from 0.908 to 1.638, which is explained by the field's growth but also by increases in international collaboration and in the share of papers published in open access. Despite their inherent limitations, the use of impact factors and related indicators is a starting point for introducing bibliometric tools for the objective and consistent assessment of researchers.

