Predicting Long-Term Citation Impact of Articles in Social and Personality Psychology

2010 ◽  
Vol 106 (3) ◽  
pp. 891-900 ◽  
Author(s):  
Nick Haslam ◽  
Peter Koval

The citation impact of a comprehensive sample of articles published in social and personality psychology journals in 1998 was evaluated. Linear regression was used to investigate potential predictors of the 10-year citation impact of 1,580 articles from 37 journals, including number of authors, number of references, journal impact factor, author nationality, and article length. The impact factor of the journal in which an article appeared was the primary predictor of the citations it accrued, accounting for 30% of the total variance. Articles with greater length, more references, and more authors were cited relatively often, although the citation advantage of longer articles was not proportionate to their length. A citation advantage was also enjoyed by authors from the United States of America, Canada, and the United Kingdom. Together, the study variables accounted for 37% of the variance in the total number of citations.
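A minimal sketch of the kind of multiple regression described above, fit on simulated data (the original 1,580-article dataset is not public, so the variable names, distributions, and effect sizes below are assumptions for illustration only):

```python
# Sketch only: OLS regression of 10-year citation counts on article/journal
# predictors, on simulated data with an assumed data-generating process.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1580  # sample size reported in the abstract

df = pd.DataFrame({
    "impact_factor": rng.gamma(shape=2.0, scale=1.5, size=n),
    "n_authors": rng.integers(1, 8, size=n),
    "n_references": rng.integers(10, 120, size=n),
    "pages": rng.integers(4, 40, size=n),
    "us_ca_uk": rng.integers(0, 2, size=n),  # 1 if authors are from the US, Canada, or UK
})
# Hypothetical process: citations driven mainly by journal impact factor.
df["citations"] = (5 * df.impact_factor + 0.8 * df.n_authors
                   + 0.05 * df.n_references + 0.2 * df.pages
                   + 2 * df.us_ca_uk + rng.normal(0, 6, n)).clip(0)

model = smf.ols(
    "citations ~ impact_factor + n_authors + n_references + pages + us_ca_uk",
    data=df).fit()
print(model.summary())          # coefficient for each predictor
print("R^2:", model.rsquared)   # analogue of the 37% of variance explained
```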

1985 ◽  
Vol 11 (3) ◽  
pp. 169-182 ◽  
Author(s):  
Scott Newton

Most commentators on the 1949 sterling crisis have viewed it as an episode with implications merely for the management of the British economy. This paper, based on the public records now available, discusses the impact of the crisis on British economic foreign policy. In particular, it suggests that the crisis revealed deep Anglo-American differences, centring on the nature of the Marshall Plan, on the international value of the sterling area, and on the proper relationship between the United Kingdom and Western Europe. Ultimately the British succeeded in resolving these disagreements, but this triumph ironically implied both the defeat of British aims in post-war European reconstruction and a long-term delusion that great power status could be maintained on the basis of a special relationship with the United States.


2015 ◽  
Author(s):  
Isabelle Cook ◽  
Sam Grange ◽  
Adam Eyre-Walker

We have investigated the relationship between research group size and productivity in the life sciences in the United Kingdom using data from 398 principal investigators (PIs). We show that the number of publications increases linearly with group size, but that the slope is modest relative to the intercept, and that the relationship explains little of the variance in productivity. A comparison of the slope and intercept suggests that PIs contribute on average five times more productivity than an average group member, and using multiple regression we estimate that post-doctoral researchers are approximately three times more productive than PhD students. We also find that the impact factor and the number of citations are both non-linearly related to group size, such that each has a maximum. However, the relationships explain little of the variance and the curvatures are shallow, so the impact factor and the number of citations do not greatly depend upon group size. The intercept is large relative to the curvature, suggesting that the PI is largely responsible for the impact factor and the number of citations from their group. Surprisingly, we find this non-linear relationship for post-docs, but for PhD students we observe a slight but significant decrease in the impact factor. The results have important implications for the funding of research. Given a set number of PIs, there is no evidence of diminishing returns in terms of the number of papers published, and only a very weak cost to very large groups in terms of where those papers are published and the number of citations they receive. However, the results do suggest that it might be more productive to invest in new permanent members of faculty rather than additional post-docs and PhD students.
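A minimal sketch of the two kinds of fit described above, using simulated PI data (the 398-PI dataset is not reproduced here; all numbers are illustrative assumptions): a linear fit of publications on group size, whose intercept-to-slope ratio gauges how many group members the PI is "worth", and a quadratic fit of impact factor on group size, whose turning point marks the maximum.

```python
# Sketch only: linear and quadratic fits of the kind described, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_pis = 398
group_size = rng.integers(1, 15, size=n_pis).astype(float)

# Hypothetical process: a large intercept (the PI's own output) and a modest slope.
pubs = 5.0 + 1.0 * group_size + rng.normal(0, 3, n_pis)

# Linear fit: publications vs group size.
lin = sm.OLS(pubs, sm.add_constant(group_size)).fit()
intercept, slope = lin.params
print("PI 'worth' in group members:", intercept / slope)  # ~5 in the paper

# Quadratic fit: impact factor vs group size (a shallow curve with a maximum).
impact = 4.0 + 0.3 * group_size - 0.02 * group_size**2 + rng.normal(0, 1, n_pis)
X_quad = sm.add_constant(np.column_stack([group_size, group_size**2]))
quad = sm.OLS(impact, X_quad).fit()
b0, b1, b2 = quad.params
print("Group size at maximum impact factor:", -b1 / (2 * b2))
```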


2020 ◽  
Author(s):  
John Antonakis ◽  
Nicolas Bastardoz ◽  
Philippe Jacquart

The impact factor has been criticized on several fronts, including that the distribution of citations to journal articles is heavily skewed. We nuance these critiques and show that the number of citations an article receives is significantly predicted by journal impact factor. Thus, impact factor can be used as a reasonably good proxy of article quality.
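A minimal sketch, on simulated data, of the point being made: even when within-journal citation distributions are heavily skewed, article citation counts can still be significantly predicted by the journal impact factor. The data-generating process below is an assumption for illustration, not the authors' analysis.

```python
# Sketch only: skewed (lognormal) citation counts whose journal-level mean tracks
# the journal impact factor still yield a significant article-level regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_journals, per_journal = 100, 50

jif = rng.gamma(2.0, 2.0, size=n_journals)   # hypothetical journal impact factors
jif_per_article = np.repeat(jif, per_journal)

# Heavily skewed within-journal citation distributions.
citations = rng.lognormal(np.log(jif_per_article + 0.1), 1.0)

fit = sm.OLS(citations, sm.add_constant(jif_per_article)).fit()
print(fit.params)   # positive coefficient on the JIF term
print(fit.pvalues)  # small p-value for the JIF term
```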


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this use of the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
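A minimal sketch of a simulation in the spirit of the argument above (not the authors' code): articles have a latent "value", journals select on that value imperfectly, and citations are a noisy signal of value. Depending on how noisy citations are, either the citation count or the journal impact factor correlates better with value.

```python
# Sketch only: compare citations and journal impact factor as indicators of a
# latent article value, under low and high citation noise.
import numpy as np

rng = np.random.default_rng(2)
n_journals, per_journal = 50, 200

def run(citation_noise):
    value = rng.lognormal(mean=1.0, sigma=0.5, size=n_journals * per_journal)
    # Journals rank articles by value, with some selection error.
    selection_score = value + rng.normal(0, 0.5, value.size)
    journal = np.argsort(np.argsort(-selection_score)) // per_journal
    # Citations are a noisy signal of value.
    citations = np.maximum(value + rng.normal(0, citation_noise, value.size), 0)
    # Impact factor: mean citations of the articles in each journal.
    impact_factor = np.array([citations[journal == j].mean() for j in range(n_journals)])
    jif_of_article = impact_factor[journal]
    return (np.corrcoef(value, citations)[0, 1],
            np.corrcoef(value, jif_of_article)[0, 1])

for noise in (0.5, 5.0):
    r_cit, r_jif = run(noise)
    print(f"citation noise {noise}: corr(value, citations) = {r_cit:.2f}, "
          f"corr(value, journal IF) = {r_jif:.2f}")
```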


Author(s):  
Aamir R. Memon ◽  
Quyen G. To ◽  
Corneel Vandelanotte

Background: To date, no citation analysis has been conducted in the physical activity field, although such an analysis could help assess the impact of the field and identify knowledge gaps. Therefore, this study aimed to identify the 500 most cited physical activity publications and report their bibliometric characteristics. Methods: The Web of Science database (all database indexes) was searched, and bibliometric characteristics were imported and calculated. Results: A total of 520 publications were ranked as the top 500. The sum of the citations was 326,258, and the average citation density was 41.0 (45.1) citations per year. Original research articles constituted the major portion of included publications (53.7%; 170,774 citations). Papers reporting the relationship of physical activity with health were the most prevalent type of publication included (43.7%; 141,027 citations). Journal impact factor had a weak but significant positive correlation with citation density (r = .12; P = .006). The United States ranked first in terms of institutions and authors contributing to the most cited physical activity papers. Conclusions: Top physical activity publications are well cited compared with those in other health behavior fields. Original research reporting on the associations between physical activity and health has a higher citation impact than other types of original research within the physical activity field. The physical activity research field continues to expand rapidly, as newer publications attract more citations in a shorter time span than older publications.
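A minimal sketch, on simulated data, of the citation-density calculation and the correlation reported above (the Web of Science export is not reproduced; field names and distributions are assumptions):

```python
# Sketch only: citation density (citations per year since publication) and its
# Pearson correlation with journal impact factor, on simulated data.
from datetime import date

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 520  # number of publications ranked in the top 500
current_year = date.today().year

pub_year = rng.integers(1990, 2020, size=n)
citations = rng.negative_binomial(5, 0.01, size=n)   # heavily skewed citation counts
jif = rng.gamma(2.0, 2.0, size=n)

# Citation density: average citations per year since publication.
citation_density = citations / np.maximum(current_year - pub_year, 1)

r, p = pearsonr(jif, citation_density)
print(f"r = {r:.2f}, P = {p:.3f}")  # the paper reports a weak positive correlation (r = .12)
```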


2019 ◽  
Author(s):  
Amanda Costa Araujo Sr ◽  
Adriane Aver Vanin Sr ◽  
Dafne Port Nascimento Sr ◽  
Gabrielle Zoldan Gonzalez Sr ◽  
Leonardo Oliveira Pena Costa Sr

BACKGROUND The most common way to assess the impact of an article is the number of citations it receives. However, the number of citations does not precisely reflect whether the message of the paper is reaching a wider audience. Social media is now used to disseminate the content of scientific articles. To measure this type of impact, a tool named Altmetric was created; it aims to quantify the attention each article receives in online media. OBJECTIVE This overview of methodological reviews aims to describe the associations of journal-level and article-level variables with Altmetric scores. METHODS Searches of MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library, covering publications from inception until July 2018, were conducted. We extracted article- and journal-level data associated with Altmetric scores. RESULTS A total of 11 studies were considered eligible. These studies summarized a total of 565,352 articles. Citation counts, journal impact factor, access counts (the sum of HTML views and PDF downloads), open-access publication and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. CONCLUSIONS Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open access journals and the use of press releases, are also likely to influence online media attention. CLINICALTRIAL N/A


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 366
Author(s):  
Ludo Waltman ◽  
Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this use of the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.


2017 ◽  
Vol 7 (3) ◽  
pp. 62 ◽  
Author(s):  
Kent V. Rondeau

This essay explores and examines how rankings and league tables have played (and continue to play) a major and consequential role in how contemporary business schools manage their affairs. It introduces and advances the proposition that rankings promote the short-term manipulation of the public reputation (image) projected by business schools at the expense of long-term investments in quality improvement. When schools shift scarce resources to actions aimed at enhancing their public image in the short term, the quality of the professional education they provide is significantly compromised in the long term, to the detriment of the constituencies that they serve. While this paper focuses mainly on business schools in the United States and Canada, where this author has experienced these consequences first-hand, the effects are similar, if perhaps less dramatic, for professional business programs located in higher education institutions in the United Kingdom and Europe. While ranking systems are not going away anytime soon, some potential ways are identified for business schools to escape the deleterious and perverse effects of being captive players in the deadly rankings game.


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e1887 ◽  
Author(s):  
Daniel R. Shanahan

Background. The Journal Citation Reports journal impact factors (JIFs) are widely used to rank and evaluate journals, standing as a proxy for the relative importance of a journal within its field. However, numerous criticisms have been made of the use of a JIF to evaluate importance. This problem is exacerbated when the use of JIFs is extended to evaluate not only the journals, but the papers therein. The purpose of this study was therefore to investigate the relationship between the number of citations and journal IF for identical articles published simultaneously in multiple journals. Methods. Eligible articles were consensus research reporting statements listed on the EQUATOR Network website that were published simultaneously in three or more journals. The correlation between the citation count for each article and the median journal JIF over the published period, and between the citation count and number of article accesses, was calculated for each reporting statement. Results. Nine research reporting statements were included in this analysis, representing 85 articles published across 58 journals in biomedicine. The number of citations was strongly correlated to the JIF for six of the nine reporting guidelines, with moderate correlation shown for the remaining three guidelines (median r = 0.66, 95% CI [0.45–0.90]). There was also a strong positive correlation between the number of citations and the number of article accesses (median r = 0.71, 95% CI [0.5–0.8]), although the number of data points for this analysis was limited. When adjusted for the individual reporting guidelines, each logarithm unit of JIF predicted a median increase of 0.8 logarithm units of citation counts (95% CI [−0.4–5.2]), and each logarithm unit of article accesses predicted a median increase of 0.1 logarithm units of citation counts (95% CI [−0.9–1.4]). This model explained 26% of the variance in citations (median adjusted r² = 0.26, range 0.18–1.0). Conclusion. The impact factor of the journal in which a reporting statement was published was shown to influence the number of citations that statement will gather over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and the impact factor is, in fact, auto-correlated.
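A minimal sketch of the kind of log-log model described in the results, fit on simulated data rather than the EQUATOR-based dataset (guideline labels, distributions, and coefficients below are assumptions):

```python
# Sketch only: log citation counts regressed on log JIF and log article accesses,
# adjusted for the individual reporting guideline, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 85  # articles across the nine reporting statements

df = pd.DataFrame({
    "guideline": rng.choice([f"G{i}" for i in range(1, 10)], size=n),
    "jif": rng.gamma(3.0, 3.0, size=n),
    "accesses": rng.lognormal(8, 1, size=n),
})
# Hypothetical process: citations scale with JIF and accesses on the log scale.
df["citations"] = np.exp(0.8 * np.log(df.jif) + 0.1 * np.log(df.accesses)
                         + rng.normal(0, 1, n)) + 1

model = smf.ols(
    "np.log(citations) ~ np.log(jif) + np.log(accesses) + C(guideline)",
    data=df).fit()
print(model.params[["np.log(jif)", "np.log(accesses)"]])  # increase per log unit
print("adjusted R^2:", model.rsquared_adj)
```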

