citation counts
Recently Published Documents

TOTAL DOCUMENTS: 227 (FIVE YEARS: 68)
H-INDEX: 31 (FIVE YEARS: 5)

Author(s):  
Pachisa Kulkanjanapiban ◽  
Tipawan Silwattananusarn

This paper presents a comparison of two primary bibliographic data sources, Scopus and Dimensions, at the document level. The emphasis is on differences in their document coverage at the institutional level of aggregation. The main objective is to assess whether Dimensions offers the same good new possibilities for bibliometric analysis at the institutional level as it does at the global level. We report the results of a comparative study of the citation count profiles of articles published by faculty members of Prince of Songkla University (PSU) in Dimensions and Scopus, from the years the databases first included PSU-authored papers (1970 and 1978, respectively) through the end of June 2020. Descriptive statistics and correlation analysis were applied to 19,846 articles indexed in Dimensions and 13,577 indexed in Scopus. The main finding was that citation counts in Dimensions were highly correlated with citation counts in Scopus: Spearman’s correlation between the two databases’ citation counts was strong. These findings bear mainly on Dimensions’ suitability as an instrument for bibliometric analysis of university members’ research productivity. University researchers can use Dimensions to retrieve information, and policymakers can use it to design research evaluation based on scientific databases.
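The comparison above rests on rank correlation between per-article citation counts in the two databases. As a minimal sketch (using hypothetical toy counts, not the PSU data), Spearman’s rho between two citation-count profiles can be computed with SciPy:

```python
from scipy.stats import spearmanr

def citation_rank_correlation(counts_a, counts_b):
    """Spearman rank correlation between two per-article citation-count
    profiles for the same set of articles in two databases."""
    rho, p_value = spearmanr(counts_a, counts_b)
    return rho, p_value

# Hypothetical toy data: five articles indexed in both databases.
dimensions_counts = [120, 45, 3, 0, 67]
scopus_counts = [98, 40, 5, 1, 70]
rho, p = citation_rank_correlation(dimensions_counts, scopus_counts)
```

Spearman correlation is a natural choice here because citation counts are heavily skewed; rank correlation is insensitive to that skew.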


Author(s):  
Mita Williams

In the beginning (of bibliometrics), citation counts of academic research were generated for use in annual calculations expressing a research journal’s impact. Now those same citation counts make up a social graph of scholarly communication that is used to measure the research strengths of authors, the hotness of their papers, and the topic prominence of their disciplines, and to assess the strength of the institutions where they are employed. More troubling, the publishers of this emerging social graph are in the process of enclosing scholarship by trying to exclude the infrastructure of libraries and other independent, non-profit organizations invested in research. This paper outlines efforts currently being employed by scholarly communication librarians using platforms built by organizations such as Our Research’s UnPaywall and Wikimedia’s Wikidata Project so that the commons of scholarship can remain open. Strategies are shared so that researchers can adapt their workflows to allow their work to be copied, shared, and found by readers widely across the commons. Scholars will be asked to make good choices.


2021 ◽  
Vol 15 (4) ◽  
pp. 101203
Author(s):  
Ke Dong ◽  
Jiang Wu ◽  
Kaili Wang
Keyword(s):  

2021 ◽  
Vol 11 (19) ◽  
pp. 9288
Author(s):  
Eunhye Park ◽  
Woohyuk Kim

In line with the qualitative and quantitative growth of academic papers, it is critical to understand the factors driving citations of scholarly articles. This study mapped the current academic structure of the tourism and hospitality literature and tested a comprehensive set of factors driving citation counts, using articles published in first-tier hospitality and tourism journals indexed in the Web of Science. To test the effects of research topic structure on citation counts, unsupervised topic modeling was conducted on 9,910 tourism and hospitality papers published in 12 journals over 10 years. The results showed that articles specific to online media and the sharing economy received numerous citations, and that recently published papers on particular research topics (e.g., rural tourism and eco-tourism) were frequently cited. This study makes a major contribution to the hospitality and tourism literature by testing the effects of topic structure and topic originality, discovered through text mining, on citation counts.


2021 ◽  
pp. 1-53
Author(s):  
Tzu-Kun Hsiao ◽  
Jodi Schneider

Abstract We present the first database-wide study on the citation contexts of retracted papers, covering 7,813 retracted papers indexed in PubMed, 169,434 citations collected from iCite, and 48,134 citation contexts identified from the XML version of the PubMed Central Open Access Subset. Compared with previous citation studies that compared citation counts across two time frames (i.e., pre-retraction and post-retraction), our analyses show the longitudinal trends of citations to retracted papers over the past 60 years (1960–2020). Our temporal analyses show that retracted papers continued to be cited, but that old retracted papers stopped being cited as time progressed. Analysis of the text progression of pre- and post-retraction citation contexts shows that retraction did not change the way the retracted papers were cited. Furthermore, among the 13,252 post-retraction citation contexts, only 722 (5.4%) acknowledged the retraction. In these 722 citation contexts, the retracted papers were most commonly cited as related work or as an example of problematic science. Our findings deepen the understanding of why retraction does not stop citation and demonstrate that the vast majority of post-retraction citations in biomedicine do not document the retraction.
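The pre/post-retraction split and the acknowledgment rate reported above reduce to simple bookkeeping over citation records. A minimal sketch, assuming a hypothetical record format (year of the citing paper plus a flag for whether its citation context acknowledges the retraction):

```python
def post_retraction_summary(citations, retraction_year):
    """citations: iterable of (citing_year, acknowledges_retraction) pairs.
    Returns the number of post-retraction citations and the share of them
    whose citation context acknowledges the retraction."""
    post = [ack for year, ack in citations if year > retraction_year]
    n_post = len(post)
    share_acknowledging = sum(post) / n_post if n_post else 0.0
    return n_post, share_acknowledging

# Hypothetical toy records for one retracted paper (retracted in 2010).
records = [(2008, False), (2012, False), (2015, True), (2018, False)]
n_post, share = post_retraction_summary(records, retraction_year=2010)
```

At scale, the hard part is the data the study assembled (linking iCite citations to citation contexts in PubMed Central XML); the counting itself is this simple.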


OEconomia ◽  
2021 ◽  
pp. 461-471
Author(s):  
James Forder

2021 ◽  
Author(s):  
Timothy Francis Bainbridge ◽  
Steven Ludeke ◽  
Luke D. Smillie

The Big Five is often represented as an effective taxonomy of psychological traits, yet little research has empirically examined whether stand-alone assessments of psychological traits can be located within the Big Five framework. Meanwhile, construct proliferation has created difficulty navigating the resulting landscape. In the present research, we developed criteria for assessing whether the Big Five provides a comprehensive organizing framework for psychological trait scales, and evaluated this question across three samples (total N = 1,039). Study 1 revealed that 83% of an author-identified collection of scales (e.g., Self-Esteem, Grit) were as related to the Big Five as at least 4 of 30 Big Five facets, and Study 2 found that 71% of scales selected based on citation counts passed the same criterion. Several scales had strikingly large links at the Big Five facet level, registering correlations with individual Big Five facets exceeding 0.9. We conclude that the Big Five can indeed serve as an organizing framework for a sizable majority of stand-alone psychological trait scales, and that many of these scales could reasonably be labeled as facets of the Big Five. We recommend an integrative pluralism approach, in which reliable, valid scales are located within the Big Five, and pertinent Big Five research is considered in all research using trait scales readily located within the Big Five. By adopting such an approach, construct proliferation may be abated and it would become easier to integrate findings from disparate fields.
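The facet criterion used in Study 1 can be phrased as a simple comparison: a scale "passes" if it is at least as related to the Big Five as some minimum number of facets. A toy sketch of that decision rule (the correlation values and threshold handling are hypothetical, not the authors’ exact procedure):

```python
def passes_facet_criterion(scale_r, facet_rs, min_facets=4):
    """Return True if the scale's relatedness to the Big Five (by
    correlation magnitude) is at least that of min_facets of the facets."""
    weaker_or_equal = sum(1 for r in facet_rs if abs(r) <= abs(scale_r))
    return weaker_or_equal >= min_facets

# Hypothetical facet correlations (a real analysis would use 30 facets).
facets = [0.30, 0.42, 0.51, 0.55, 0.72]
result = passes_facet_criterion(0.60, facets)
```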


2021 ◽  
pp. 1-29
Author(s):  
Marzieh Shahmandi ◽  
Paul Wilson ◽  
Mike Thelwall

Abstract Quantile regression presents a complete picture of the effects on the location, scale, and shape of the dependent variable at all points, not just the mean. We focus on two challenges for citation count analysis by quantile regression: discontinuity and substantial mass points at lower counts. A Bayesian hurdle quantile regression model for count data with a substantial mass point at zero was proposed by King and Song (2019). It uses quantile regression for modeling the nonzero data and logistic regression for modeling the probability of zeros versus nonzeros. We show that substantial mass points for low citation counts will nearly certainly also affect parameter estimation in the quantile regression part of the model, similar to a mass point at zero. We update the King and Song model by shifting the hurdle point past the main mass points. This model delivers more accurate quantile regression for moderately to highly cited articles, especially at quantiles corresponding to values just beyond the mass points, and enables estimates of the extent to which factors influence the chances that an article will be low cited. To illustrate the potential of this method, it is applied to simulated citation counts and data from Scopus.


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Amanda Costa Araujo ◽  
Adriane Aver Vanin ◽  
Dafne Port Nascimento ◽  
Gabrielle Zoldan Gonzalez ◽  
Leonardo Oliveira Pena Costa

Abstract Background Social media has been used to disseminate the contents of scientific articles. To measure the impact of this, a new tool called Altmetric was created. Altmetric aims to quantify the impact of each article through online media. This systematic review describes the associations between publishing-journal and published-article variables and Altmetric scores. Methods Searches on MEDLINE, EMBASE, CINAHL, CENTRAL, and the Cochrane Library were conducted. We extracted data related to both the published article and the publishing journal associated with Altmetric scores. The methodological quality of included articles was analyzed with the Appraisal Tool for Cross-sectional Studies. Results A total of 19 articles were considered eligible. These articles summarized a total of 573,842 studies. Citation counts, journal impact factor, access counts, papers published as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitude of these associations ranged from weak to strong. Conclusion Citation counts and journal impact factor are the most common variables associated with Altmetric scores. Other variables such as access counts, papers published in open access journals, and the use of press releases are also likely to be associated with online media attention. Systematic review registration This review does not contain health-related outcomes and is therefore not eligible for registration.

