3. Citation Statistics, Scientometrics

Author(s): Gábor Lövei

Author(s): I.Yu. Yurchenko

This article continues a series of annual reviews publishing material from the author's 20-year study of recent Cossack historiography and contemporary Cossack studies, a new direction in the comprehensive investigation of the Cossacks as a historical, cultural and social phenomenon.


2020, Vol 32 (4), pp. 869-896
Author(s): Pavitra Dhamija, Surajit Bag

Purpose: "Technological intelligence" is the capacity to appreciate and adapt to technological advancements, and "artificial intelligence" is the key to achieving persuasive operational transformations in the majority of contemporary organizational set-ups. Implicitly, artificial intelligence (the philosophy of machines that think, behave and perform the same as or similarly to humans) has knocked on the doors of business organizations as an imperative activity. Artificial intelligence, as a discipline initiated by the scientist John McCarthy and formally publicized at the Dartmouth Conference in 1956, now occupies centre stage in many organizations. Implementing artificial intelligence gives an organization a competitive edge along with a definite augmentation of its social and corporate status. Mere application of a concept will not furnish real output unless its performance is reviewed systematically. Technological change is dynamic and advances at a rapid rate, so it becomes crucial to understand where research on artificial intelligence currently stands. The present article reviews significant work by eminent researchers on artificial intelligence in the form of top contributing universities, authors, keywords, funding sources, journals and citation statistics.

Design/methodology/approach: As past researchers have rightly remarked, reviewing is learning from experience; the research team therefore reviewed the concept of artificial intelligence by applying a systematic literature review through bibliometric analysis. A total of 1,854 articles were extracted from the Scopus database for 2018–2019 (up to 31 May) using selected keywords (artificial intelligence, genetic algorithms, agent-based systems, expert systems, big data analytics and operations management) along with certain filters (subject: business, management and accounting; language: English; document type: article, article in press and review article; source: journals).

Findings: Results obtained from cluster analysis point to the predominant themes for present and future researchers in the area of artificial intelligence. The emerged clusters are Cluster 1: Artificial Intelligence and Optimization; Cluster 2: Industrial Engineering/Research and Automation; Cluster 3: Operational Performance and Machine Learning; Cluster 4: Sustainable Supply Chains and Sustainable Development; Cluster 5: Technology Adoption and Green Supply Chain Management; and Cluster 6: Internet of Things and Reverse Logistics.

Originality/value: The review of the selected studies is in itself a unique contribution and food for thought for operations managers and policy makers.
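The cluster analysis above groups related keywords by how often they appear together across articles. A minimal sketch of that idea, using hypothetical keyword lists and a simple connected-components grouping (not the authors' actual bibliometric method):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical keyword sets, one per article (illustrative, not the study's data).
articles = [
    {"artificial intelligence", "optimization"},
    {"artificial intelligence", "machine learning"},
    {"machine learning", "operational performance"},
    {"internet of things", "reverse logistics"},
    {"reverse logistics", "sustainability"},
]

# Count how often each keyword pair co-occurs in the same article.
cooccur = defaultdict(int)
for kws in articles:
    for a, b in combinations(sorted(kws), 2):
        cooccur[(a, b)] += 1

# Treat a cluster as a connected component of the co-occurrence graph,
# linking keyword pairs that co-occur at least once (union-find).
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for (a, b), n in cooccur.items():
    if n >= 1:
        union(a, b)

clusters = defaultdict(set)
for kw in {k for kws in articles for k in kws}:
    clusters[find(kw)].add(kw)

for members in clusters.values():
    print(sorted(members))
```

With the toy data this yields two clusters, one around artificial intelligence/machine learning and one around reverse logistics; real bibliometric tools additionally weight edges by co-occurrence frequency before clustering.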


Publications, 2020, Vol 8 (2), p. 17
Author(s): Bo-Christer Björk, Sari Kanto-Karvonen, J. Tuomas Harviainen

Predatory journals are Open Access journals of highly questionable scientific quality. Such journals pretend to use peer review for quality assurance, and spam academics with requests for submissions in order to collect author payments. In recent years predatory journals have received a lot of negative media attention. While much has been said about the harm that such journals cause to academic publishing in general, an overlooked aspect is how much the articles in such journals are actually read and, in particular, cited, that is, whether they have any significant impact on research in their fields. Other studies have already demonstrated that only some of the articles in predatory journals contain faulty and directly harmful results, while many of the articles present mediocre and poorly reported studies. We studied citation statistics over a five-year period in Google Scholar for 250 random articles published in such journals in 2014 and found an average of 2.6 citations per article; 56% of the articles had no citations at all. For comparison, a random sample of articles published in the approximately 25,000 peer-reviewed journals included in the Scopus index had an average of 18.1 citations in the same period, with only 9% receiving no citations. We conclude that articles published in predatory journals have little scientific impact.
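The comparison above rests on two simple statistics: the mean citation count and the share of uncited articles. A minimal sketch of both, using an illustrative sample rather than the study's data:

```python
def citation_stats(citations):
    """Return (mean citations per article, share of uncited articles)."""
    mean = sum(citations) / len(citations)
    uncited = sum(1 for c in citations if c == 0) / len(citations)
    return mean, uncited

# Illustrative citation counts for ten articles (not the paper's data).
sample = [0, 0, 3, 1, 0, 7, 2, 0, 0, 13]
mean, uncited_share = citation_stats(sample)
print(f"mean = {mean:.1f}, uncited = {uncited_share:.0%}")  # mean = 2.6, uncited = 50%
```

Because citation distributions are heavily skewed, the uncited share often tells a clearer story than the mean, which a single highly cited article can inflate.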


2009, Vol 111 (2), pp. 387-392
Author(s): Janet Lee, Kristin L. Kraus, William T. Couldwell

Object: Assessing academic productivity through simple quantification may overlook key information, and the use of statistical enumeration of academic output is growing. The h index, which incorporates both the total number of publications and the citations of those publications, has recently been proposed as an objective measure of academic productivity. The authors used several tools to calculate the h index for academic neurosurgeons to provide a basis for evaluating publishing by physicians.

Methods: The h index of randomly selected academic neurosurgeons from a sample of one-third of the academic programs in the US was calculated using data from Google Scholar and from the Scopus database. The mean h index for each academic rank was determined. The h indices were also correlated with various other factors (such as time spent practicing neurosurgery and authorship position) to identify how these factors influence the h index. The h indices were then compared with other citation statistics to evaluate the robustness of the metric. Finally, h indices were also calculated for a sampling of physicians in other medical specialties for comparison.

Results: As expected, the h index increased with academic rank, with a statistically significant difference between each rank. A weighting based on position of authorship did not affect h indices. The h index was positively correlated with time since American Board of Neurological Surgery certification, and it was also correlated with other citation metrics. A comparison among medical specialties supports the assertion that h index values may not be comparable between fields, even closely related specialties.

Conclusions: The h index appears to be a robust statistic for comparing the academic output of neurosurgeons. Within academic neurosurgery, clear differences in h indices between academic ranks exist. On average, an increase of 5 in the h index corresponds to the next highest academic rank, with the exception of chairperson. The h index can be used as a tool, along with other evaluations, to evaluate an individual's productivity in the academic advancement process within neurosurgery, but it should not be used for comparisons across medical specialties.
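The h index used throughout this study is the largest h such that the author has h papers with at least h citations each. A minimal sketch of the computation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # still at least `rank` papers with >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: a single highly cited paper barely moves h
```

The second example illustrates why the metric is considered robust: unlike total citation counts, one outlier paper cannot inflate it.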


2009, Vol 24 (1), pp. 27-28
Author(s): Robert Adler, John Ewing, Peter Taylor

2013, Vol 31 (16), pp. 1897-1898
Author(s): Arun S. Mujumdar

Nature, 2002, Vol 415 (6868), p. 101
