La "religione della valutazione", tra oligopoli editoriali e "pubblicitŕ del sapere"

2012 ◽  
pp. 94-100
Author(s):  
Isabella Gagliardi

The paper analyzes the relationship between economic interests and scientific research, using as an interpretative key the genesis and aims of the evaluation systems used in scholarly fields. It pays particular attention to bibliometric indices - such as the h-index and the journal impact factor - now used to judge the scientific quality of a paper, but which were in fact developed by publishing companies to direct purchases by university libraries. It then shows how the commercial nature of these indices distorts research, depriving it of its freedom and reducing its potential for innovation. Finally, the paper points to the judicious use of the semantic web (open-access institutional repositories) as a possible way out of this "impasse".
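For reference, standard definitions of the two indices named above (the two-year journal impact factor and the h-index), given here as general background rather than taken from the paper itself:

```latex
% Two-year journal impact factor of a journal in year Y
\mathrm{JIF}_Y =
  \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in } Y-1 \text{ and } Y-2}

% h-index of an author whose citation counts are sorted in descending order
% c_1 \ge c_2 \ge \dots \ge c_n
h = \max \{\, i : c_i \ge i \,\}
```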

2019 ◽  
Author(s):  
Erin C. McKiernan ◽  
Lesley A. Schimanski ◽  
Carol Muñoz Nieves ◽  
Lisa Matthias ◽  
Meredith T. Niles ◽  
...  

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master's, or comprehensive (M-type), institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type), institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric's use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF's use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.


2018 ◽  
Vol XVI (2) ◽  
pp. 369-388 ◽  
Author(s):  
Aleksandar Racz ◽  
Suzana Marković

Technology-driven changes, with a consecutive increase in the online availability and accessibility of journals and papers, are rapidly changing the patterns of academic communication and publishing. The dissemination of important research findings through the academic and scientific community begins with publication in peer-reviewed journals. The aim of this article is to identify, critically evaluate and integrate the findings of relevant, high-quality individual studies addressing the trends in enhancing the visibility and accessibility of academic publishing in the digital era. The number of citations a paper receives is often used as a measure of its impact and, by extension, of its quality. Many aberrations of citation practice have been reported in attempts to increase the impact of a paper through manipulation of self-citation, inter-citation and citation cartels. Authors' avenues to legitimately extend the visibility, awareness and accessibility of their research outputs, increase citations and amplify their measurable personal impact as scientists have been strongly enhanced by online communication tools such as networking (LinkedIn, ResearchGate, Academia.edu, Google Scholar), sharing (Facebook, blogs, Twitter, Google Plus), media sharing (SlideShare), data sharing (Dryad Digital Repository, Mendeley, PubMed, PubChem), code sharing, impact tracking, and publishing in open access journals. Many studies and review articles in the last decade have examined whether open access articles receive more citations than equivalent subscription (toll access) articles, and most of them conclude that open access articles are highly likely to have a citation advantage over generally equivalent pay-for-access articles in many, if not most, disciplines. But it is still questionable whether those never-cited papers are indeed "worth(less) papers", and whether the journal impact factor and the number of citations should be considered the only suitable indicators for evaluating the quality of scientists. The phrase "publish or perish", usually used to describe the pressure in academia to rapidly and continually publish academic work in order to sustain or further one's career, can now, in the 21st century, be reformulated as "publish, be cited, and maybe you will not perish".


2018 ◽  
Author(s):  
LM Hall ◽  
AE Hendricks

Background: Recently, there has been increasing concern about the replicability, or lack thereof, of published research. An especially high rate of false discoveries has been reported in some areas, motivating the creation of resource-intensive collaborations to estimate the replication rate of published research by repeating a large number of studies. The substantial amount of resources required by these replication projects limits the number of studies that can be repeated and consequently the generalizability of the findings. Methods and findings: In 2013, Jager and Leek developed a method to estimate the empirical false discovery rate from journal abstracts and applied their method to five high-profile journals. Here, we use the relative efficiency of Jager and Leek's method to gather p-values from over 30,000 abstracts and to subsequently estimate the false discovery rate for 94 journals over a five-year time span. We model the empirical false discovery rate by journal subject area (cancer or general medicine), impact factor, and Open Access status. We find that the empirical false discovery rate is higher for cancer vs. general medicine journals (p = 5.14E-6). Within cancer journals, we find that this relationship is further modified by journal impact factor, where a lower journal impact factor is associated with a higher empirical false discovery rate (p = 0.012, 95% CI: -0.010, -0.001). We find no significant differences, on average, in the false discovery rate for Open Access vs. closed access journals (p = 0.256, 95% CI: -0.014, 0.051). Conclusions: We find evidence of a higher false discovery rate in cancer journals compared to general medicine journals, especially those with a lower journal impact factor. For cancer journals, a decrease in journal impact factor of one point is associated with a 0.006 increase in the empirical false discovery rate, on average. For a false discovery rate of 0.05, this would result in over a 10% increase, to 0.056. Conversely, we find no significant evidence of a higher false discovery rate, on average, for Open Access vs. closed access journals from InCites. Our results identify areas of research that may need additional scrutiny and support to facilitate replicable science. Given our publicly available R code and data, others can complete a broad assessment of the empirical false discovery rate across other subject areas and characteristics of published research.
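As a rough illustration of the kind of estimate described above, the sketch below fits a simplified version of the Jager and Leek mixture idea (a Uniform null plus a truncated Beta alternative over reported p-values below 0.05). It is not the authors' published R code; the function, starting values, and simulated data are all hypothetical.

```python
# Simplified sketch of a Jager & Leek-style empirical FDR estimate:
# reported significant p-values (p < alpha) are modeled as a mixture of
# a Uniform(0, alpha) null component and a truncated Beta(a, b) alternative.
import numpy as np
from scipy import stats, optimize

def estimate_empirical_fdr(pvals, alpha=0.05):
    """Return the estimated fraction pi0 of false discoveries among p < alpha."""
    p = np.asarray([x for x in pvals if 0 < x < alpha])

    def neg_log_lik(theta):
        pi0, a, b = theta
        null_dens = 1.0 / alpha                                   # Uniform(0, alpha)
        alt_dens = stats.beta.pdf(p, a, b) / stats.beta.cdf(alpha, a, b)
        mix = pi0 * null_dens + (1.0 - pi0) * alt_dens
        return -np.sum(np.log(mix))

    res = optimize.minimize(neg_log_lik, x0=[0.5, 0.5, 10.0],
                            bounds=[(1e-4, 1 - 1e-4), (1e-3, 1.0), (1.0, 200.0)],
                            method="L-BFGS-B")
    return res.x[0]   # pi0 = estimated empirical false discovery rate

# Hypothetical usage with simulated p-values:
rng = np.random.default_rng(1)
sim = np.concatenate([rng.uniform(0, 0.05, 200),     # "false discoveries"
                      rng.beta(0.3, 30, 800)])       # "true discoveries"
print(estimate_empirical_fdr(sim))
```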


2020 ◽  
Vol 49 (5) ◽  
pp. 35-58
Author(s):  
Matthias Templ

This article is motivated by the author's work as editor-in-chief of the Austrian Journal of Statistics and contains detailed analyses of the impact of the Austrian Journal of Statistics. The impact of a journal is typically expressed by journal metrics indicators. One of the important ones, the journal impact factor, is calculated from the Web of Science (WoS) database by Clarivate Analytics. It is known that newly established journals, or journals not belonging to big publishers, often face difficulties being included, e.g., in the Science Citation Index (SCI), and thus do not receive a WoS journal impact factor, as is the case, for example, for the Austrian Journal of Statistics. In this study, a novel approach is pursued for modeling and predicting the WoS impact factor of journals using open-access or partly open-access databases such as Google Scholar, ResearchGate, and Scopus. I hypothesize a functional linear dependency between citation counts in these databases and the journal impact factor. These functional relationships enable the development of a model that may allow estimating the impact factor for new, small, and independent journals not listed in SCI. However, good results could only be achieved with robust linear regression and well-chosen models. In addition, this study demonstrates that the WoS impact factor of SCI-listed journals can be successfully estimated without using the Web of Science database, and therefore the dependency of researchers and institutions on this popular database can be minimized. These results suggest that the statistical model developed here can be applied successfully to predict the WoS impact factor using alternative open-access databases.
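A minimal sketch of the modeling idea, assuming a per-journal table of citation counts from Google Scholar and Scopus alongside the known WoS impact factor for SCI-listed journals. The file and column names are hypothetical, and the snippet illustrates robust linear regression in general (a Huber M-estimator), not the paper's actual model specification.

```python
# Sketch: predict the WoS journal impact factor from open(-ish) citation databases
# using robust linear regression, then score journals with no WoS impact factor.
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per journal with citation counts and, where
# available, the official WoS impact factor.
df = pd.read_csv("journal_citations.csv")
train = df.dropna(subset=["wos_jif"])

X = sm.add_constant(train[["gs_citations_2y", "scopus_citations_2y"]])
y = train["wos_jif"]

# Huber's T downweights outlying journals instead of letting them drive the fit.
model = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(model.summary())

# Predict an impact-factor-like score for journals not listed in SCI.
new = df[df["wos_jif"].isna()].copy()
X_new = sm.add_constant(new[["gs_citations_2y", "scopus_citations_2y"]],
                        has_constant="add")
new["predicted_jif"] = model.predict(X_new)
```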


BMJ Open ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. e048581
Author(s):  
Fernanda S Tonin ◽  
Ariane G Araujo ◽  
Mariana M Fachi ◽  
Vinicius L Ferreira ◽  
Roberto Pontarolo ◽  
...  

Objective: We assessed the extent of lag times in the publication and indexing of network meta-analyses (NMAs). Study design: This was a survey of published NMAs on drug interventions. Setting: NMAs indexed in PubMed (searches updated in May 2020). Primary and secondary outcome measures: Lag times were measured as the time between the last systematic search and the article submission, acceptance, online publication, indexing and Medical Subject Headings (MeSH) allocation dates. Time-to-event analyses were performed considering independent variables (geographical origin, Journal Impact Factor, Scopus CiteScore, open access status) (SPSS V.24, R/RStudio). Results: We included 1245 NMAs. The median time from last search to article submission was 6.8 months (204 days (IQR 95–381)), and to publication was 11.6 months. Only 5% of authors updated their search after first submission. There is a very slight decreasing historical trend in acceptance (rho=−0.087; p=0.010), online publication (rho=−0.080; p=0.008) and indexing (rho=−0.080; p=0.007) lag times. Journal Impact Factor influenced the MeSH allocation process, but not the other lag times. The comparison between open access and subscription journals showed negligible differences in acceptance, online publication and indexing lag times. Conclusion: Efforts by authors to update their search before submission are needed to reduce evidence production time. Peer reviewers and editors should ensure authors' compliance with NMA standards. The accuracy of these findings depends on the accuracy of the metadata used; as we evaluated only NMAs on drug interventions, results may not be generalisable to all types of studies.
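A minimal sketch of how such lag times and trend correlations could be computed, assuming a table with one row per NMA and its key dates. The file and column names are hypothetical; the snippet only illustrates the type of analysis reported (median lag times and Spearman rank correlations), not the authors' SPSS/R code.

```python
# Sketch: compute publication lag times and test for a historical trend
# with Spearman's rank correlation.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical input: one row per network meta-analysis with its key dates.
df = pd.read_csv("nma_dates.csv",
                 parse_dates=["last_search", "submitted", "accepted", "online"])

df["search_to_submission_days"] = (df["submitted"] - df["last_search"]).dt.days
df["acceptance_lag_days"] = (df["accepted"] - df["submitted"]).dt.days

print("Median search-to-submission lag:",
      df["search_to_submission_days"].median(), "days")

# Historical trend: does the acceptance lag shrink for more recent publications?
rho, p = spearmanr(df["online"].dt.year, df["acceptance_lag_days"])
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```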


2014 ◽  
Vol 57 (1) ◽  
Author(s):  
Fabio Florindo ◽  
Francesca Bianco ◽  
Paola De Michelis ◽  
Simona Masina ◽  
Giovanni Muscari ◽  
...  

Annals of Geophysics is a bimonthly international journal, which publishes scientific papers in the field of geophysics sensu lato. It derives from Annali di Geofisica, which commenced publication in January 1948 as a quarterly periodical devoted to general geophysics, seismology, earth magnetism, and atmospheric studies. The journal was published regularly for a quarter of a century until 1982, when it merged with the French journal Annales de Géophysique to become Annales Geophysicae under the aegis of the European Geophysical Society. In 1981, this journal ceased publication of the section on solid earth geophysics, ending the legacy of Annali di Geofisica. In 1993, the Istituto Nazionale di Geofisica (ING), founder of the journal, decided to resume publication of its own journal under the same name, Annali di Geofisica. To ensure continuity, the first volume of the new series was assigned the volume number XXXVI (following the last issue published in 1982). In 2002, with volume XLV, the name of the journal was translated into English to become Annals of Geophysics and, in consequence, the journal impact factor counter was restarted. Starting in 2010, in order to improve its status and better serve the science community, Annals of Geophysics has instituted a number of editorial changes, including full electronic open access, freely accessible online, the possibility to comment on and discuss papers online, and a board of editors representing Asia and the Americas as well as Europe. [...]


2021 ◽  
Vol 37 (S1) ◽  
pp. 20-20
Author(s):  
Fernanda S. Tonin ◽  
Ariane G. Araujo ◽  
Mariana M. Fachi ◽  
Roberto Pontarolo ◽  
Fernando Fernandez-Llimos

Introduction: The use of inconsistent and outdated information may significantly compromise healthcare decision-making. We aimed to assess the extent of lag times in the publication and indexing of network meta-analyses (NMAs). Methods: Searches for NMAs on drug interventions were performed in PubMed (May 2020). Lag times were measured as the time between the last systematic search and the date of the article's submission, acceptance, online publication, indexing, and Medical Subject Heading (MeSH) allocation. Correlations between lag times and time trends were calculated by means of Spearman's rank correlation coefficient. Time-to-event analyses were performed considering independent variables such as geographical origin, journal impact factor, Scopus CiteScore, and open access status. Results: We included 1,245 NMAs. The median time from last search to article submission and publication was 6.8 months and 11.6 months, respectively. Only five percent of authors updated their literature searches after submission. There was a very slight decreasing historical trend for acceptance (r = −0.087; p = 0.01), online publication (r = −0.08; p = 0.008), and indexing lag times (r = −0.080; p = 0.007). Journal impact factor influenced the MeSH allocation process (log-rank p = 0.02). Slight differences were observed for acceptance, online publication, and indexing lag times when comparing open access and subscription journals. Conclusions: Authors need to update their literature searches before submission to reduce evidence production time. Peer reviewers and editors should ensure that authors comply with NMA standards and encourage the development of living meta-analyses.
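As an illustration of the time-to-event comparison mentioned above (journal impact factor and MeSH allocation), the following sketch shows the general form of a log-rank test using the lifelines package. The file, column names, and grouping by median impact factor are hypothetical; this is not the study's actual analysis.

```python
# Sketch: compare time from online publication to MeSH allocation between
# journals above and below the median impact factor, using a log-rank test.
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.read_csv("nma_lag_times.csv")          # hypothetical per-article table
df["high_jif"] = df["journal_impact_factor"] > df["journal_impact_factor"].median()

high = df[df["high_jif"]]
low = df[~df["high_jif"]]

result = logrank_test(high["days_to_mesh"], low["days_to_mesh"],
                      event_observed_A=high["mesh_allocated"],
                      event_observed_B=low["mesh_allocated"])
print("log-rank p-value:", result.p_value)
```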


2019 ◽  
Author(s):  
Amanda Costa Araujo Sr ◽  
Adriane Aver Vanin Sr ◽  
Dafne Port Nascimento Sr ◽  
Gabrielle Zoldan Gonzalez Sr ◽  
Leonardo Oliveira Pena Costa Sr

BACKGROUND The most common way to assess the impact of an article is based on the number of citations. However, the number of citations does not precisely reflect whether the message of the paper is reaching a wider audience. Currently, social media is used to disseminate the content of scientific articles. In order to measure this type of impact, a new tool named Altmetric was created. Altmetric aims to quantify the impact of each article through online media. OBJECTIVE This overview of methodological reviews aims to describe the associations of publishing journal and published article variables with Altmetric scores. METHODS Search strategies were run in MEDLINE, EMBASE, CINAHL, CENTRAL and the Cochrane Library, covering publications from inception until July 2018. We extracted data related to the published article and the publishing journal that were associated with Altmetric scores. RESULTS A total of 11 studies were considered eligible. These studies summarized a total of 565,352 articles. The variables citation counts, journal impact factor, access counts (i.e., the sum of HTML views and PDF downloads), publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate. CONCLUSIONS Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables such as access counts, publication in open access journals and the use of press releases are also likely to influence online media attention. CLINICALTRIAL N/A
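A minimal sketch of the kind of association the review describes, assuming a per-article table of Altmetric scores, citation counts, journal impact factors and access counts. The data file and column names are hypothetical, and Spearman correlation is used here simply because such scores are typically heavily skewed; the review itself does not prescribe a particular estimator.

```python
# Sketch: correlate article-level Altmetric scores with citation counts,
# journal impact factor and access counts (Spearman rank correlation).
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("altmetric_articles.csv")     # hypothetical per-article table

for predictor in ["citation_count", "journal_impact_factor", "access_count"]:
    rho, p = spearmanr(df[predictor], df["altmetric_score"])
    print(f"{predictor}: rho = {rho:.2f} (p = {p:.3g})")
```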

