PP118 A Survival Analysis Of The Lag Times In The Publication Of Network Meta-Analyses

2021 ◽  
Vol 37 (S1) ◽  
pp. 20-20
Author(s):  
Fernanda S. Tonin ◽  
Ariane G. Araujo ◽  
Mariana M. Fachi ◽  
Roberto Pontarolo ◽  
Fernando Fernandez-Llimos

Introduction
The use of inconsistent and outdated information may significantly compromise healthcare decision-making. We aimed to assess the extent of lag times in the publication and indexing of network meta-analyses (NMAs).

Methods
Searches for NMAs on drug interventions were performed in PubMed (May 2020). Lag times were measured as the time between the last systematic search and the dates of the article's submission, acceptance, online publication, indexing, and Medical Subject Heading (MeSH) allocation. Correlations between lag times and time trends were calculated by means of Spearman's rank correlation coefficient. Time-to-event analyses were performed considering independent variables such as geographical origin, journal impact factor, Scopus CiteScore, and open access status.

Results
We included 1,245 NMAs. The median time from last search to article submission and publication was 6.8 months and 11.6 months, respectively. Only five percent of authors updated their literature searches after submission. There was a very slight decreasing historical trend for acceptance (r = −0.087; p = 0.01), online publication (r = −0.080; p = 0.008), and indexing lag times (r = −0.080; p = 0.007). Journal impact factor influenced the MeSH allocation process (log-rank p = 0.02). Slight differences were observed for acceptance, online publication, and indexing lag times when comparing open access and subscription journals.

Conclusions
Authors need to update their literature searches before submission to reduce evidence production time. Peer reviewers and editors should ensure that authors comply with NMA standards and encourage the development of living meta-analyses.
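
The lag-time and trend computations described above are straightforward to reproduce in principle. Below is a minimal sketch, assuming pandas and SciPy; the column names (last_search, submitted, published) and example rows are illustrative, not the authors' dataset.

```python
# A minimal sketch of computing publication lag times from milestone dates
# and testing a historical trend with Spearman's rank correlation.
# Column names and the three example rows are illustrative assumptions.
import pandas as pd
from scipy import stats

nmas = pd.DataFrame({
    "last_search": pd.to_datetime(["2016-01-10", "2017-03-02", "2018-06-15"]),
    "submitted":   pd.to_datetime(["2016-08-01", "2017-09-20", "2019-01-05"]),
    "published":   pd.to_datetime(["2017-01-15", "2018-02-11", "2019-06-30"]),
})

# Lag times in days between milestones
nmas["search_to_submission"] = (nmas["submitted"] - nmas["last_search"]).dt.days
nmas["search_to_publication"] = (nmas["published"] - nmas["last_search"]).dt.days

print(nmas["search_to_submission"].median())

# Historical trend: Spearman correlation between publication year and lag time
rho, p = stats.spearmanr(nmas["published"].dt.year, nmas["search_to_publication"])
print(rho, p)
```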

BMJ Open ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. e048581
Author(s):  
Fernanda S Tonin ◽  
Ariane G Araujo ◽  
Mariana M Fachi ◽  
Vinicius L Ferreira ◽  
Roberto Pontarolo ◽  
...  

Objective
We assessed the extent of lag times in the publication and indexing of network meta-analyses (NMAs).

Study design
This was a survey of published NMAs on drug interventions.

Setting
NMAs indexed in PubMed (searches updated in May 2020).

Primary and secondary outcome measures
Lag times were measured as the time between the last systematic search and the article submission, acceptance, online publication, indexing, and Medical Subject Headings (MeSH) allocation dates. Time-to-event analyses were performed considering independent variables (geographical origin, Journal Impact Factor, Scopus CiteScore, open access status) (SPSS V.24, R/RStudio).

Results
We included 1,245 NMAs. The median time from last search to article submission was 6.8 months (204 days; IQR 95–381), and to publication was 11.6 months. Only 5% of authors updated their search after first submission. There was a slight decreasing historical trend in acceptance (rho = −0.087; p = 0.010), online publication (rho = −0.080; p = 0.008), and indexing (rho = −0.080; p = 0.007) lag times. Journal Impact Factor influenced the MeSH allocation process, but not the other lag times. The comparison between open access and subscription journals showed negligible differences in acceptance, online publication, and indexing lag times.

Conclusion
Efforts by authors to update their search before submission are needed to reduce evidence production time. Peer reviewers and editors should ensure authors' compliance with NMA standards. The accuracy of these findings depends on the accuracy of the metadata used; as we evaluated only NMAs on drug interventions, results may not be generalisable to all types of studies.
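
A hedged sketch of the time-to-event comparison this version of the study reports (MeSH allocation compared across journal impact factor groups), assuming the lifelines package for the log-rank test; the group split and day counts are illustrative placeholders, not the authors' data.

```python
# A minimal sketch of a log-rank comparison of time-to-MeSH-allocation
# between high- and low-impact-factor journals, assuming lifelines.
import numpy as np
from lifelines.statistics import logrank_test

# Days from submission to MeSH allocation, split by journal impact factor
high_jif_days = np.array([150., 200., 95., 310., 180.])
low_jif_days  = np.array([260., 340., 400., 290., 310.])

# All articles here eventually received MeSH terms, so every event is observed
observed_high = np.ones_like(high_jif_days)
observed_low  = np.ones_like(low_jif_days)

result = logrank_test(high_jif_days, low_jif_days,
                      event_observed_A=observed_high,
                      event_observed_B=observed_low)
print(result.p_value)
```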


2018 ◽  
Vol XVI (2) ◽  
pp. 369-388 ◽  
Author(s):  
Aleksandar Racz ◽  
Suzana Marković

Technology-driven changes, with the consequent increase in the online availability and accessibility of journals and papers, are rapidly changing patterns of academic communication and publishing. The dissemination of important research findings through the academic and scientific community begins with publication in peer-reviewed journals. The aim of this article is to identify, critically evaluate, and integrate the findings of relevant, high-quality individual studies addressing trends in the enhancement of the visibility and accessibility of academic publishing in the digital era. The number of citations a paper receives is often used as a measure of its impact and, by extension, of its quality. Many aberrations of citation practice have been reported in attempts to increase the impact of a paper through manipulation of self-citation, inter-citation, and citation cartels. Authors' avenues for legally extending the visibility, awareness, and accessibility of their research outputs, raising citation counts and amplifying measurable personal scientific impact, have been strongly enhanced by online communication tools: networking (LinkedIn, ResearchGate, Academia.edu, Google Scholar), sharing (Facebook, blogs, Twitter, Google Plus), media sharing (SlideShare), data sharing (Dryad Digital Repository, Mendeley database, PubMed, PubChem), code sharing, impact tracking, and publishing in open access journals. Many studies and review articles in the last decade have examined whether open access articles receive more citations than equivalent subscription (toll access) articles, and most of them lead to the conclusion that open access articles likely enjoy a citation advantage over generally equivalent pay-for-access articles in many, if not most, disciplines. But it remains questionable whether never-cited papers are indeed "worth(less)" papers, and whether the journal impact factor and the number of citations should be considered the only suitable indicators for evaluating the quality of scientists. The phrase "publish or perish", usually used to describe the pressure in academia to rapidly and continually publish academic work to sustain or further one's career, can now, in the 21st century, be reformulated as "publish, be cited, and maybe you will not perish".


2018 ◽  
Author(s):  
LM Hall ◽  
AE Hendricks

Abstract

Background
Recently, there has been increasing concern about the replicability, or lack thereof, of published research. An especially high rate of false discoveries has been reported in some areas, motivating the creation of resource-intensive collaborations to estimate the replication rate of published research by repeating a large number of studies. The substantial amount of resources required by these replication projects limits the number of studies that can be repeated and, consequently, the generalizability of the findings.

Methods and findings
In 2013, Jager and Leek developed a method to estimate the empirical false discovery rate from journal abstracts and applied their method to five high-profile journals. Here, we use the relative efficiency of Jager and Leek's method to gather p-values from over 30,000 abstracts and to subsequently estimate the false discovery rate for 94 journals over a five-year time span. We model the empirical false discovery rate by journal subject area (cancer or general medicine), impact factor, and Open Access status. We find that the empirical false discovery rate is higher for cancer vs. general medicine journals (p = 5.14E-6). Within cancer journals, we find that this relationship is further modified by journal impact factor, where a lower journal impact factor is associated with a higher empirical false discovery rate (p = 0.012, 95% CI: -0.010, -0.001). We find no significant differences, on average, in the false discovery rate for Open Access vs. closed access journals (p = 0.256, 95% CI: -0.014, 0.051).

Conclusions
We find evidence of a higher false discovery rate in cancer journals compared to general medicine journals, especially those with a lower journal impact factor. For cancer journals, a decrease in journal impact factor of one point is associated with a 0.006 increase in the empirical false discovery rate, on average. For a false discovery rate of 0.05, this would result in over a 10% increase, to 0.056. Conversely, we find no significant evidence of a higher false discovery rate, on average, for Open Access vs. closed access journals from InCites. Our results identify areas of research that may need additional scrutiny and support to facilitate replicable science. Given our publicly available R code and data, others can complete a broad assessment of the empirical false discovery rate across other subject areas and characteristics of published research.
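
For readers unfamiliar with the approach, the following is a minimal sketch of the flavor of Jager and Leek's method, not their actual swfdr implementation: scrape reported p-values from abstract text, then fit a two-component mixture (uniform null plus beta alternative) to the significant p-values by maximum likelihood. The regex, function names, and starting values are illustrative assumptions.

```python
# A sketch of estimating an empirical false discovery rate from p-values
# reported in abstracts: extract them with a regex, then fit a mixture of
# a uniform (null) and a truncated Beta(a, 1) (alternative) density.
import re
import numpy as np
from scipy import optimize, stats

P_VALUE_RE = re.compile(r"[pP]\s*[=<]\s*(0?\.\d+)")

def extract_p_values(abstract_text):
    """Pull reported p-values (p = 0.03, P < .001, ...) out of free text."""
    return [float(m) for m in P_VALUE_RE.findall(abstract_text)]

def estimate_fdr(p_values, alpha=0.05):
    """Estimate pi0, the share of true nulls among significant p-values."""
    p = np.asarray([x for x in p_values if 0 < x < alpha])

    def neg_log_lik(theta):
        pi0, a = theta
        # Uniform null on (0, alpha); Beta(a, 1) alternative truncated to (0, alpha)
        null = 1.0 / alpha
        alt = stats.beta.pdf(p, a, 1) / stats.beta.cdf(alpha, a, 1)
        return -np.sum(np.log(pi0 * null + (1 - pi0) * alt))

    res = optimize.minimize(neg_log_lik, x0=[0.5, 0.5],
                            bounds=[(1e-3, 1 - 1e-3), (1e-3, 1.0)])
    return res.x[0]  # empirical FDR among results reported as significant
```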


2020 ◽  
Vol 49 (5) ◽  
pp. 35-58
Author(s):  
Matthias Templ

This article is motivated by the author's work as editor-in-chief of the Austrian Journal of Statistics and contains detailed analyses of the impact of the Austrian Journal of Statistics. The impact of a journal is typically expressed by journal metrics indicators. One of the most important, the journal impact factor, is calculated from the Web of Science (WoS) database by Clarivate Analytics. It is known that newly established journals, or journals not belonging to big publishers, often face difficulties in being included in, for example, the Science Citation Index (SCI), and thus do not receive a WoS journal impact factor, as is the case, for example, for the Austrian Journal of Statistics. In this study, a novel approach is pursued: modeling and predicting the WoS impact factor of journals using open-access or partly open-access databases such as Google Scholar, ResearchGate, and Scopus. I hypothesize a functional linear dependency between citation counts in these databases and the journal impact factor. These functional relationships enable the development of a model that may allow estimating the impact factor for new, small, and independent journals not listed in SCI. However, good results could only be achieved with robust linear regression and well-chosen models. In addition, this study demonstrates that the WoS impact factor of SCI-listed journals can be successfully estimated without using the Web of Science database, and therefore the dependency of researchers and institutions on this popular database can be minimized. These results suggest that the statistical model developed here can be well applied to predict the WoS impact factor using alternative open-access databases.
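
A minimal sketch of the kind of model described, assuming statsmodels' robust linear model (Huber M-estimator); the toy citation counts and the two-predictor specification are illustrative, not the paper's actual data or final model.

```python
# A sketch of predicting a WoS-like impact factor from citation counts in
# open databases via robust linear regression. All data below is made up.
import numpy as np
import statsmodels.api as sm

# Citation counts per journal from openly accessible databases
# (e.g., Google Scholar and Scopus), plus the known WoS impact factor
# for SCI-listed journals used to train the model.
scholar_cites = np.array([120., 85., 300., 42., 510.])
scopus_cites  = np.array([ 90., 70., 260., 30., 430.])
wos_jif       = np.array([0.9, 0.7, 2.1, 0.4, 3.3])

X = sm.add_constant(np.column_stack([scholar_cites, scopus_cites]))

# Robust linear regression (Huber M-estimator) down-weights outlying journals
model = sm.RLM(wos_jif, X, M=sm.robust.norms.HuberT()).fit()

# Predict an impact-factor-like score for a journal not listed in SCI
new_journal = sm.add_constant(np.array([[150., 120.]]), has_constant='add')
print(model.params, model.predict(new_journal))
```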


2014 ◽  
Vol 57 (1) ◽  
Author(s):  
Fabio Florindo ◽  
Francesca Bianco ◽  
Paola De Michelis ◽  
Simona Masina ◽  
Giovanni Muscari ◽  
...  

Annals of Geophysics is a bimonthly international journal, which publishes scientific papers in the field of geophysics sensu lato. It derives from Annali di Geofisica, which commenced publication in January 1948 as a quarterly periodical devoted to general geophysics, seismology, earth magnetism, and atmospheric studies. The journal was published regularly for a quarter of a century until 1982, when it merged with the French journal Annales de Géophysique to become Annales Geophysicae under the aegis of the European Geophysical Society. In 1981, this journal ceased publication of the section on solid earth geophysics, ending the legacy of Annali di Geofisica. In 1993, the Istituto Nazionale di Geofisica (ING), founder of the journal, decided to resume publication of its own journal under the same name, Annali di Geofisica. To ensure continuity, the first volume of the new series was assigned the volume number XXXVI (following the last issue published in 1982). In 2002, with volume XLV, the name of the journal was translated into English to become Annals of Geophysics and, in consequence, the journal impact factor counter was restarted. Starting in 2010, in order to improve its status and better serve the science community, Annals of Geophysics has instituted a number of editorial changes, including full electronic open access, freely accessible online; the possibility to comment on and discuss papers online; and a board of editors representing Asia and the Americas as well as Europe. [...]


2019 ◽  
Author(s):  
Amanda Costa Araujo Sr ◽  
Adriane Aver Vanin Sr ◽  
Dafne Port Nascimento Sr ◽  
Gabrielle Zoldan Gonzalez Sr ◽  
Leonardo Oliveira Pena Costa Sr

Background
The most common way to assess the impact of an article is the number of citations it receives. However, citation counts do not precisely reflect whether the message of a paper is reaching a wider audience. Social media is now used to disseminate the contents of scientific articles, and a tool named Altmetric was created to measure this type of impact: Altmetric aims to quantify the impact of each article across online media.

Objective
This overview of methodological reviews aims to describe the associations of publishing-journal and published-article variables with Altmetric scores.

Methods
Searches of MEDLINE, EMBASE, CINAHL, CENTRAL, and the Cochrane Library, from inception until July 2018, were conducted. We extracted data on the article and journal variables associated with Altmetric scores.

Results
A total of 11 studies were considered eligible. These studies summarized a total of 565,352 articles. Citation counts, journal impact factor, access counts (the sum of HTML views and PDF downloads), publication as open access, and press releases generated by the publishing journal were associated with Altmetric scores. The magnitudes of these correlations ranged from weak to moderate.

Conclusions
Citation counts and journal impact factor are the variables most commonly associated with high Altmetric scores. Other variables, such as access counts, publication in open access journals, and the use of press releases, are also likely to influence online media attention.

Clinical trial: N/A
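
For context, Altmetric exposes a public details API that can be queried per DOI. The sketch below assumes the api.altmetric.com/v1/doi/<doi> endpoint and its "score" JSON field; endpoint behavior, field names, and rate limits should be verified against current Altmetric documentation.

```python
# A hedged sketch of fetching an Altmetric attention score for one article;
# the endpoint and "score" field are assumptions to verify against the docs.
import requests

def fetch_altmetric_score(doi: str) -> float | None:
    """Return the Altmetric attention score for a DOI, or None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # article not tracked by Altmetric
        return None
    resp.raise_for_status()
    return resp.json().get("score")

# Example call with an arbitrary, illustrative DOI
print(fetch_altmetric_score("10.1038/nature12373"))
```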


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
L. M. Hall ◽  
A. E. Hendricks

Abstract

Background
A low replication rate has been reported in some scientific areas, motivating the creation of resource-intensive collaborations to estimate the replication rate by repeating individual studies. The substantial resources required by these projects limit the number of studies that can be repeated and, consequently, the generalizability of the findings. We extend the use of a method from Jager and Leek to estimate the false discovery rate for 94 journals over a 5-year period using p-values from over 30,000 abstracts, enabling the study of how the false discovery rate varies by journal characteristics.

Results
We find that the empirical false discovery rate is higher for cancer versus general medicine journals (p = 9.801E−07, 95% CI: 0.045, 0.097; adjusted mean false discovery rate cancer = 0.264 vs. general medicine = 0.194). We also find that the false discovery rate is negatively associated with log journal impact factor: a two-fold decrease in journal impact factor is associated with an average increase of 0.020 in FDR (p = 2.545E−04). Conversely, we find no statistically significant evidence of a higher false discovery rate, on average, for Open Access versus closed access journals (p = 0.320, 95% CI: −0.015, 0.046; adjusted mean false discovery rate Open Access = 0.241 vs. closed access = 0.225).

Conclusions
Our results identify areas of research that may need additional scrutiny and support to facilitate replicable science. Given our publicly available R code and data, others can complete a broad assessment of the empirical false discovery rate across other subject areas and characteristics of published research.
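
A minimal sketch of the journal-level regression this abstract describes (FDR on subject area, log2 impact factor, and open access status), assuming pandas and statsmodels; the toy rows are illustrative, and the log2_jif coefficient corresponds to the reported average change in FDR per doubling of impact factor.

```python
# A sketch of regressing per-journal empirical FDR on journal characteristics;
# variable names and the six toy rows are illustrative, not the authors' data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

journals = pd.DataFrame({
    "fdr":         [0.26, 0.19, 0.30, 0.17, 0.22, 0.21],  # empirical FDR per journal
    "cancer":      [1, 0, 1, 0, 1, 0],                     # cancer vs. general medicine
    "jif":         [5.2, 50.0, 3.1, 70.0, 10.4, 25.0],     # journal impact factor
    "open_access": [0, 0, 1, 0, 1, 1],
})
journals["log2_jif"] = np.log2(journals["jif"])

# The log2_jif coefficient is the average change in FDR per doubling of JIF
fit = smf.ols("fdr ~ cancer + log2_jif + open_access", data=journals).fit()
print(fit.params)
```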


2020 ◽  
Vol 12 (02) ◽  
pp. e284-e291
Author(s):  
Ronaldo Nuesi ◽  
John Y. Lee ◽  
Ajay E. Kuriyan ◽  
Jayanth Sridhar

Abstract

Objective
This study aimed to explore the relationship between publishing speeds and peer-reviewed journal bibliometric measures in ophthalmology.

Methods
Journal Citation Reports and the Scopus database were accessed to identify journal bibliometric measures in ophthalmology. Twelve randomly selected 2018 articles from each identified journal were studied. All outcome measures were extracted from the full text of the articles and correlated with journal bibliometric measures. Statistical analysis was performed on the measured parameters in comparison with a previous study.

Main Outcomes and Measures
Journal impact factor, Eigenfactor score, and CiteScore were correlated with the time from submission or acceptance of manuscripts to online and print publication. The correlation between study design and publishing speed was also assessed.

Results
A total of 55 journals were included, for a total of 657 articles. Online publication was significantly faster than print publication for almost every journal (p < 0.001). Laboratory experimental studies had significantly shorter times from submission to online publication (p = 0.002) and from acceptance to online publication (p < 0.001) compared with observational and interventional studies. Journal impact factor was positively correlated with publishing speed from acceptance to online publication (p = 0.034). CiteScore was positively correlated with speed from submission to print publication (p = 0.04), acceptance to print publication (p = 0.013), and acceptance to online publication (p = 0.003). Eigenfactor score was not significantly correlated with any outcome measure.

Conclusion
Online publication has increased the speed of dissemination of knowledge in the ophthalmology literature. Despite reporting higher numbers of submissions every year, ophthalmology journals with higher bibliometric measures of impact tend to publish peer-reviewed articles faster than journals with lower impact scores. The study design of an article may also affect its speed to publication.
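
A minimal sketch of the two kinds of comparisons reported here, assuming SciPy: a paired test of online versus print publication times, and a rank correlation between impact factor and publishing speed. The per-article day counts and impact factors are illustrative placeholders.

```python
# A sketch of comparing online vs. print publication lags (paired Wilcoxon
# signed-rank test) and correlating impact factor with publishing speed
# (Spearman). All numbers below are made-up placeholders.
import numpy as np
from scipy import stats

# Days from acceptance to online vs. print publication for the same articles
accept_to_online = np.array([30., 45., 22., 60., 38., 51.])
accept_to_print  = np.array([95., 120., 80., 150., 110., 130.])

# Paired test: are online publications systematically faster than print?
print(stats.wilcoxon(accept_to_online, accept_to_print))

# Correlation between a journal's impact factor and its publishing speed
jif = np.array([2.1, 8.5, 1.4, 11.2, 4.0, 6.3])
print(stats.spearmanr(jif, accept_to_online))
```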

