General journal policies and guidelines for publication in Ciencias Marinas

2019 · Vol 45 (1) · pp. 23-45
Author(s): Melba De Jesus
1990 · Vol 78 (1) · pp. 1-1
Author(s): M. J. Brown

From this issue, Clinical Science will increase its page numbers from an average of 112 to 128 per monthly issue. This welcome change — equivalent to at least two manuscripts — has been ‘forced’ on us by the increasing pressure on space; this has led to an undesirable increase in the delay between acceptance and publication, and to a fall in the proportion of submitted manuscripts we have been able to accept. The change in page numbers will now permit us to return to our exceptionally short interval between acceptance and publication of 3–4 months; at the same time we shall be able not only to accept (as now) those papers requiring little or no revision, but also to offer hope to some of those papers which have raised our interest but come to grief in review because of a major but remediable problem. Our view, doubtless unoriginal, has been that the review process, which is unusually thorough for Clinical Science, involving a specialist editor and two external referees, is most constructive when it helps the evolution of a good paper from an interesting piece of research. Traditionally, the papers in Clinical Science have represented some areas of research more than others. However, this has entirely reflected the pattern of papers submitted to us, rather than any selective interest of the Editorial Board, which numbers up to 35 scientists covering most areas of medical research. Arguably, after the explosion of specialist journals during the last decade, the general journal can look forward to a renaissance in the 1990s, as scientists in apparently different specialities discover that they are interested in the same substances, asking similar questions and developing techniques of mutual benefit to answer these questions.
This situation arises from the trend, even among clinical scientists, to recognize the power of research based at the cellular and molecular level to achieve real progress, and at this level the concept of organ-based specialism breaks down. It is perhaps ironic that this journal, for a short while at the end of the 1970s, adopted — and then discarded — the name of Clinical Science and Molecular Medicine, since this title perfectly represents the direction in which clinical science, and therefore Clinical Science, is now progressing.


2015 · Vol 15 (2) · pp. 865-889
Author(s): Ofer H. Azar

Abstract
Research on the academic review process may help to improve research productivity. The article presents a model of the review process in a top journal, in which authors know their paper’s quality whereas referees obtain a noisy signal about quality. Increased signal noisiness, lower submission costs and more published papers all reduce the average quality of published papers in the journal. The model allows analyzing how the submission cost, the accuracy of referees and the number of published papers affect additional equilibrium characteristics. Implications of the model for journal policies are also discussed.
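One comparative static from this abstract — noisier referee signals reduce the average quality of what gets published — can be illustrated with a toy Monte Carlo. This is a sketch, not Azar's actual model: the uniform quality distribution, normal signal noise, and fixed 10% acceptance rate are illustrative assumptions, and author submission decisions are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_accepted_quality(sigma, n=200_000, accept_rate=0.1):
    """Mean true quality of the top `accept_rate` fraction ranked by noisy signal."""
    q = rng.uniform(0.0, 1.0, n)           # true quality, known to the author
    s = q + rng.normal(0.0, sigma, n)      # noisy signal observed by referees
    cut = np.quantile(s, 1.0 - accept_rate)
    return q[s >= cut].mean()              # quality of the "published" papers

for sigma in (0.0, 0.2, 0.5, 1.0):
    print(f"noise sd = {sigma:.1f}  ->  mean accepted quality = "
          f"{mean_accepted_quality(sigma):.3f}")
```

With no noise, referees accept exactly the top decile (mean quality about 0.95); as the noise standard deviation grows, low-quality papers increasingly slip past the threshold and the mean quality of accepted papers falls monotonically.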


2019
Author(s): Ineke Wessel, Helen Niemeyer

Adopting Registered Reports is an important step for the European Journal of Psychotraumatology to promote open science practices in the field of psychotrauma research. However, adopting these practices requires us as individual researchers to change our perspective fundamentally. We need to put fears of being scooped aside, adopt a permissive stance towards making mistakes and accept that null-results should be part of the scientific record. Journal policies that reinforce openness and transparency can facilitate such an attitude change in individual researchers.


2019 · Vol 131 (1) · pp. 264-270
Author(s): Madeleine P. de Lotbiniere-Bassett, Jay Riva-Cambrin, Patrick J. McDonald

OBJECTIVE
An increasing amount of funding in neurosurgery research comes from industry, which may create a conflict of interest (COI) and the potential to bias results. The reporting and handling of COIs have become difficult, particularly as explicit policies themselves and definitions thereof continue to vary between medical journals. In this study, the authors sought to evaluate the prevalence and comprehensiveness of COI policies among leading neurosurgical journals.
METHODS
The authors conducted a cross-sectional study of publicly available online disclosure policies in the 20 highest-ranking neurosurgical journals, as determined by Google Scholar Metrics, in July 2016.
RESULTS
Overall, 89.5% of the highest-impact neurosurgical journals included COI policy statements. Ten (53%) journals requested declaration of nonfinancial conflicts, while 2 journals specifically set a time period for COIs. Sixteen journals required declaration from the corresponding author, 13 from all authors, 6 from reviewers, and 5 from editors. Four journals were included in the International Committee of Medical Journal Editors (ICMJE) list of publications that follow the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (currently known as Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals). Five journal policies included COI declaration verification, management, or enforcement. The neurosurgery journals with more comprehensive COI policies were significantly more likely to have higher h5-indices (p = 0.003) and higher impact factors (p = 0.01).
CONCLUSIONS
In 2016, the majority of, but not all, high-impact neurosurgical journals had publicly available COI disclosure policies. Policy inclusiveness and comprehensiveness varied substantially across neurosurgical journals, but COI comprehensiveness was associated with other established markers of individual journals’ favorability and influence, such as impact factor and h5-index.


2006 · Vol 54 (6) · pp. 1711-1713
Author(s): Patricia P. Katz, Edward H. Yelin, Michael D. Lockshin

2012 · Vol 7 (6) · pp. 543-554
Author(s): Marjan Bakker, Annette van Dijk, Jelte M. Wicherts

If science were a game, a dominant rule would probably be to collect results that are statistically significant. Several reviews of the psychological literature have shown that around 96% of papers involving the use of null hypothesis significance testing report significant outcomes for their main results but that the typical studies are insufficiently powerful for such a track record. We explain this paradox by showing that the use of several small underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than does the use of one larger (more powerful) sample. Publication bias and the most efficient strategy lead to inflated effects and high rates of false positives, especially when researchers also resort to questionable research practices, such as adding participants after intermediate testing. We provide simulations that highlight the severity of such biases in meta-analyses. We consider 13 meta-analyses covering 281 primary studies in various fields of psychology and find indications of biases and/or an excess of significant results in seven. These results highlight the need for sufficiently powerful replications and changes in journal policies.
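The core claim — that splitting a fixed participant budget into several small studies is a more "efficient" way to obtain p < .05 than running one large study — can be checked with a quick simulation. This is a sketch, not the authors' code: the true effect size (d = 0.2), the budget split (three studies of n = 20 per group vs. one study of n = 60 per group, 120 participants per group either way), and the two-sample t-test are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def significant(n, d):
    """Run one two-sample t-test with n per group and true effect d; True if p < .05."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < 0.05

REPS, D = 10_000, 0.2
# Strategy A: one large study, n = 60 per group.
large = np.mean([significant(60, D) for _ in range(REPS)])
# Strategy B: three small studies, n = 20 per group each; "success" means
# at least one reaches p < .05 (the one that would get written up).
small = np.mean([any(significant(20, D) for _ in range(3)) for _ in range(REPS)])

print(f"one large study significant:        {large:.3f}")
print(f">=1 of three small studies signif.: {small:.3f}")
```

Under these assumptions the small-studies strategy yields a significant result more often (roughly 0.23 vs. 0.20), even though each small study is badly underpowered — which is exactly why, combined with publication bias, it inflates effects and false-positive rates.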

