Is preregistration worthwhile?

Author(s):  
Aba Szollosi ◽  
David Kellen ◽  
Danielle Navarro ◽  
Rich Shiffrin ◽  
Iris van Rooij ◽  
...  

Proponents of preregistration argue that, among other benefits, it improves the diagnosticity of statistical tests [1]. In the strong version of this argument, preregistration does this by solving statistical problems, such as family-wise error rates. In the weak version, it nudges people to think more deeply about their theories, methods, and analyses. We argue against both: the diagnosticity of statistical tests depends entirely on how well statistical models map onto underlying theories, and so improving statistical techniques does little to improve theories when the mapping is weak. There is also little reason to expect that preregistration will spontaneously help researchers to develop better theories (and, hence, better methods and analyses).
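The statistical problem named in the strong version, control of family-wise error rates under multiple testing, can be made concrete with a short simulation. The Python sketch below is purely illustrative and not taken from the paper; it assumes NumPy and SciPy are available and that 20 independent null hypotheses are tested at α = 0.05.

```python
# Illustrative sketch (not from the paper): family-wise error rate (FWER)
# inflation under multiple testing, and a Bonferroni correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_tests, alpha, n = 2000, 20, 0.05, 30

uncorrected_fp = 0
bonferroni_fp = 0
for _ in range(n_sims):
    # All null hypotheses are true: both groups are drawn from the same distribution.
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(n_tests)]
    uncorrected_fp += any(p < alpha for p in pvals)
    bonferroni_fp += any(p < alpha / n_tests for p in pvals)

print(f"Analytic FWER for {n_tests} tests: {1 - (1 - alpha) ** n_tests:.2f}")  # ~0.64
print(f"Simulated FWER, uncorrected:      {uncorrected_fp / n_sims:.2f}")
print(f"Simulated FWER, Bonferroni:       {bonferroni_fp / n_sims:.2f}")       # ~0.05
```

Corrections of this kind operate at the level of the statistical model; the abstract's point is that they do not touch the mapping between model and theory.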

Author(s):  
Carleilton Severino Silva

Since 1742, the year in which the Prussian Christian Goldbach wrote a letter to Leonhard Euler containing the weak version of his conjecture, mathematicians have been working on the problem. The tools of number theory have grown ever more sophisticated through attempts at its resolution. Euler himself said he was unable to prove it. The weak conjecture, in its modern form, states that any odd number greater than 5 can be written as the sum of 3 primes. In response to Goldbach's letter, Euler reminded him of a conversation in which he had proposed what is now known as Goldbach's strong conjecture: any even number greater than 2 can be written as the sum of 2 primes. The most notable result came in 2013, with a proof of the weak version by the Peruvian mathematician Harald Helfgott; the strong version, however, remains without a definitive proof. The weak version can be demonstrated without major difficulty and will not be described in this article, since it follows as a corollary of the strong version. Despite the enormous intellectual effort that great mathematicians have devoted to it over the centuries, the conjecture in question has been neither proved nor refuted to this day.
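To make the two statements and the strong-to-weak reduction concrete, here is a short illustrative check, not part of the article, that verifies both versions for small numbers; it assumes SymPy's isprime for primality testing.

```python
# Illustrative sketch (not from the article): checking both versions of
# Goldbach's conjecture for small numbers. The strong -> weak reduction is
# visible in weak_goldbach(): for odd n > 5, n - 3 is an even number > 2,
# so a strong decomposition of n - 3 plus the prime 3 yields three primes
# summing to n.
from sympy import isprime

def strong_goldbach(n):
    """Return (p, q) with p + q == n for even n > 2, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if isprime(p) and isprime(n - p):
            return p, n - p
    return None

def weak_goldbach(n):
    """Return (3, p, q) for odd n > 5, reusing the strong decomposition of n - 3."""
    pair = strong_goldbach(n - 3)
    return (3, *pair) if pair else None

assert all(strong_goldbach(n) for n in range(4, 10_000, 2))   # even n
assert all(weak_goldbach(n) for n in range(7, 10_000, 2))     # odd n
print("No counterexample to either version below 10,000.")
```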


1970 ◽  
Vol 18 (1) ◽  
pp. 73-92
Author(s):  
Yishai A. Cohen

In this paper I articulate and defend a new anti-theodicy challenge to Skeptical Theism. More specifically, I defend the Threshold Problem according to which there is a threshold to the kinds of evils that are in principle justifiable for God to permit, and certain instances of evil are beyond that threshold. I further argue that Skeptical Theism does not have the resources to adequately rebut the Threshold Problem. I argue for this claim by drawing a distinction between a weak and strong version of Skeptical Theism, such that the strong version must be defended in order to rebut the Threshold Problem. However, the skeptical theist’s appeal to our limited cognitive faculties only supports the weak version.


2013 ◽  
Author(s):  
Siouxsie Wiles ◽  
Anne L Bishop

As medical and molecular microbiologists who regularly read the scientific literature, it is our impression that many published papers contain data that are inappropriately presented and/or analysed. This is borne out by a number of studies indicating that typically at least half of published scientific articles that use statistical methods contain statistical errors. While there is an abundance of resources dedicated to explaining statistics to biologists, the evidence suggests that they are largely ineffective. These resources tend to focus on how particular statistical tests work, with reams of complicated-looking mathematical formulae. In addition, many statisticians are unfamiliar with the application of statistical techniques to molecular microbiology, instead telling us we need more samples, which can be difficult both ethically and practically in fields that involve animal work and painstaking sample collection. In an age when performing a statistical test merely requires clicking a button in a computer programme, it could be argued that what the vast majority of biologists need is not mathematical formulae but simple guidance on which buttons to click. We have developed an easy-to-follow decision chart that guides biologists through the statistical maze. Our practical and user-friendly chart should prove useful not only to active researchers, but also to journal editors and reviewers who need to determine rapidly whether the data presented in a submitted manuscript have been correctly analysed.
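As a rough illustration of the kind of logic such a decision chart encodes, and emphatically not the authors' published chart, the following Python sketch chooses between a parametric and a non-parametric two-sample test on the basis of a simple normality check; the function name, threshold, and example data are assumptions made for the illustration.

```python
# Hypothetical illustration of a decision-chart step for comparing two
# independent groups; not the chart described in the abstract.
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Pick a two-sample test based on a Shapiro-Wilk normality check, then run it."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        # Welch's t-test: does not assume equal variances.
        result = stats.ttest_ind(a, b, equal_var=False)
        test = "Welch's t-test"
    else:
        # Non-parametric alternative for clearly non-normal data.
        result = stats.mannwhitneyu(a, b, alternative="two-sided")
        test = "Mann-Whitney U"
    return test, result.pvalue

rng = np.random.default_rng(1)
treated = rng.lognormal(mean=1.1, sigma=0.4, size=12)   # hypothetical, skewed count-like data
control = rng.lognormal(mean=1.0, sigma=0.4, size=12)
print(compare_two_groups(treated, control))
```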


2004 ◽  
Vol 15 (3) ◽  
pp. 231-237 ◽  
Author(s):  
Gláucia Maria Bovi Ambrosano ◽  
André Figueiredo Reis ◽  
Marcelo Giannini ◽  
Antônio Carlos Pereira

A descriptive survey was performed in order to assess the statistical content and quality of Brazilian and international dental journals, and to compare their evolution over recent decades. The authors identified the reporting and accuracy of statistical techniques in 1000 papers published from 1970 to 2000 in seven dental journals: three Brazilian (Brazilian Dental Journal, Revista de Odontologia da Universidade de São Paulo and Revista de Odontologia da UNESP) and four international (Journal of the American Dental Association, Journal of Dental Research, Caries Research and Journal of Periodontology). Papers were divided into two time periods: from 1970 to 1989, and from 1990 to 2000. A slight increase in the proportion of articles presenting some form of statistical technique was observed for the Brazilian journals (from 61.0% to 66.7%), whereas a significant increase was observed for the international journals (from 65.8% to 92.6%). In addition, a decrease in the number of statistical errors was observed. The most commonly used statistical tests, as well as the most frequent errors found in dental journals, were identified. Hopefully, this investigation will encourage dental educators to better plan the teaching of biostatistics, and to improve the statistical quality of submitted manuscripts.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 113-113 ◽  
Author(s):  
A I Cogan

The hypothesis of labelled detectors (or ‘lines’) is the present-day version of the basic Müller-Helmholtz doctrine. Müller's dictum of the specific energy of nerves stated: “the same internal cause excites (…) in each sense the sensation peculiar to it”. Helmholtz made ‘the cause’ external to the body and postulated that all knowledge about the world thus comes through the senses. The key word is specificity. The strong version of the hypothesis must treat detection and identification as a single task: a stimulus would be identified whenever it is detected. The weak version requires only that we identify a specific mechanism by which both detection and identification are achieved, even though the latter may require additional processing. In the general case, the strong version (with its ludicrous ‘grandmother cell’ as the neural substrate) finds little support. Detection and recognition of complex shapes (letters, faces, etc) aside, even discrimination between simple increments and decrements of luminance is difficult to attribute directly to a specific mechanism (in this case, activity in either the ON or the OFF system, respectively). This is demonstrated by experiment 1 reported here. However, perception of relative depth seems to conform to the strong version of the hypothesis, as experiment 2, also reported here, indicates. Thus, at least some specific neural mechanisms (in this case, probably the crossed and uncrossed disparity detectors) may indeed be linked directly to perception.


Author(s):  
Mohamed Elhadi Rahmani ◽  
Abdelmalek Amine

Computer modeling of ecological systems is the activity of implementing computer solutions to analyze data related to the fields of remote sensing, earth science, biology, and the oceans. Ecologists analyze these data to identify relationships between a response and a set of predictors, using statistical models that do not accurately describe the main sources of variation in the response variable. Knowledge-discovery techniques are often more powerful, flexible, and effective for exploratory analysis than statistical techniques. This chapter aims to test the use of data mining in ecology. It discusses the exploration of ecological data by first defining data mining, its advantages, and its different types. The authors then describe the fields of bio-inspiration and metaheuristics, and finally present case studies in which they applied these two areas to explore ecological data.
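As a minimal illustration of the contrast the chapter draws, the Python sketch below, with entirely hypothetical predictors and response, fits a decision tree (a typical knowledge-discovery technique) to ecological-style data instead of a parametric statistical model; it assumes NumPy and scikit-learn are available.

```python
# Hypothetical example only: exploratory data mining on ecological-style data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Assumed predictors: temperature (deg C), annual rainfall (mm), elevation (m).
X = np.column_stack([
    rng.normal(15, 5, n),
    rng.gamma(2.0, 50.0, n),
    rng.uniform(0, 2000, n),
])
# Assumed nonlinear response: species present at warm, wet, low-lying sites.
y = ((X[:, 0] > 14) & (X[:, 1] > 80) & (X[:, 2] < 1200)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", round(tree.score(X_test, y_test), 2))
```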


2008 ◽  
Vol 13 (1) ◽  
pp. 111-117
Author(s):  
Mostafa Taqavi ◽  
Mohammad Zarepour

The Polish researcher in logic and philosophy, Jan Woleński, in one of his recent articles, “Metalogical Observations About the Underdetermination of Theories by Empirical Data,” logically formalized the weak and strong versions of the underdetermination of theories by empirical data (UT for short) and used these formalizations to analyze the two versions metalogically. He concluded that the weak version is defensible while the strong version is not. In this paper we critically examine Woleński's analysis of the strong version of UT.
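For orientation only, one common quantifier rendering of the weak/strong distinction, offered here as a sketch and not necessarily Woleński's own formalization, is the following, where EC(T) denotes the set of empirical consequences of a theory T:

```latex
% Sketch of one common rendering; not necessarily Woleński's formalization.
\begin{align*}
\text{(weak UT)}   \quad & \exists T\, \exists T'\, \bigl(T \neq T' \wedge \mathrm{EC}(T) = \mathrm{EC}(T')\bigr)\\
\text{(strong UT)} \quad & \forall T\, \exists T'\, \bigl(T \neq T' \wedge \mathrm{EC}(T) = \mathrm{EC}(T')\bigr)
\end{align*}
```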

