Fallibility in science: Responding to errors in the work of oneself and others

2017 ◽  
Author(s):  
Dorothy V Bishop

Fallibility in science cuts both ways: it poses dilemmas for the scientist who discovers errors in their own work and for those who discover errors in the work of others. The ethical response to finding errors in one's own work is clear: they should be acknowledged and corrected as rapidly as possible. Yet people are often reluctant to 'do the right thing' because of a perception that this could lead to reputational damage. I argue that the best defence against such outcomes is the adoption of open science practices, which help avoid errors and also lead to recognition that mistakes are part of normal science. Indeed, a reputation for scientific integrity can be enhanced by admitting to errors. The second part of the paper focuses on situations where errors are discovered in the work of others; in the case of honest errors, action must be taken to put things right, but this should be done in a collegial way that offers the researcher the opportunity to deal with the problem themselves. Difficulties arise if those who commit errors are unresponsive or reluctant to make changes, when there is disagreement about whether a dataset or analysis is problematic, or where deliberate manipulation of findings or outright fraud is suspected. I offer some guidelines on how to approach such cases. My key message is that for science to progress, we have to accept the inevitability of error. In the long run, scientists will not be judged on whether or not they make mistakes, but on how they respond when those mistakes are detected.


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S399-S399
Author(s):  
Derek M Isaacowitz ◽  
Jonathan W King

Abstract Scientists from many disciplines have recently suggested changes in research practices, with the goal of ensuring greater scientific integrity. Some suggestions have focused on reducing researcher degrees of freedom to extract significant findings from exploratory analyses, whereas others concern how best to power studies and analyze results. Yet others involve ensuring that other interested researchers can easily access study materials, code, and data, to help with re-analysis and/or replication. These changes are moving targets, with discussions and suggested practices ongoing. However, aging researchers have not yet been major participants in these discussions, and aging journals are just starting to consider open science policies. This symposium, sponsored by the GSA Publications Committee, will highlight transparency and open science practices that seem most relevant to aging researchers, discuss potential challenges to implementing them as well as reasons for doing so, and will consider how aging journals may implement these practices. Open science practices to be considered include: preregistration, open data, open materials and code, sample size justification, and analytic tools for considering null effects. Presenters from a range of areas of aging research (lab, secondary data, qualitative) will show examples of open science practices in their work and will discuss concerns about, and challenges of, implementing them. Then, editorial team members will discuss the implications of these changes for aging journals. Finally, discussant Jon King will give NIA's perspective on the importance of encouraging open science practices in the aging field.
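As a concrete illustration of one of the practices mentioned above, the minimal sketch below shows a sample size justification for a simple two-group comparison using statsmodels' power routines. The effect size, alpha level, and target power are illustrative assumptions for the example only; they are not values drawn from the symposium or from any of the presenters' studies.

```python
# Minimal sample-size justification sketch (illustrative values, not from the symposium).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed standardized mean difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # target probability of detecting the assumed effect
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

Reporting the assumed effect size, alpha, and power alongside the resulting n is what makes the justification transparent; the same calculation can be preregistered before data collection.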


1989 ◽  
Vol 43 (2) ◽  
pp. 35-40 ◽  
Author(s):  
Thomas Doherty

CFA Magazine ◽  
2010 ◽  
Vol 21 (5) ◽  
pp. 13-14
Author(s):  
Crystal Detamore

2021 ◽  
pp. 089124162110218
Author(s):  
John R. Parsons

Every year, hundreds of U.S. citizens patrol the Mexican border dressed in camouflage and armed with pistols and assault rifles. Unsanctioned by the government, these militias aim to stop the movement of narcotics into the United States. Recent interest in the anthropology of ethics has focused on how individuals cultivate themselves toward a notion of the ethical. In contrast, within the militias, ethical self-cultivation was absent. I argue that the volunteers derived the power to be ethical from their control of the dominant moral assemblage and the construction of an immoral "Other," which allowed them to define a moral landscape that limited the potential for ethical conflicts. In the article, I discuss two instances in which Border Watch and its volunteers dismissed disruptions to their moral certainty and confirmed to themselves that their actions were not only the "right" thing to do, but the only ethical response available.


Author(s):  
Patrick W. Kraft ◽  
Ellen M. Key ◽  
Matthew J. Lebo

Abstract Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: In a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha_1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha_1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one (but not necessarily more than one) of the x's. A significant $\alpha_1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
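As a rough illustration of the setup (this is not the authors' simulation code), the sketch below generates two independent I(1) regressors, makes y cointegrated with only the first, and estimates a multivariate GECM by OLS. The point it illustrates is that a large t-statistic on $\alpha_1^\ast$ does not, by itself, tell you which of the regressors y is cointegrated with. Variable names, the sample length, and parameter values are all illustrative assumptions.

```python
# Illustrative GECM simulation (not the authors' code): y is cointegrated with x1 only,
# yet the t-test on the error correction coefficient alpha_1* is still significant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 500

x1 = np.cumsum(rng.normal(size=T))   # I(1) regressor that y is cointegrated with
x2 = np.cumsum(rng.normal(size=T))   # independent I(1) regressor, not cointegrated with y
y = x1 + rng.normal(size=T)          # stationary deviation from x1 => cointegration with x1

# GECM: dy_t = a0 + a1*y_{t-1} + b1*dx1_t + g1*x1_{t-1} + b2*dx2_t + g2*x2_{t-1} + e_t
dy = np.diff(y)
X = sm.add_constant(np.column_stack([
    y[:-1],                 # y_{t-1}: its coefficient is alpha_1*
    np.diff(x1), x1[:-1],   # short- and long-run terms for x1
    np.diff(x2), x2[:-1],   # short- and long-run terms for x2
]))
fit = sm.OLS(dy, X).fit()
print("alpha_1* =", round(fit.params[1], 3), " t =", round(fit.tvalues[1], 2))
```

In this toy setup the estimated $\alpha_1^\ast$ is strongly significant even though x2 plays no role in the long-run relationship, which is consistent with the abstract's warning about interpreting the test when there is more than one independent variable.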

