Why Inductive Risk Requires Values in Science

Author(s):  
Heather Douglas


Author(s):  
Justin B. Biddle ◽  
Rebecca Kukla

At each stage of inquiry, actions, choices, and judgments carry a chance of leading to mistakes and false conclusions. One of the most vigorously discussed kinds of epistemic risk is inductive risk, that is, the risk of inferring a false positive or a false negative from statistical evidence. This chapter develops a more fine-grained typology of epistemic risks and argues that many of the epistemic risks that have been classified as inductive risks are better seen as examples of a more expansive category, which it dubs “phronetic risk.” This finer typology helps to show that values in science often operate not only at the level of individual psychologies but also at the level of knowledge-generating social institutions.
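As a minimal numerical sketch of the trade-off that inductive risk names (not from the chapter; the one-sided normal test and the assumed true effect of one standard error are illustrative assumptions): tightening the significance threshold lowers the false-positive risk while raising the false-negative risk.

```python
# Illustrative only: how the choice of significance threshold (alpha)
# trades false positives against false negatives in a one-sided normal test.
from statistics import NormalDist

effect = 1.0  # assumed true effect size, in standard-error units (hypothetical)
for alpha in (0.10, 0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha)            # rejection cutoff
    false_negative = NormalDist(mu=effect).cdf(z_crit)  # chance of missing a real effect
    print(f"alpha={alpha:.2f}: false-positive risk {alpha:.2f}, "
          f"false-negative risk {false_negative:.2f}")
```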


Author(s):  
Robin Andreasen ◽  
Heather Doty

This chapter focuses on the argument from inductive risk in the context of social science research on disparate impact in employment outcomes. It identifies three situations in the testing of scientific theories, insufficiently emphasized in the inductive risk literature, that raise considerations of inductive risk: the choice of significance test, the choice of how to measure disparate impact, and the operationalization of scientific variables. It argues that non-epistemic values have a legitimate role in two of these situations but not in the third, and it uses this observation to build on the discussion of when and under what conditions considerations of inductive risk help to justify a role for non-epistemic values in science.
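To make the first two choices concrete, here is a minimal sketch, not from the chapter: the hiring counts are made up, disparate impact is measured by the selection-rate ratio behind the EEOC “four-fifths” rule, and Fisher's exact test stands in for the choice of significance test.

```python
# Illustrative only: (1) one way to measure disparate impact and
# (2) one choice of significance test over the same 2x2 hiring table.
from scipy.stats import fisher_exact

# Hypothetical outcomes for a protected group (a) and a reference group (b).
hired = {"group_a": 24, "group_b": 48}
rejected = {"group_a": 76, "group_b": 52}

rate_a = hired["group_a"] / (hired["group_a"] + rejected["group_a"])
rate_b = hired["group_b"] / (hired["group_b"] + rejected["group_b"])
impact_ratio = rate_a / rate_b  # four-fifths rule flags ratios below 0.8

table = [[hired["group_a"], rejected["group_a"]],
         [hired["group_b"], rejected["group_b"]]]
_, p_value = fisher_exact(table)

print(f"selection-rate ratio: {impact_ratio:.2f} (flag if < 0.80)")
print(f"Fisher exact p-value: {p_value:.4f}")
```

Swapping in a different measure (e.g., a rate difference) or a different test (e.g., a chi-square test) can change whether the same data count as evidence of disparate impact, which is exactly where the chapter locates the inductive risk.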


Author(s):  
Heather Douglas

After describing the origins and nature of the value-free ideal for science, this chapter details three challenges to the ideal: the descriptive challenge (arising from feminist critiques of science, which led to deeper examinations of social structures in science), the boundary challenge (which questioned whether epistemic values can be distinguished from nonepistemic values), and the normative challenge (which questioned the ideal qua ideal on the basis of inductive risk and scientific responsibility). The chapter then discusses alternative ideals for values in science, including recent arguments regarding epistemic values, arguments distinguishing direct from indirect roles for values, and arguments calling for more attention to getting the values right. Finally, the chapter turns to the many ways in which values influence science and to the importance of a richer understanding of science's place within society for addressing questions about the place of values in science.


2000 ◽  
Vol 67 (4) ◽  
pp. 559-579 ◽  
Author(s):  
Heather Douglas

2013 ◽  
Vol 80 (5) ◽  
pp. 829-839 ◽  
Author(s):  
Matthew J. Brown

2020 ◽  
Vol 28 (6) ◽  
pp. 737-763
Author(s):  
O. Çağlar Dede

I examine how Heather Douglas’s account of values in science applies to the assessment of actual cases of scientific practice. I focus on the case of applied toxicologists’ acceptance of molecular evidence-gathering methods and evidential sources. I demonstrate that a set of social and institutional processes plays a philosophically significant role in changing toxicologists’ inductive risk judgments about different kinds of evidence. I suggest that Douglas’s inductive risk framework can be integrated with a suitable account of evidence, such as Helen Longino’s contextual empiricism, to address the role of social context in cases like the one examined here. I introduce such an integrated account and show how Longino’s contextual empiricism and the inductive risk framework fruitfully complement each other in analyzing the novel aspects of the toxicology case.


Author(s):  
Inmaculada de Melo-Martín ◽  
Kristen Intemann

This chapter considers another factor that plays a role in eroding the public’s trust in science: concerns about the negative influence of nonepistemic values in science, particularly in controversial areas of inquiry with important effects on public policy. It shows that the credibility of scientists can be undermined when the public perceives that scientists have a political agenda or will be biased by their own personal or political values. However, it would be a mistake to assume that the best way to address this problem is to try to eliminate such values from science altogether, because ethical and social values are necessary and important to knowledge production. Consequently, the chapter explores alternative strategies to increase transparency and stakeholder involvement so as to address legitimate concerns about bias and sustain warranted trust in scientific communities.


2021 ◽  
Vol 54 (3) ◽  
pp. 1-18
Author(s):  
Petr Spelda ◽  
Vit Stritecky

As our epistemic ambitions grow, everyday and scientific endeavours alike are becoming increasingly dependent on machine learning (ML). The field rests on a single experimental paradigm: splitting the available data into a training set and a testing set, and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet this part of the contract depends on human inductive predictions, or generalisations, which infer a uniformity between the trained ML model and the targets. The article asks how we justify this contract between humans and machine learning. It argues that the justification becomes a pressing issue when we use ML to reach “elsewhere” in space and time or deploy ML models in non-benign environments. The only viable version of the contract, it argues, can be based on optimality rather than reliability, which cannot be justified without circularity, and it aligns this position with Schurz’s optimality justification. When dealing with inaccessible or unstable ground truths (“elsewhere” and non-benign targets), the optimality justification undergoes a slight change, one that should prompt critical reflection on our epistemic ambitions. The study of ML robustness should therefore involve not only heuristics that yield acceptable accuracy on testing sets but also the justification of human inductive predictions, or generalisations, about the uniformity between ML models and their targets. Without the latter, assumptions about inductive risk minimisation in ML are not fully addressed.
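A minimal sketch of the experimental paradigm the abstract describes, assuming scikit-learn; the synthetic dataset, the logistic-regression model, and the 0.9 acceptance threshold are illustrative assumptions, not from the article.

```python
# Illustrative only: the train/test-split paradigm and the "contract"
# triggered by acceptable held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 25% of the data as the "unseen" testing set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, model.predict(X_test))

# The "contract": acceptable test accuracy is taken to license deployment,
# which tacitly assumes the targets are uniform with the training data.
ACCEPTABLE = 0.90  # hypothetical threshold
verdict = "deploy" if test_accuracy >= ACCEPTABLE else "reject"
print(f"held-out accuracy: {test_accuracy:.3f} -> {verdict}")
```

Nothing in this loop justifies the inference from test accuracy to performance “elsewhere”; that gap is precisely the justification problem the article raises.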

