Inductive Risk
Recently Published Documents


TOTAL DOCUMENTS: 57 (five years: 23)
H-INDEX: 10 (five years: 1)

2022
Author(s): P. D. Magnus

2021, Vol 11 (4)
Author(s): Koray Karaca

I examine the construction and evaluation of machine learning (ML) binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a disease such as cancer or heart disease. I argue that the construction of ML (binary) classification models involves an optimisation process aiming at the minimisation of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is underdetermined by the available data, and that this makes it necessary for ML modellers to make social value judgments in determining the error costs (associated with misclassifications) used in ML optimisation. I thus suggest that the assessment of the inductive risk with respect to the social values of the intended users is an integral part of the construction and evaluation of ML classification models. I also discuss the implications of this conclusion for the philosophical debate concerning inductive risk.
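To make the point about error costs concrete, here is a minimal sketch of cost-sensitive binary classification using scikit-learn. The 10:1 ratio of false-negative to false-positive cost is an illustrative assumption, not a figure from Karaca's paper; it simply shows where a social value judgment enters the optimisation.

```python
# Minimal sketch of cost-sensitive binary classification, illustrating how
# error costs (a value judgment) enter ML optimisation. The 10:1 cost ratio
# for false negatives vs. false positives is an illustrative assumption.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight encodes the judgment that missing a positive case (e.g. an
# undetected disease) is ten times as costly as a false alarm.
clf = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000)
clf.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"false negatives: {fn}, false positives: {fp}")
```

Changing the class_weight dictionary changes which errors the fitted model tolerates, which is exactly the value-laden choice the abstract describes.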


2021, pp. 75-92
Author(s): L. Syd M Johnson

Several types of inferences are common in the diagnosis and prognosis of brain injuries. These inferences, although necessary, introduce epistemic uncertainty. This chapter details the various inferences and considers the concept of inductive risk, introduced by Richard Rudner in the 1950s, and the problem of inductive risk: given uncertainty, what is the appropriate epistemic standard of evidence for accepting a scientific (or medical) hypothesis? Two principles of inductive risk are proposed to tackle the problem of inductive risk present in disorders of consciousness (and other medical contexts): the First Principle calls on us to index epistemic risk-taking to the level of ethical risk, thus constraining acceptable epistemic risk-taking. The Second Principle tells us to index ethical risk-taking to the level of epistemic risk, thus constraining ethical risk-taking to a level commensurate with epistemic uncertainty.
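One way to see how the First Principle constrains epistemic risk-taking is through the standard decision-theoretic threshold for acceptance. The mapping onto Johnson's principles below is my own schematic rendering, not the chapter's, and the cost figures are invented.

```python
# A schematic rendering (not from the chapter) of the First Principle:
# the evidential threshold for accepting a hypothesis rises with the
# ethical cost of wrongly accepting it, relative to the cost of wrongly
# rejecting it. The threshold formula is standard decision theory.
def acceptance_threshold(cost_false_accept: float, cost_false_reject: float) -> float:
    """Probability the evidence must confer on H before acceptance."""
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# Low-stakes case: wrong acceptance and wrong rejection are equally bad.
print(acceptance_threshold(1, 1))    # 0.5
# High-stakes case: wrong acceptance (e.g. a premature diagnosis of
# unconsciousness) is ten times worse, so far stronger evidence is required.
print(acceptance_threshold(10, 1))   # ~0.91
```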


2021, pp. 251-260
Author(s): L. Syd M Johnson

There are numerous contexts, beyond disorders of consciousness, where there is a need for decisive action in the presence of unavoidable epistemic uncertainty. The ethics of uncertainty can help. This chapter examines three complex decisional contexts with intersecting, interacting epistemic and ethical uncertainty. The first is pain. Pain, like consciousness, is a subjectively phenomenal experience, the quality and quantity of which are hard to put into words. Pain sufferers encounter testimonial injustice because of the subjectivity, invisibility, and objective uncertainty of pain. The second context is vaccine research and development, and the emergency approval of COVID-19 vaccines under conditions of time pressure and uncertainty. The third context is research with conscious nonhuman animals. There are known, certain risks of harm to the animals, but the benefits of the research are epistemically uncertain. Judging the permissibility of such research requires considering inductive risks, and the principles of inductive risk.


2021, Vol 11 (3)
Author(s): Tobias Henschen

The argument from inductive risk, as developed by Rudner and others, famously concludes that the scientist qua scientist makes value judgments. The paper aims to show that trust in the soundness of the argument is overrated: philosophers who endorse its conclusion (especially Douglas and Wilholt) fail to refute two of the most important objections raised against it. The first is Jeffrey's objection that the genuine task of the scientist is to assign probabilities to (rather than accept or reject) hypotheses. The second is Levi's objection that the argument is ambiguous between decisions about how to act and decisions about what to believe, that only the former presuppose value judgments, and that, qua scientist, the scientist only needs to decide what to believe.
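The contrast at the heart of Jeffrey's objection can be made vivid with a toy Bayesian calculation (my illustration, not Henschen's): the scientist qua scientist stops at the probability, while acceptance requires a cutoff that is where values enter. All numbers below, including the 0.7 cutoff, are invented.

```python
# A toy contrast between the two roles Jeffrey's objection separates:
# assigning a probability to a hypothesis versus accepting or rejecting it.
def posterior(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Bayes' theorem for a single hypothesis H given one piece of evidence."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

p = posterior(prior=0.2, likelihood_h=0.9, likelihood_not_h=0.3)
# Jeffrey's scientist stops here and reports the probability:
print(f"P(H | E) = {p:.2f}")
# Rudner's scientist must additionally accept or reject, and the cutoff
# (here an arbitrary 0.7) is where value judgments enter:
print("accept H" if p > 0.7 else "withhold acceptance")
```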


2021, pp. 1-26
Author(s): Joyce C. Havstad

More than a decade of exacting scientific research involving paleontological fragments and ancient DNA has lately produced a series of pronouncements about a purportedly novel population of archaic hominins dubbed "the Denisova." The science involved in these matters is both technically stunning and, socially, at times a bit reckless. Here I discuss the responsibilities that scientists incur when they make inductively risky pronouncements about the different relative contributions by Denisovans to the genomes of members of apparent subpopulations of current humans (i.e., the so-called "races"). This science is sensational: it speculates empirically, to the public's delight and entertainment, about scintillating topics such as when humans evolved, where we came from, and who else we were having sex with during our early hominin history. An initial characterization of sensational science emerges from my discussion of the case, as well as a diagnosis of an interactive phenomenon termed amplified inductive risk.


SIMULATION, 2021, pp. 003754972110288
Author(s): Alejandro Cassini

Some philosophers of science have recently argued that the epistemic assessment of complex simulation models, such as climate models, cannot be free of the influence of social values. In their view, the assignment of probabilities to the different hypotheses or predictions that result from simulations presupposes some methodological decisions that rest on value judgments. In this article, I criticize this claim and put forward a Bayesian response to the arguments from inductive risk according to which the influence of social values on the calculation of probabilities is negligible. I conclude that the epistemic opacity of complex simulations, such as climate models, does not preclude the application of Bayesian methods.
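A small sketch of the Bayesian point, with invented numbers: posteriors computed from quite different priors converge as evidence accumulates, which is one way to cash out the claim that the influence of value-laden choices on the probabilities is negligible. The likelihoods and data below are illustrative assumptions, not Cassini's.

```python
# Sketch of the Bayesian point: as evidence accumulates, posteriors computed
# from quite different priors converge, so a value-laden choice of prior has
# a negligible influence. Data and priors are invented for illustration.
def update(prior: float, evidence: list[bool],
           p_e_given_h: float = 0.8, p_e_given_not_h: float = 0.3) -> float:
    p = prior
    for e in evidence:
        l_h = p_e_given_h if e else 1 - p_e_given_h
        l_n = p_e_given_not_h if e else 1 - p_e_given_not_h
        p = l_h * p / (l_h * p + l_n * (1 - p))
    return p

evidence = [True] * 12 + [False] * 3   # 15 simulated observations
for prior in (0.1, 0.5, 0.9):          # three value-laden starting points
    print(f"prior {prior:.1f} -> posterior {update(prior, evidence):.3f}")
```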


2021, Vol 54 (3), pp. 1-18
Author(s): Petr Spelda, Vit Stritecky

As our epistemic ambitions grow, common and scientific endeavours alike are becoming increasingly dependent on machine learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and a testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The article asks how we justify this contract between human and machine learning. It argues that the justification becomes a pressing issue when we use ML to reach "elsewhere" in space and time, or when we deploy ML models in non-benign environments. The only viable version of the contract, the article argues, is one based on optimality (rather than on reliability, which cannot be justified without circularity), a position aligned with Schurz's optimality justification. When dealing with inaccessible or unstable ground truths ("elsewhere" and non-benign targets), the optimality justification undergoes a slight change, one that should prompt critical reflection on our epistemic ambitions. The study of ML robustness should therefore involve not only heuristics that lead to acceptable accuracies on testing sets, but also the justification of human inductive predictions or generalisations about the uniformity between ML models and targets. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.
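The paradigm the article describes can be stated in a few lines of Python. The dataset, model, and 0.95 acceptability threshold below are illustrative assumptions standing in for the "a posteriori contract", not details from the article.

```python
# The single experimental paradigm the article describes, in miniature:
# split the data, train, and let held-out accuracy stand in for the
# "a posteriori contract" about unseen targets. Dataset and threshold
# are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# The paradigm licenses deployment once accuracy is "acceptable" -- but, as
# the authors stress, the test set says nothing by itself about non-benign
# or inaccessible target environments.
print(f"held-out accuracy: {accuracy:.3f}")
print("contract in effect" if accuracy >= 0.95 else "contract withheld")
```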

