epistemic risk
Recently Published Documents


TOTAL DOCUMENTS: 40 (five years: 15)
H-INDEX: 8 (five years: 1)

2021 · pp. 180-210
Author(s): Jason Brennan

This chapter argues against both oligarchic and majoritarian rule by knowers, or epistocracies. Such regimes are necessarily blind to certain interests and perspectives, rendering them epistemically inferior to fully inclusive democracies over the long term. The chapter first considers the classic defense of Chinese-style epistocracy by Daniel Bell and then turns to the more puzzling rule by the knowledgeable 95% defended by Jason Brennan. While Bell’s Chinese model is much more vulnerable to epistemic failure due to the blind spots it structurally builds into its decision process, even Brennan’s majoritarian epistocracy takes the unjustifiable epistemic risk of silencing what could be the most relevant voices on crucial issues.


2021 · pp. 75-92
Author(s): L. Syd M Johnson

Several types of inferences are common in the diagnosis and prognosis of brain injuries. These inferences, although necessary, introduce epistemic uncertainty. This chapter details the various inferences and considers the concept of inductive risk, introduced by Richard Rudner in the 1950s, and the problem of inductive risk: given uncertainty, what is the appropriate epistemic standard of evidence for accepting a scientific (or medical) hypothesis? Two principles of inductive risk are proposed to tackle the problem of inductive risk present in disorders of consciousness (and other medical contexts): the First Principle calls on us to index epistemic risk-taking to the level of ethical risk, thus constraining acceptable epistemic risk-taking. The Second Principle tells us to index ethical risk-taking to the level of epistemic risk, thus constraining ethical risk-taking to a level commensurate with epistemic uncertainty.
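The First Principle can be illustrated with a small sketch. This is my gloss, not Johnson's formal statement: the idea is that the evidential threshold for accepting a hypothesis rises with the ethical stakes of being wrong, so higher-stakes decisions demand stronger evidence. The function names and the stakes scale are hypothetical.

```python
# Illustrative sketch (my gloss, not Johnson's formal statement): index
# acceptable epistemic risk-taking to the level of ethical risk by raising
# the required credence as the ethical stakes of error grow.

def evidence_threshold(base=0.5, ethical_stakes=1.0):
    """Map ethical stakes (>= 1) to a required credence in [base, 1).
    At stakes 1 the ordinary threshold applies; as stakes grow, the
    threshold approaches certainty."""
    return 1 - (1 - base) / ethical_stakes

def may_accept(credence, ethical_stakes):
    """Accept the hypothesis only if the credence clears the
    stakes-indexed threshold."""
    return credence >= evidence_threshold(ethical_stakes=ethical_stakes)

# A 0.9 credence clears the bar for a modest-stakes diagnosis...
routine = may_accept(0.9, ethical_stakes=2)   # threshold 0.75
# ...but not for a decision with grave ethical consequences.
grave = may_accept(0.9, ethical_stakes=10)    # threshold 0.95
```

The same evidence licenses acceptance in one context and not the other, which is exactly the constraint the First Principle imposes.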


Religions · 2021 · Vol 12 (6) · pp. 399
Author(s): Daniel Bonevac

John Calvin holds that the fall radically changed humanity’s moral and epistemic capacities. Recognizing that should lead Christian philosophers to see that philosophical questions require at least two sets of answers: one reflecting our nature and capacities before the fall, and the other reflecting our nature and capacities after the fall. Our prelapsarian knowledge of God, the right, and the good is direct and noninferential; our postlapsarian knowledge of them is mostly indirect, inferential, and filled with moral and epistemic risk. Only revelation can move us beyond fragmentary and indeterminate moral and theological knowledge.


2021 · Vol 19 (2)
Author(s): Bastian Steuwer

How should contractualists assess the permissibility of risky actions? Both main views on the question, ex ante and ex post, fail to distinguish between different kinds of risk. In this article, I argue that this overlooks a third alternative that I call “objective ex ante contractualism”. Objective ex ante contractualism replaces the discounting of complaints by epistemic risk with discounting by objective risk. I further argue in favor of this new view. Objective ex ante contractualism provides the best model of justifiability to each.


Author(s): Mark Timmons

Oxford Studies in Normative Ethics features new work in the field of normative ethical theory. This tenth volume includes chapters on the following topics: defending deontology, justice as a personal virtue, willful ignorance and moral responsibility, moral obligation and epistemic risk, the so-called numbers problem in ethics, rule consequentialism, moral worth, respect and rational agency, a Kantian solution to the trolley problem, virtue and character, and the limits of virtue ethics…


Author(s): Zoë Johnson King, Boris Babic

This chapter concerns pernicious predictive inferences: taking someone to be likely to possess a socially disvalued trait based on statistical information about the prevalence of that trait within a social group to which she belongs. Some scholars have argued that pernicious predictive inferences are morally prohibited, but are sometimes epistemically required, leaving us with a tragic conflict between the requirements of epistemic rationality and those of morality. Others have responded by arguing that pernicious predictive inferences are sometimes epistemically prohibited. The present chapter takes a different approach, considering the sort of reluctance to draw pernicious predictive inferences that seems morally praiseworthy and vindicating its epistemic status. We argue that, even on a simple, orthodox Bayesian picture of the requirements of epistemic rationality, agents must consider the costs of error—including the associated moral and political costs—when forming and revising their credences. Our attitudes toward the costs of error determine how “risky” different credences are for us, and our epistemic states are justified in part by our attitudes toward epistemic risk. Thus, reluctance to draw pernicious predictive inferences need not be epistemically irrational, and the apparent conflict between morality and epistemic rationality is typically illusory.
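The Bayesian point can be made concrete with a minimal sketch. This is an illustrative toy, not the authors' formal model: two agents share the same prior and evidence, and so the same credence, yet their differing attitudes toward the moral cost of a false acceptance rationally license different verdicts on whether the predictive inference may be drawn. All numbers and function names are hypothetical.

```python
# Illustrative sketch (not the authors' formal model): a Bayesian agent
# updates a credence, then decides whether to act on the predictive
# inference by weighing the expected costs of error.

def posterior(prior, likelihood_true, likelihood_false):
    """Bayes' rule for a binary hypothesis."""
    num = prior * likelihood_true
    return num / (num + (1 - prior) * likelihood_false)

def should_accept(credence, cost_false_accept, cost_false_reject):
    """Accept only if the expected cost of accepting is lower:
    expected cost of accepting = (1 - credence) * cost_false_accept
    expected cost of rejecting = credence * cost_false_reject"""
    return (1 - credence) * cost_false_accept < credence * cost_false_reject

# Same evidence, hence the same credence for both agents...
c = posterior(prior=0.5, likelihood_true=0.8, likelihood_false=0.4)

# ...but different attitudes toward the moral cost of a false acceptance
# make the inference "riskier" for one agent than the other.
low_stakes = should_accept(c, cost_false_accept=1.0, cost_false_reject=1.0)
high_stakes = should_accept(c, cost_false_accept=10.0, cost_false_reject=1.0)
```

On this picture, reluctance to draw the inference in the high-stakes case is not a departure from Bayesian rationality but an expression of it, once the costs of error enter the evaluation.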


2020 · pp. 1-21
Author(s): Justin B. Biddle

Abstract Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning (ML) systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems requires human decisions that involve tradeoffs that reflect values. In many cases, these decisions have significant—and, in some cases, disparate—downstream impacts on human lives. After examining an influential court decision regarding the use of proprietary recidivism-prediction algorithms in criminal sentencing, Wisconsin v. Loomis, the paper provides three recommendations for the use of ML in penal systems.
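One of the design decisions the paper points to can be sketched in a few lines. This is a generic illustration with hypothetical scores, not a reconstruction of any real recidivism tool: choosing the decision threshold for a risk-score classifier trades false positives against false negatives, so the choice encodes a value judgment about which error matters more.

```python
# Illustrative sketch (hypothetical data, not a real recidivism system):
# the decision threshold applied to risk scores is a design choice that
# trades false positives against false negatives.

def confusion(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical risk scores and observed outcomes (1 = reoffended).
scores = [0.2, 0.4, 0.5, 0.6, 0.7, 0.9]
labels = [0,   0,   1,   0,   1,   1]

# A lenient threshold avoids flagging the innocent but misses reoffenders;
# a strict threshold does the reverse. Neither is value-neutral.
lenient = confusion(scores, labels, threshold=0.8)
strict = confusion(scores, labels, threshold=0.3)
```

Since no threshold minimizes both error types at once, whoever sets it is making exactly the kind of value-laden tradeoff the paper describes, with downstream impacts on the people scored.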

