Epistemic Value and the Jamesian Goals

Author(s):  
Sophie Horowitz

William James famously tells us that there are two main goals for rational believers: believing truth and avoiding error. Horowitz argues that epistemic consequentialism, particularly in its embodiment in epistemic utility theory, seems well positioned to explain how epistemic agents might permissibly weight these goals differently. After all, practical versions of consequentialism render it permissible for agents with different goals to act differently in the same situation. Nevertheless, Horowitz argues that epistemic consequentialism does not allow for this kind of permissivism, and that this reveals a deep disanalogy between decision theory and the formally similar epistemic utility theory. This raises the question of whether epistemic utility theory is a genuinely consequentialist theory at all.

Author(s):  
James M. Joyce

Joyce focuses on trade-off objections to epistemic consequentialism. Such objections are similar to familiar objections from ethics where an intuitively wrong action (e.g., killing a healthy patient) leads to a net gain in value (e.g., saving five other patients). The objection to the epistemic consequentialist concerns cases where adopting an intuitively wrong belief leads to a net gain in epistemic value. Joyce defends the epistemic consequentialist against such objections by denying that his version of epistemic utility theory is properly thought of as a species of epistemic consequentialism, and by arguing that, given this, it does not condone the problematic trade-offs. His argument turns on distinguishing between treating degrees of belief as final ends and treating them as a basis for estimation.


Episteme, 2015, Vol 13 (3), pp. 253-268
Author(s):  
Richard Pettigrew

ABSTRACT: Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James' claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states, that is, full belief, full disbelief, and suspension of judgment. I then show how the account plays out for graded doxastic states, that is, credences. The latter part of the paper thus answers a question left open in Pettigrew (2014).
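To give a sense of the formal shape such an account takes (a sketch in the standard Jamesian epistemic-utility style, not a reproduction of Pettigrew's own definitions), assign a positive reward R to a correct categorical attitude, a positive penalty W to an incorrect one, and nothing to suspension:

\[
u(\text{believe } p, w) =
\begin{cases}
R & \text{if } p \text{ is true at } w,\\
-W & \text{if } p \text{ is false at } w,
\end{cases}
\qquad
u(\text{disbelieve } p, w) =
\begin{cases}
-W & \text{if } p \text{ is true at } w,\\
R & \text{if } p \text{ is false at } w,
\end{cases}
\qquad
u(\text{suspend on } p, w) = 0.
\]

The ratio of W to R then encodes how heavily an agent weights "Shun error!" against "Believe truth!", which is exactly the balance the paper aims to make precise.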


Analysis, 2019, Vol 79 (4), pp. 658-669
Author(s):  
Florian Steinberger

Abstract: Epistemic utility theory (EUT) is generally coupled with veritism, the view that truth is the sole fundamental epistemic value. Veritism, when paired with EUT, entails a methodological commitment: norms of epistemic rationality are justified only if they can be derived from considerations of accuracy alone. According to EUT, then, believing truly has epistemic value, while believing falsely has epistemic disvalue. This raises the question of how the rational believer should balance the prospect of true belief against the risk of error. A strong intuitive case can be made for a kind of epistemic conservatism: we should disvalue error more than we value true belief. I argue that none of the ways in which advocates of veritist EUT have sought to motivate conservatism can be squared with their methodological commitments. Absent any such justification, they must either abandon their most central methodological principle or else adopt a permissive line with respect to epistemic risk.
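As a worked illustration of the balancing problem described here (my own example, not one drawn from the paper): with a Jamesian utility of R for a true belief, -W for a false one, and 0 for suspension, believing p at credence x has expected epistemic utility

\[
xR - (1 - x)W,
\]

which exceeds the value of suspending exactly when

\[
x > \frac{W}{R + W}.
\]

Disvaluing error more than true belief (W > R) pushes this threshold above 1/2; with R = 1 and W = 3, for instance, belief is favoured only above credence 3/4. This is the conservative weighting whose purely accuracy-based justification the paper disputes.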


An important issue in epistemology concerns the source of epistemic normativity. Epistemic consequentialism maintains that epistemic norms are genuine norms in virtue of the way in which they are conducive to epistemic value, whatever epistemic value may be. So, for example, the epistemic consequentialist might say that it is a norm that beliefs should be consistent in virtue of the fact that holding consistent beliefs is the best way to achieve the epistemic value of accuracy. Thus epistemic consequentialism is structurally similar to the familiar family of consequentialist views in ethics. Recently, philosophers from both formal epistemology and traditional epistemology have shown interest in such a view. In formal epistemology, there has been particular interest in thinking of epistemology as a kind of decision theory where, instead of maximizing expected utility, one maximizes expected epistemic utility. In traditional epistemology, there has been particular interest in various forms of reliabilism about justification and whether such views are analogous to, and so face similar problems to, versions of rule consequentialism in ethics. This volume presents some of the most recent work on these topics, as well as others related to epistemic consequentialism, by authors who are sympathetic to the view and by those who are critical of it.
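As a toy illustration of the formal-epistemology idea just described (my own sketch, not code or notation from the volume), one can treat a credence in a proposition p as the "act" and score it with an epistemic utility function such as the negative Brier score, then pick the credence that maximizes expected epistemic utility:

def epistemic_utility(credence, truth_value):
    # Negative Brier score: 0 is best (credence matches the truth value), -1 is worst.
    return -(truth_value - credence) ** 2

def expected_epistemic_utility(credence, prob_p):
    # Expected epistemic utility of holding `credence` in p, computed with
    # probability `prob_p` that p is true.
    return (prob_p * epistemic_utility(credence, 1)
            + (1 - prob_p) * epistemic_utility(credence, 0))

prob_p = 0.7
candidates = [i / 100 for i in range(101)]
best = max(candidates, key=lambda c: expected_epistemic_utility(c, prob_p))
print(best)  # 0.7

Because the Brier score is strictly proper, the credence that maximizes expected epistemic utility coincides with the probability used to compute the expectation, which is one reason accuracy-first epistemologists favour such measures.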


Author(s):  
Richard Pettigrew

Pettigrew focuses on trade-off objections to epistemic consequentialism. Such objections are similar to familiar objections from ethics where an intuitively wrong action (e.g., killing a healthy patient) leads to a net gain in value (e.g., saving five other patients). The objection to the epistemic consequentialist concerns cases where adopting an intuitively wrong belief leads to a net gain in epistemic value. Pettigrew defends the epistemic consequentialist against such objections by conceding that consequentialism's verdicts in these cases are counterintuitive, while offering an error theory for why the intuitions in question do not show the view to be false.


Author(s):  
Christopher J. G. Meacham

Meacham takes aim at the epistemic utility theory picture of epistemic norms, on which epistemic utility functions measure the value of degrees of belief and the norms encode ways of adopting non-dominated degrees of belief. He focuses on a particularly popular subclass of such views where epistemic utility is determined solely by the accuracy of degrees of belief. Meacham argues that these epistemic utility arguments for norms (i) are not compatible with each other (so not all of them can be correct), (ii) do not rely solely on accuracy considerations, and (iii) are unable to capture intuitive norms about how we ought to respond to evidence.
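To make concrete what adopting "non-dominated degrees of belief" amounts to (a toy version of the familiar accuracy-dominance argument, not an example of Meacham's), consider credences in p and in not-p that violate the probability axioms; under the Brier measure of inaccuracy they are strictly more inaccurate than some coherent alternative in every possible world:

def brier_inaccuracy(credences, world):
    # Sum of squared distances between each credence and the truth value (1 or 0)
    # that the corresponding proposition takes in `world`.
    return sum((truth - c) ** 2 for c, truth in zip(credences, world))

worlds = [(1, 0), (0, 1)]   # p true / p false, for the pair (p, not-p)
incoherent = (0.6, 0.6)     # credences in p and not-p sum to 1.2
coherent = (0.5, 0.5)       # a probabilistically coherent alternative

for w in worlds:
    print(w, brier_inaccuracy(incoherent, w), brier_inaccuracy(coherent, w))
# In both worlds the coherent credences are less inaccurate (0.50 vs 0.52),
# so the incoherent credences are accuracy-dominated.

Meacham's complaint is not with toy cases like this, but with whether the various epistemic utility arguments built on such machinery cohere with one another, really rest on accuracy alone, and deliver plausible norms about responding to evidence.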


2020, pp. 344-360
Author(s):  
Daniel Y. Elstein ◽  
C.S.I. Jenkins

Friends of Wright-entitlement cannot appeal to direct epistemic consequentialism (believe or accept what maximizes expected epistemic value) in order to account for the epistemic rationality of accepting Wright-entitled propositions. The tenability of direct consequentialism is undermined by the “Truth Fairy”: a powerful being who offers you great epistemic reward (in terms of true beliefs) if you accept a proposition p for which you have evidence neither for nor against. However, this chapter argues that a form of indirect epistemic consequentialism seems promising as a way to deal with the Truth Fairy problem. The relevant form of indirect consequentialism accommodates evidentialism but allows for exceptions in the case of anti-sceptical hypotheses. Since these are the kind of propositions to which Wright-entitlement is supposed to apply—i.e. cornerstone propositions—indirect consequentialism is entitlement-friendly.


Author(s):  
Kazuhisa Takemura

Behavioral decision theory is a descriptive psychological theory of human judgment, decision making, and behavior that can be applied to political science. It is closely related to behavioral economics and behavioral finance: behavioral economics is an attempt to understand actual human economic behavior, and behavioral finance studies human behavior in financial markets. Research on people's decision making represents an important part of these fields, and various aspects of it overlap with the scope of behavioral decision theory. Behavioral decision theory focuses on decision-making phenomena that are broadly divisible into decisions under certainty, decisions under risk, and decisions under uncertainty, where uncertainty includes ambiguity and ignorance. What theoretical frameworks could be used to explain these phenomena? Although numerous theories related to decision making have been developed, they are often broadly divided into two types: normative theory, which is intended to support rational decision making, and descriptive theory, which describes how people actually make decisions. Both kinds of theory reflect the nature of actual human decision making to a degree, and even descriptive theory seeks a certain level of rationality in actual human decision making, so the two are not always sharply distinguishable. Nonetheless, a major example of normative theory is the system of utility theory that is widely used in economics, while a salient example of descriptive theory is behavioral decision theory itself. Utility theory has numerous variations, such as linear and nonlinear utility theories; most have established axioms and mathematically developed principles. In contrast, behavioral decision theory covers a considerably wide range of theoretical expressions, including theories that have been developed mathematically (such as prospect theory) and those expressed only in natural language (such as multiattribute decision-making process models). Behavioral decision theory has integrated the implications of normative theory, descriptive theory, and prescriptive theory, which aims to help people make better decisions.
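For a flavour of the "mathematically developed" descriptive theories mentioned above, prospect theory evaluates a prospect not by its expected utility but by a value function v defined over gains and losses relative to a reference point, combined with a weighting function w applied to probabilities; in a common parameterization (estimates of roughly α ≈ β ≈ 0.88 and λ ≈ 2.25 are widely cited):

\[
V = \sum_i w(p_i)\, v(x_i), \qquad
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0,\\
-\lambda (-x)^{\beta} & \text{if } x < 0.
\end{cases}
\]

The loss-aversion parameter λ > 1 captures the descriptive finding that losses loom larger than equivalent gains, a pattern that standard expected utility theory by itself does not predict.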

