ON THE EPISTEMIC SIGNIFICANCE OF EVIDENCE YOU SHOULD HAVE HAD

Episteme ◽  
2016 ◽  
Vol 13 (4) ◽  
pp. 449-470 ◽  
Author(s):  
Sanford C. Goldberg

Elsewhere I and others have argued that evidence one should have had can bear on the justification of one's belief, in the form of defeating one's justification. In this paper, I am interested in knowing how evidence one should have had (on the one hand) and one's higher-order evidence (on the other) interact in determinations of the justification of belief. In doing so I aim to address two types of scenario that previous discussions have left open. In one type of scenario, there is a clash between a subject's higher-order evidence and the evidence she should have had: S's higher-order evidence is misleading as to the existence or likely epistemic bearing of further evidence she should have. In the other, while there is further evidence S should have had, this evidence would only have offered additional support for S's belief that p. The picture I offer derives from two “epistemic ceiling” principles linking evidence to justification: one's justification for the belief that p can be no higher than it is on one's total evidence, nor can it be higher than what it would have been had one had all of the evidence one should have had. Together, these two principles entail what I call the doctrine of Epistemic Strict Liability: insofar as one fails to have evidence one should have had, one is epistemically answerable to that evidence whatever reasons one happened to have regarding the likely epistemic bearing of that evidence. I suggest that such a position can account for the battery of intuitions elicited in the full range of cases I will be considering.
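As a schematic rendering (the notation is mine, not Goldberg's), the two ceiling principles can be combined into a single bound. Writing $j(p \mid E)$ for the degree of justification a body of evidence $E$ confers on $p$, $E$ for the agent's total evidence, and $E^{*}$ for the evidence she should have had:

$$ J(p) \;\le\; \min\bigl\{\, j(p \mid E),\; j(p \mid E \cup E^{*}) \,\bigr\}. $$

On this rendering, Epistemic Strict Liability amounts to the claim that the second term of the minimum applies whether or not the agent had any indication that $E^{*}$ existed or mattered.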

2018 ◽  
pp. 761-769
Author(s):  
Olga A. Ginatulina ◽  

The article analyzes the phenomenon of the document from the standpoint of value theory. It begins by posing the problem of the document's contradictory axiological status in modern society: on the one hand, the document is objectively important, since it accomplishes certain practical tasks; on the other hand, documents and document management receive a negative assessment in public consciousness. To make sense of this situation, the article analyzes the concept of ‘value’ and concludes that certain objects of the material world receive this status if they are included in public practice and promote the progress of society or human development. Although this abstract step toward a better understanding of values does not provide a comprehensive answer to the question of the axiological nature of the document, it does indicate a direction of thought: the analysis of the development of human nature. The document is an artifact that objectifies and reifies a certain side of human nature. Human nature is a heterogeneous phenomenon and exists on two levels. The first, abstract level is represented by the human race and embodies the full range of universal features of humanity. The second level is the concrete embodiment of this generic, universal human nature in historically specific types of individuals. Between these two levels there is a contradiction: man by nature tends toward universality, yet the realization of his nature is limited by the framework of a historical era and develops only one side of the race. Accordingly, the document has value only within a certain historical stage; it conflicts with the trend toward the universal development of human nature and thus receives a negative evaluation. However, the emergence of a new type of work (general scientific work) will help overcome this alienation between the generic and the historically limited individual human being, and will therefore greatly affect the nature of the document, making it more ‘human’ and thereby increasing its value in the eyes of society.


Author(s):  
Declan Smithies

Chapter 10 explores a puzzle about epistemic akrasia: if you can have misleading higher-order evidence about what your evidence supports, then your total evidence can make it rationally permissible to be epistemically akratic. Section 10.1 presents the puzzle and three options for solving it: Level Splitting, Downward Push, and Upward Push. Section 10.2 argues that we should opt for Upward Push: you cannot have misleading higher-order evidence about what your evidence is or what it supports. Sections 10.3 and 10.4 defend Upward Push against David Christensen’s objection that it licenses irrational forms of dogmatism in ideal and nonideal agents alike. Section 10.5 responds to his argument that misleading higher-order evidence generates rational dilemmas in which you’re guaranteed to violate one of the ideals of epistemic rationality. Section 10.6 concludes with some general reflections on the nature of epistemic rationality and the role of epistemic idealization.


2019 ◽  
pp. 298-316
Author(s):  
Alex Worsnip

It’s fairly uncontroversial that you can sometimes get misleading higher-order evidence about what your first-order evidence supports. What is more controversial is whether this can result in a situation where your total evidence is misleading about what your total evidence supports: that is, where your total evidence is misleading about itself. It’s hard to arbitrate on purely intuitive grounds whether any particular example of misleading higher-order evidence is an example of misleading total evidence. This chapter tries to make progress by offering a simple mathematical model that suggests that higher-order evidence will tend to bear more strongly on higher-order propositions about what one’s evidence supports than it does on the corresponding first-order propositions; and then by arguing that given this, it is plausible that there will be some cases of misleading total evidence.
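A toy Bayesian illustration of the "bears more strongly at the higher order" claim (my own simplification, not the chapter's actual model): let $S$ be the higher-order proposition that one's evidence supports $p$, and suppose the higher-order evidence $e$ is relevant to $p$ only by way of $S$, so that $P(p \mid S, e) = P(p \mid S)$ and $P(p \mid \neg S, e) = P(p \mid \neg S)$. Then

$$ P(p \mid e) - P(p) \;=\; \bigl(P(p \mid S) - P(p \mid \neg S)\bigr)\,\bigl(P(S \mid e) - P(S)\bigr), $$

so the shift that $e$ induces in one's credence in $p$ is the shift it induces in one's credence in $S$, attenuated by a factor whose absolute value is at most 1. Under these (assumed) screening-off conditions, higher-order evidence moves the higher-order proposition at least as much as it moves the first-order one.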


2021 ◽  
pp. 1-29
Author(s):  
Jon Truby ◽  
Rafael Dean Brown ◽  
Imad Antoine Ibrahim ◽  
Oriol Caudevilla Parellada

This paper argues for a sandbox approach to regulating artificial intelligence (AI) to complement a strict liability regime. The authors argue that sandbox regulation is an appropriate complement to a strict liability approach, given the need to balance a regulatory approach that aims to protect people and society on the one hand against fostering innovation, in light of the constant and rapid developments in the AI field, on the other. The authors analyse the benefits of sandbox regulation when used as a supplement to a strict liability regime, which by itself creates a chilling effect on AI innovation, especially for small and medium-sized enterprises. The authors propose a regulatory safe space in the AI sector through sandbox regulation, an idea already embraced by European Union regulators, within which AI products and services can be tested under safeguards.


2011 ◽  
Vol 21 ◽  
pp. 674 ◽  
Author(s):  
Sandhya Sundaresan

The paper focuses on an interesting form of (person) indexical shift in the Dravidian language Tamil which surfaces as 1SG agreement marking in a clause embedded under a speech predicate. I show that this agreement is an instance of indexical shift and label it "monstrous agreement". However, I demonstrate that its full range of empirical properties cannot be adequately explained by the major analyses of indexical shift in the literature. The bulk of these, I argue, in addition to being predominantly semantic in spirit, and thus ill-equipped to deal with a morphosyntactic phenomenon like agreement, also involve two core misconceptions regarding indexicality vs. logophoricity on the one hand and speech vs. attitude predicates on the other. I propose that these core assumptions be strongly re-evaluated from first principles and that syntactic and typological clues on the subject be paid more heed. I propose a new analysis of the Tamil paradigms which derives indexical shift within an enriched grammatical model involving contextual features instantiated in a structurally articulated cartographic left periphery.


2019 ◽  
pp. 105-123
Author(s):  
Sophie Horowitz

Evidence can be misleading: it can rationalize raising one’s confidence in false propositions, and lowering one’s confidence in the truth. But can a rational agent know that her total evidence supports a (particular) falsehood? It seems not: if we could see ahead of time that our evidence supported a false belief, then we could avoid believing what our evidence supported, and hence avoid being misled. So, it seems, evidence cannot be predictably misleading. This chapter develops a new problem for higher-order evidence: it is predictably misleading. It then examines a radical strategy for explaining higher-order evidence, according to which there are two distinct epistemic norms at work in the relevant cases. Finally, the chapter suggests that mainstream accounts of higher-order evidence may be able to answer the challenge after all. But to do so, they must deny that epistemic rationality requires believing what is likely given one’s evidence.


Author(s):  
J. M. Rico ◽  
J. J. Cervantes ◽  
A. Tadeo ◽  
J. Gallardo ◽  
L. D. Aguilera ◽  
...  

In recent years, there has been a good deal of controversy about the application of infinitesimal kinematics to the mobility determination of kinematic chains. On the one hand, several publications have promoted the use of velocity analysis, without any additional results, for determining the mobility of kinematic chains. On the other hand, the authors of this contribution have received several reviews from researchers who hold the strong belief that no infinitesimal method can correctly determine the mobility of kinematic chains. This contribution attempts to show that velocity analysis by itself cannot correctly determine the mobility of kinematic chains. However, velocity and higher-order analyses, coupled with some recent results about the Lie algebra, se(3), of the Euclidean group, SE(3), can correctly determine the mobility of kinematic chains.
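For readers unfamiliar with the algebraic machinery the abstract invokes, the following is standard background rather than the authors' specific results: an element of $\mathfrak{se}(3)$, a twist $\xi = (\omega, v)$ with angular part $\omega \in \mathbb{R}^3$ and linear part $v \in \mathbb{R}^3$, can be written as the $4 \times 4$ matrix

$$ \hat{\xi} = \begin{bmatrix} \hat{\omega} & v \\ 0 & 0 \end{bmatrix}, \qquad \hat{\omega} = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}, $$

and the Lie bracket of two twists is the matrix commutator $[\hat{\xi}_1, \hat{\xi}_2] = \hat{\xi}_1 \hat{\xi}_2 - \hat{\xi}_2 \hat{\xi}_1$. Whether the joint screws of a chain close under this bracket is precisely the kind of higher-order information that a first-order (velocity-only) analysis cannot capture, which is plausibly why the authors pair velocity analysis with results about $\mathfrak{se}(3)$.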


2021 ◽  
Vol 8 (0) ◽  
Author(s):  
Aleks Knoks

Thinking about misleading higher-order evidence naturally leads to a puzzle about epistemic rationality: If one’s total evidence can be radically misleading regarding itself, then two widely-accepted requirements of rationality come into conflict, suggesting that there are rational dilemmas. This paper focuses on an often misunderstood and underexplored response to this (and similar) puzzles, the so-called conflicting-ideals view. Drawing on work from defeasible logic, I propose understanding this view as a move away from the default meta-epistemological position according to which rationality requirements are strict and governed by a strong, but never explicitly stated logic, toward the more unconventional view, according to which requirements are defeasible and governed by a comparatively weak logic. When understood this way, the response is not committed to dilemmas.
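The contrast between strict and defeasible requirements can be made concrete with a small sketch. The encoding below is mine, not Knoks's formal system: it only shows that when two requirements issue conflicting verdicts, a strict reading yields a dilemma while a defeasible reading with a priority ordering does not.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    demand: str  # the attitude the requirement calls for in the misleading-HOE case

# Two stylized requirements that clash in cases of misleading higher-order evidence.
respect_hoe = Requirement("respect your higher-order evidence", "suspend judgment on p")
follow_foe = Requirement("believe what your first-order evidence supports", "believe p")

def strict_verdict(requirements):
    """Strict reading: every requirement is in force, so conflicting demands yield a dilemma."""
    demands = {r.demand for r in requirements}
    return demands.pop() if len(demands) == 1 else "dilemma"

def defeasible_verdict(requirements, priority):
    """Defeasible reading: the highest-priority requirement defeats the others."""
    return max(requirements, key=lambda r: priority[r.name]).demand

requirements = [respect_hoe, follow_foe]
print(strict_verdict(requirements))  # -> dilemma
print(defeasible_verdict(requirements, priority={
    respect_hoe.name: 2,  # this ordering is purely illustrative
    follow_foe.name: 1,
}))  # -> suspend judgment on p
```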


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1609
Author(s):  
Carlos Granero-Belinchón ◽  
Stéphane G. Roux ◽  
Nicolas B. Garnier

We introduce an index based on information theory to quantify the stationarity of a stochastic process. The index compares, on the one hand, the information contained in the increment of the process at time t over the time scale τ with, on the other hand, the extra information in the variable at time t that is not present at time t−τ. By varying the scale τ, the index can explore a full range of scales. We thus obtain a multi-scale quantity that is not restricted to the first two moments of the density distribution, nor to the covariance, but that probes the complete dependences in the process. This index indeed provides a measure of the regularity of the process at a given scale. Not only is this index able to indicate whether a realization of the process is stationary, but its evolution across scales also indicates how rough and non-stationary it is. We show how the index behaves for various synthetic processes proposed to model fluid turbulence, as well as on experimental fluid turbulence measurements.
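A minimal numerical sketch of the two quantities being compared, under assumptions of mine about their exact form (histogram entropy estimators, and an index taken as the difference between the increment entropy and the conditional entropy H(x_t | x_{t−τ}); the paper's estimators and normalization may differ):

```python
import numpy as np

def entropy(samples, bins=64):
    """Histogram estimate of the Shannon entropy (in nats) of a 1D sample."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def joint_entropy(x, y, bins=64):
    """Histogram estimate of the joint Shannon entropy H(x, y)."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def stationarity_index(x, tau, bins=64):
    """Compare the information in the increment at scale tau with the extra
    information in x_t that is not already present in x_{t-tau}."""
    x_t, x_past = x[tau:], x[:-tau]
    h_increment = entropy(x_t - x_past, bins)                                  # info in the increment
    h_conditional = joint_entropy(x_t, x_past, bins) - entropy(x_past, bins)   # H(x_t | x_{t-tau})
    return h_increment - h_conditional

# Illustration: a stationary white-noise signal vs. a non-stationary random walk.
rng = np.random.default_rng(0)
noise = rng.standard_normal(2**14)
walk = np.cumsum(noise)
for tau in (1, 8, 64):
    print(tau, round(stationarity_index(noise, tau), 3), round(stationarity_index(walk, tau), 3))
```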


1997 ◽  
Vol 57 ◽  
pp. 81-92
Author(s):  
Sarina Uilenberg

The present investigation was carried out in Holambra, a community of Dutch origin in Brazil. The goal was to analyze the codeswitching between Dutch and Portuguese practised by the immigrants in their everyday speech, taking into account both grammatical and functional aspects. Moreover, the codeswitching of the first and second generations was compared, focusing on the different motives, the size of switched constituents, and the type of codeswitching. Previous theories suggested a relationship between grammatical characteristics on the one hand, and functions of individual switches, attitudes towards the languages and communities involved, and language ability on the other hand. In this article, results of the three analyses are presented and the language use and codeswitching of the different generations in this community are described. The results show an intermediate generation consisting of the most balanced bilinguals, who codeswitch often and without difficulty, using the full range of both languages. The first and second generations, however, show less diversity in their codeswitching, mainly switching nouns. Finally, suggestions for future investigation are presented.

