The symmetry problem for testimonial conservatism

Synthese ◽  
2021 ◽  
Author(s):  
Matthew Jope

Abstract: A prima facie plausible and widely held view in epistemology is that the epistemic standards governing the acquisition of testimonial knowledge are stronger than the epistemic standards governing the acquisition of perceptual knowledge. Conservatives about testimony hold that we need prior justification to take speakers to be reliable but recognise that the corresponding claim about perception is practically a non-starter. The problem for conservatives is how to establish theoretically significant differences between testimony and perception that would support asymmetrical epistemic standards. In this paper I defend the theoretical symmetry of testimony and perception on the grounds that there are no good reasons for taking these two belief-forming methods to have significant theoretical differences. I identify the four central arguments in defence of asymmetry and show that in each case either they fail to establish the difference that they purport to establish or they establish a difference that is not theoretically significant.

Current understanding of the formation of circumstellar discs as a natural accompaniment to the process of low-mass star formation is briefly reviewed. Models of the thermal emission from the dust discs around the prototype stars α Lyr, α PsA, β Pic and ε Eri are discussed, which indicate that the central regions of three of these discs are almost devoid of dust within radii ranging between 17 and 26 AU, with the temperature of the hottest dust lying between about 115 and 210 K. One possible explanation of the dust-free zones is the presence of a planet at the inner boundary of each cloud that sweeps up grains crossing its orbit. The discs have outer radii that range between about 250 and 800 AU and have dust masses that are unlikely to exceed about 300 Earth masses. Assuming a gas:dust ratio of 100:1 for the pre-main-sequence disc, this corresponds to a mass of ca. 0.1 M☉, comparable to that of the pre-main-sequence star HL Tau. The colour, diameter and thickness of the optical image of β Pic, obtained by coronagraphic techniques, have provided further information on the size, radial distribution of number density and orbital inclination of the grains. The difference in surface brightness on the two sides of the disc is puzzling, but might be explained if the grains are elongated and aligned by the combined effects of a stellar wind and a magnetic field of spiral configuration. Finally, we discuss the orbital evolution and lifetimes of particles in these discs, which are governed primarily by radiation pressure, Poynting-Robertson drag and grain-grain collisions. Although replenishment of these discs may be occurring, for example by grains ejected from comets, discs of initial radius ca. 1000 AU can survive Poynting-Robertson depletion over the stellar age and there is no prima facie evidence as yet in favour of a balance between sources and sinks of dust.
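
As a rough arithmetic check on the disc-mass figure quoted above, the short Python sketch below reproduces the ca. 0.1 solar-mass estimate from the 300 Earth-mass dust ceiling and the assumed 100:1 gas:dust ratio; only those two inputs come from the text, and the mass constants are standard values.

# Check: total disc mass implied by up to ~300 Earth masses of dust
# and an assumed gas:dust mass ratio of 100:1 (both figures from the text).
M_EARTH_KG = 5.972e24      # Earth mass in kg (standard value)
M_SUN_KG = 1.989e30        # solar mass in kg (standard value)

dust_mass = 300 * M_EARTH_KG        # upper limit on the dust mass
total_mass = dust_mass * (1 + 100)  # add gas at 100 times the dust mass

print(total_mass / M_SUN_KG)        # ~0.09, i.e. roughly 0.1 solar masses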


Episteme ◽  
2013 ◽  
Vol 10 (3) ◽  
pp. 283-297 ◽  
Author(s):  
Dan Cavedon-Taylor

Abstract: Pictures are a quintessential source of aesthetic pleasure. This makes it easy to forget that they are epistemically valuable no less than they are aesthetically so. Pictures are representations. As such, they may furnish us with knowledge of the objects they represent. In this article I provide an account of why photographs are of greater epistemic utility than handmade pictures. To do so, I use a novel approach: I seek to illuminate the epistemic utility of photographs by situating both photographs and handmade pictures among the sources of knowledge. This method yields an account of photography's epistemic utility that better connects the issue with related issues in epistemology and is relatively superior to other accounts. Moreover, it answers a foundational issue in the epistemology of pictorial representation: ‘What kinds of knowledge do pictures furnish?’ I argue that photographs have greater epistemic utility than handmade pictures because photographs are sources of perceptual knowledge, while handmade pictures are sources of testimonial knowledge.


Episteme ◽  
2019 ◽  
pp. 1-16
Author(s):  
Laura Frances Callahan

Abstract: Subjectivist permissivism is a prima facie attractive view. That is, it's plausible to think that what's rational for people to believe on the basis of their evidence can vary if they have different frameworks or sets of epistemic standards. In this paper, I introduce an epistemic existentialist form of subjectivist permissivism, which I argue can better address “the arbitrariness objection” to subjectivist permissivism in general. According to the epistemic existentialist, it's not just that what's rational to believe on the basis of evidence can vary according to agents’ frameworks, understood as passive aspects of individuals’ psychologies. Rather, what's rational to believe on the basis of evidence is sensitive to agents’ choices and active commitments (as are frameworks themselves). Here I draw on Chang's work on commitment and voluntarist reasons. The epistemic existentialist maintains that what's rational for us to believe on the basis of evidence is, at least in part, up to us. It can vary not only across individuals but for a single individual, over time, as she makes differing epistemic commitments.


Episteme ◽  
2016 ◽  
Vol 14 (4) ◽  
pp. 519-538 ◽  
Author(s):  
Robert Mark Simpson

Abstract: Permissivism says that for some propositions and bodies of evidence, there is more than one rationally permissible doxastic attitude that can be taken towards that proposition given the evidence. Some critics of this view argue that it condones, as rationally acceptable, sets of attitudes that manifest an untenable kind of arbitrariness. I begin by providing a new and more detailed explication of what this alleged arbitrariness consists in. I then explain why Miriam Schoenfield's prima facie promising attempt to answer the Arbitrariness Objection, by appealing to the role of epistemic standards in rational belief formation, fails to resolve the problem. Schoenfield's strategy is, however, a useful one, and I go on to explain how an alternative form of the standards-based approach to Permissivism – one that emphasizes the significance of the relationship between people's cognitive abilities and the epistemic standards that they employ – can respond to the arbitrariness objection.


Author(s):  
Jonathan Stoltz

This book provides readers with an introduction to epistemology within the Buddhist intellectual tradition. It is designed to be accessible to those whose primary background is in the “Western” tradition of philosophy and who have little or no previous exposure to Buddhist philosophical writings. The book examines many of the most important topics in the field of epistemology, topics that are central both to contemporary discussions of epistemology and to the classical Buddhist tradition of epistemology in India and Tibet. Among the topics discussed are Buddhist accounts of the nature of knowledge episodes, the defining conditions of perceptual knowledge and of inferential knowledge, the status of testimonial knowledge, and skeptical criticisms of the entire project of epistemology. The book seeks to put the field of Buddhist epistemology in conversation with contemporary debates in philosophy. It shows that many of the arguments and debates occurring within classical Buddhist epistemological treatises coincide with the arguments and disagreements found in contemporary epistemology. The book shows, for example, how Buddhist epistemologists developed an anti-luck epistemology—one that is linked to a sensitivity requirement for knowledge. Likewise, the book explores the question of how the study of Buddhist epistemology can be of relevance to contemporary debates about the value of contributions from experimental epistemology, and to broader debates concerning the use of philosophical intuitions about knowledge.


1984 ◽  
Vol 9 (1) ◽  
pp. 139-186 ◽  
Author(s):  
Paul Meier ◽  
Jerome Sacks ◽  
Sandy L. Zabell

Tests of statistical significance have increasingly been used in employment discrimination cases since the Supreme Court's decision in Hazelwood. In that case, the United States Supreme Court ruled that “in a proper case” statistical evidence can suffice for a prima facie showing of employment discrimination. The Court also discussed the use of a binomial significance test to assess whether the difference between the proportion of black teachers employed by the Hazelwood School District and the proportion of black teachers in the relevant labor market was substantial enough to indicate discrimination. The Equal Employment Opportunity Commission has proposed a somewhat stricter standard for evaluating how substantial a difference must be to constitute evidence of discrimination. Under the so-called 80% rule promulgated by the EEOC, the difference must not only be statistically significant, but the hire rate for the allegedly discriminated group must also be less than 80% of the rate for the favored group. This article argues that a binomial statistical significance test standing alone is unsatisfactory for evaluating allegations of discrimination because many of the assumptions on which such tests are based are inapplicable to employment settings; the 80% rule is a more appropriate standard for evaluating whether a difference in hire rates should be treated as a prima facie showing of discrimination.
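
For concreteness, the sketch below applies the two screening criteria discussed here, a binomial significance test of the kind considered in Hazelwood and the EEOC's 80% rule, to purely hypothetical applicant and hire counts; none of the numbers are taken from the case or the article, and the thresholds used (p < 0.05, ratio < 0.80) are the conventional ones.

# Hypothetical Python illustration of the two criteria discussed above.
from math import comb

def binom_lower_tail(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p): how surprising is seeing k or fewer
    # hires from a group if its members were hired at rate p?
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Made-up applicant and hire counts.
favored_applicants, favored_hires = 400, 80        # hire rate 0.20
protected_applicants, protected_hires = 100, 12    # hire rate 0.12

favored_rate = favored_hires / favored_applicants
protected_rate = protected_hires / protected_applicants

# (1) Binomial significance test: protected-group hires against the favored rate.
p_value = binom_lower_tail(protected_hires, protected_applicants, favored_rate)

# (2) EEOC 80% rule: protected-group hire rate relative to the favored group's.
rate_ratio = protected_rate / favored_rate

print(f"p-value = {p_value:.3f}, rate ratio = {rate_ratio:.2f}")
print("prima facie showing under both criteria:", p_value < 0.05 and rate_ratio < 0.80)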


2020 ◽  
Vol 24 (3) ◽  
pp. 332-356
Author(s):  
Billy Wheeler

We are becoming increasingly dependent on robots and other forms of artificial intelligence for our beliefs. But how should the knowledge gained from the “say-so” of a robot be classified? Should it be understood as testimonial knowledge, similar to knowledge gained in conversation with another person? Or should it be understood as a form of instrument-based knowledge, such as that gained from a calculator or a sundial? There is more at stake here than terminology, for how we treat objects as sources of knowledge often has important social and legal consequences. In this paper, I argue that at least some robots are capable of testimony. I make my argument by exploring the differences between instruments and testifiers on a well-known account of knowledge: reliabilism. On this approach, I claim that the difference between instruments and testifiers as sources of knowledge is that only the latter are capable of deception. As some robots can be designed to deceive, so they too should be recognized as testimonial sources of knowledge.


Utilitas ◽  
1998 ◽  
Vol 10 (3) ◽  
pp. 261-280 ◽  
Author(s):  
David Wiggins

David Ross made the first sustained attack on Moore's agathistic utilitarianism or ethical neutralism – the first attack, that is, on a consequentialism purified of ethical naturalism. Ross started out with an important idea about the difference (in the sphere of action) between the right and the good, and a good appreciation of the dialectical situation about consequentialism. His attack, based on the personal character of duty, is greatly hampered by his imperfect account of the duty of beneficence and the supposed general prima facie duty to promote the good. In due course, duties of other kinds come to appear as exceptions to this duty – a damaging concession to consequentialism.


2019 ◽  
Vol 177 (12) ◽  
pp. 3595-3614
Author(s):  
Jessica Brown

Abstract: When subjects violate epistemic standards or norms, we sometimes judge them blameworthy rather than blameless. For instance, we might judge a subject blameworthy for dogmatically continuing to believe a claim even after receiving evidence which undermines it. Indeed, the idea that one may be blameworthy for belief is appealed to throughout the contemporary epistemic literature. In some cases, a subject seems blameworthy for believing as she does even though it seems prima facie implausible that she is morally blameworthy or professionally blameworthy. Such cases raise the question of whether one can be blameworthy for a belief in a specifically epistemic sense rather than in some already recognised sense, such as being morally or professionally blameworthy. A number of authors have recently argued that there is a moral or social sense in which one ought to conform one’s beliefs to the evidence (e.g. Goldberg, Graham, Vanderheiden). In this paper, I argue that even while accepting that there are moral and social norms governing belief, there are cases in which a subject is blameworthy for a belief but isn’t plausibly morally or socially blameworthy. If this latter view is correct, then we may need to develop a new account of blame which can be applied to beliefs which are not morally or socially blameworthy.


2005 ◽  
Vol 69 (1) ◽  
pp. 229-246 ◽  
Author(s):  
Peter Baumann

Most contextualists agree that contexts differ with respect to relevant epistemic standards. In this paper, I discuss the idea that the difference between more modest and stricter standards should be explained in terms of the closeness or remoteness of relevant possible worlds. I argue that there are serious problems with this version of contextualism. In the second part of the paper, I argue for another form of contextualism that has little to do with standards and a lot with the well-known problem of the reference class. This paper also illustrates the fact that contextualism comes in many varieties.

