crucial assumption
Recently Published Documents


TOTAL DOCUMENTS: 38 (five years: 12)
H-INDEX: 7 (five years: 2)

2021, pp. 1-30
Author(s): Robert Long

Abstract As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular “fairness measures” are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both measures. For this reason, a large literature in machine learning speaks of a “fairness tradeoff” between these two measures. This framing assumes that both measures are, in fact, capturing something important. To date, philosophers have seldom examined this crucial assumption or asked to what extent each measure actually tracks a normatively important property. This makes the inevitable statistical conflict between calibration and false positive rate equality an important topic for ethics. In this paper, I give an ethical framework for thinking about these measures and argue that, contrary to initial appearances, false positive rate equality is in fact morally irrelevant and does not measure fairness.
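
A minimal sketch of how the two measures might be computed on synthetic data (the groups, scores, and 0.5 decision threshold below are all made up, not from the paper): when two groups share the same score distribution given the outcome but differ in base rates, the thresholded predictions have equal false positive rates, yet the observed positive rate within a score bin tracks each group's base rate, so the same scores cannot be calibrated for both groups at once.

```python
# Illustrative sketch (not from the paper): the two fairness measures on toy data.
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that receive a positive prediction."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

def calibration_by_bin(y_true, scores, bins=5):
    """Observed positive rate within each risk-score bin."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, bins - 1)
    return np.array([y_true[idx == b].mean() if (idx == b).any() else np.nan
                     for b in range(bins)])

# Two groups with different base rates (the condition under which the
# impossibility result bites): same score model, thresholded at 0.5.
for base_rate in (0.3, 0.6):
    y = rng.binomial(1, base_rate, 10_000)
    scores = 0.4 * y + rng.uniform(0, 0.6, y.size)   # noisy risk scores in [0, 1)
    y_hat = (scores >= 0.5).astype(int)
    print(f"base rate {base_rate}: FPR = {false_positive_rate(y, y_hat):.3f}, "
          f"calibration curve = {np.round(calibration_by_bin(y, scores), 2)}")
```

On these toy numbers the false positive rates come out roughly equal across the two groups, while the observed positive rate in the middle score bin differs with the base rate, which is the tension the abstract describes.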


2021, Vol 19 (2)
Author(s): Adam F. Gibbons

Despite their many virtues, democracies suffer from well-known problems with high levels of voter ignorance. Such ignorance, one might think, leads democracies to occasionally produce bad outcomes. Proponents of epistocracy claim that allocating comparatively greater amounts of political power to citizens who possess more politically relevant knowledge may help us to mitigate the bad effects of voter ignorance. In a recent paper, Julian Reiss challenges a crucial assumption underlying the case for epistocracy. Central to any defence of epistocracy is the conviction that we can identify a body of political knowledge which, when possessed in greater amounts by voters, leads to substantively better outcomes than when voters lack such knowledge. According to Reiss, however, it is not possible to identify such a body of knowledge: there is simply far too much controversy in the social sciences, and this controversy prevents us from definitively saying of some citizens that they possess more politically relevant knowledge than others. Call this the Argument from Political Disagreement. In this paper I respond to the Argument from Political Disagreement. First, I argue that Reiss conflates social-scientific knowledge with politically relevant knowledge. Even if there were no uncontroversial social-scientific knowledge, there is much uncontroversial politically relevant knowledge. Second, I argue that there is some uncontroversial social-scientific knowledge. While Reiss correctly notes that there is much controversy in the social sciences, not every issue is controversial. The non-social-scientific politically relevant knowledge and the uncontroversial social-scientific knowledge together constitute the minimal body of knowledge which epistocrats need to make their case.


Econometrica, 2021, Vol 89 (2), pp. 563-589
Author(s): Laurent Bartholdi, Wade Hann-Caruthers, Maya Josyula, Omer Tamuz, Leeat Yariv

May's theorem (1952), a celebrated result in social choice, provides the foundation for majority rule. May's crucial assumption of symmetry, often thought of as a procedural equity requirement, is violated by many choice procedures that grant voters identical roles. We show that a weakening of May's symmetry assumption allows for a far richer set of rules that still treat voters equally. We further show that such rules can have minimal winning coalitions comprising a vanishing fraction of the population, but not less than the square root of the population size. Methodologically, we introduce techniques from group theory and illustrate their usefulness for the analysis of social choice questions.
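
A quick numeric illustration of the bound stated in the abstract (my own arithmetic, using the integer square root as a stand-in for the coalition-size floor): a coalition of roughly sqrt(n) voters is a vanishing fraction of the electorate as n grows.

```python
# Illustration only: sqrt(n) grows without bound yet shrinks as a share of n.
import math

for n in (10_000, 1_000_000, 100_000_000):
    lower_bound = math.isqrt(n)  # coalition size of order sqrt(n)
    print(f"n = {n:>11,}: sqrt(n) = {lower_bound:>6,} "
          f"({lower_bound / n:.4%} of the population)")
```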


2020, pp. 2150001
Author(s): Jorge Cruz López, Alfredo Ibáñez

In a default corridor [Formula: see text] that the stock price can never enter, a deep out-of-the-money American put option replicates a pure credit contract (Carr and Wu, 2011, A Simple Robust Link between American Puts and Credit Protection, Review of Financial Studies 24, 473–505). Assuming discrete (one-period-ahead predictable) cash flows, we show that an endogenous credit-risk model generates, along with the default event, a default corridor at the cash-outflow dates, where [Formula: see text] is given by these outflows (i.e., debt service and negative earnings minus dividends). In this endogenous setting, however, the put replicating the credit contract is not American, but European. Specifically, the crucial assumption that determines an endogenous default corridor at the cash-outflow dates is that equityholders’ deep pockets absorb these outflows; that is, no equityholders’ fresh money, no endogenous corridor.
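
The replication logic the abstract builds on (Carr and Wu's corridor construction) can be illustrated with a toy terminal payoff. The corridor bounds and prices below are invented, and the sketch deliberately ignores the American-versus-European timing question that is the paper's actual concern: if the stock can never trade inside a corridor (A, B), a put spread struck at A and B, scaled by 1/(B - A), pays exactly one unit whenever the stock ends below the corridor (default) and zero otherwise, i.e. it behaves like a pure credit contract.

```python
# Toy payoff check (hypothetical corridor bounds A, B and terminal prices).
A, B = 5.0, 15.0

def put_payoff(strike: float, s: float) -> float:
    return max(strike - s, 0.0)

def unit_credit_claim(s: float) -> float:
    """Payoff of (P_B - P_A) / (B - A) at terminal stock price s."""
    return (put_payoff(B, s) - put_payoff(A, s)) / (B - A)

# Terminal prices outside the corridor: above B (no default) and below A (default).
for s in (40.0, 20.0, 15.0, 4.0, 1.0, 0.0):
    print(f"S_T = {s:>5.1f}  ->  credit-claim payoff = {unit_credit_claim(s):.2f}")
```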


Zootaxa, 2020, Vol 4816 (3), pp. 397-400
Author(s): Jesse M. Meik, A. Michelle Lawing, Jessica A. Watson

Geometric morphometrics (GM) is a powerful analytical approach for evaluating phenotypic variation relevant to taxonomy and systematics, and as with any statistical methodology, requires adherence to fundamental assumptions for inferences to be strictly valid. An important consideration for GM is how landmark configurations, which represent sets of anatomical loci for evaluating shape variation through Cartesian coordinates, relate to underlying homology (Zelditch et al. 1995; Polly 2008). Perhaps more so than with traditional morphometrics, anatomical homology is a crucial assumption for GM because of the mathematical and biological interpretations associated with shape change depicted by deformation grids, such as the thin plate spline (Klingenberg 2008; Zelditch et al. 2012). GM approaches are often used to analyze shapes or outlines of structures, which are not necessarily related to common ancestry, and in this respect GM approaches that use linear semi-landmarks and related methods are particularly amenable to evaluating primary homology, or raw similarity between structures (De Pinna 1991; Palci & Lee 2019). This relaxed interpretation of homology that focuses more on recognizable and repeatable landmarks is defensible so long as authors are clear regarding the purpose of the analyses and in defining their landmark configurations (Palci & Lee 2019). Secondary homology, or similarity due to common ancestry, can also be represented with GM methods and is often assumed to be reflected in fixed Type 1 (juxtaposition of tissues) or Type 2 (self-evident geometry) landmarks (Bookstein 1991). 
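
As a concrete illustration of landmark configurations as sets of Cartesian coordinates, the sketch below (with invented coordinates, not taken from any specimen or from the authors' work) superimposes two 2-D configurations with SciPy's Procrustes routine, so that the remaining differences reflect shape rather than position, scale, or orientation.

```python
# Minimal Procrustes superimposition of two hypothetical landmark configurations.
import numpy as np
from scipy.spatial import procrustes

# Five anatomical loci digitised on two specimens (made-up coordinates).
specimen_a = np.array([[0.0, 0.0], [2.0, 0.1], [2.1, 1.0], [1.0, 1.6], [0.1, 1.1]])
specimen_b = np.array([[0.2, 0.0], [2.3, 0.0], [2.4, 1.2], [1.1, 1.9], [0.0, 1.3]])

aligned_a, aligned_b, disparity = procrustes(specimen_a, specimen_b)
print(f"Procrustes disparity (sum of squared landmark differences): {disparity:.4f}")
```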


2020, pp. 1-45
Author(s): Ajay Shenoy

Behind many production function estimators lies a crucial assumption that the firm's choice of intermediate inputs depends only on observed choices of other inputs and on unobserved productivity. This assumption fails when market frictions distort the firm's input choices. I derive a test for the assumption, which is rejected in several industries. I show, using weak identification asymptotics, that when the assumption fails a simplified dynamic panel estimator can be used instead of choice-based methods, since unlike those methods it does not require undistorted choices. I propose criteria for choosing between estimators, which in simulations yield lower error than either estimator alone.
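
A stylised simulation of the assumption described above (my own toy data-generating process, not the paper's model): when intermediate inputs depend only on observed inputs and productivity, productivity can be recovered from them exactly, but adding a firm-specific friction to the input choice breaks that inversion.

```python
# Toy check of the proxy-style assumption behind many production function estimators.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
k = rng.normal(size=n)         # log capital (observed)
omega = rng.normal(size=n)     # log productivity (unobserved)
friction = rng.normal(size=n)  # firm-specific distortion (unobserved)

m_clean = 0.4 * k + 0.8 * omega                       # materials under the assumption
m_distorted = 0.4 * k + 0.8 * omega - 0.6 * friction  # materials with frictions

def r2_of_inversion(m):
    """R^2 from recovering omega as a linear function of (k, m)."""
    X = np.column_stack([np.ones(n), k, m])
    beta, *_ = np.linalg.lstsq(X, omega, rcond=None)
    resid = omega - X @ beta
    return 1 - resid.var() / omega.var()

print(f"R^2 of omega given (k, m), no frictions:   {r2_of_inversion(m_clean):.3f}")
print(f"R^2 of omega given (k, m), with frictions: {r2_of_inversion(m_distorted):.3f}")
```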


2020, Vol 6 (1)
Author(s): Peter Lush

Reports of experiences of ownership over a fake hand following simple multisensory stimulation (the ‘rubber hand illusion’) have generated an expansive literature. Because such reports might reflect suggestion effects, demand characteristics are routinely controlled for by contrasting agreement ratings for ‘illusion’ and ‘control’ conditions. However, these methods have never been validated, and recent evidence that response to imaginative suggestion (‘phenomenological control’) predicts illusion report prompts reconsideration of their efficacy. A crucial assumption of the standard approach is that demand characteristics are matched across conditions. Here, a quasi-experimental design was employed to test demand characteristics in rubber hand illusion reports. Participants were provided with information about the rubber hand illusion procedure (text description and video demonstration) and recorded expectancies for standard ‘illusion’ and ‘control’ statements. Expectancies for ‘control’ and ‘illusion’ statements in synchronous and asynchronous conditions were found to differ similarly to published illusion reports. Therefore, rubber hand illusion control methods which have been in use for 22 years are not fit for purpose. Because demand characteristics have not been controlled in illusion reports in existing studies, the illusion may be, partially or entirely, a suggestion effect. Methods to develop robust controls are proposed. That confounding demand characteristics have been overlooked for decades may be attributable to a lack of awareness that demand characteristics can drive experience in psychological science.
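
A minimal sketch of the contrast logic at issue (all ratings below are fabricated for illustration, not the study's data): the standard control compares agreement across statement types and stroking conditions, and the quasi-experiment asks whether mere expectancies already reproduce that synchronous-minus-asynchronous pattern.

```python
# Hypothetical expectancy ratings on a -3..+3 agreement scale, by statement type
# and stroking condition; the contrast of interest is synchronous - asynchronous.
import numpy as np

rng = np.random.default_rng(2)
n = 200  # hypothetical participants

ratings = {
    ("illusion", "synchronous"):  rng.normal(1.2, 1.0, n),
    ("illusion", "asynchronous"): rng.normal(-0.5, 1.0, n),
    ("control", "synchronous"):   rng.normal(-1.0, 1.0, n),
    ("control", "asynchronous"):  rng.normal(-1.3, 1.0, n),
}

for statement in ("illusion", "control"):
    sync = ratings[(statement, "synchronous")].mean()
    asyn = ratings[(statement, "asynchronous")].mean()
    print(f"{statement:>8} statements: synchronous - asynchronous expectancy = {sync - asyn:+.2f}")
```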


2019, Vol 5 (2), pp. 91-105
Author(s): Olga Steriopolo

Abstract This work presents a study of mixed gender agreement in the case of hybrid nouns in Russian. Examination of a number of approaches which seek to account for the category “gender” shows that these approaches are problematic when trying to explain mixed gender agreement in hybrid nouns. It is proposed here that the multiple-layer DP-hypothesis by Zamparelli (1995 and subsequent work) is best suited to analyze the Russian data. However, this rests on the crucial assumption that Russian demonstratives can occupy multiple positions within the DP, something that must still be verified by future work.


2019, Vol 186 (2-3), pp. 301-305
Author(s): Martin Listjak, Alojz Slaninka, Vladimír Nečas

Abstract Uncertainty analysis for nondestructive estimation of contamination depth is presented. The contamination depth was determined using the peak-to-peak method as an in-situ measurement in which gamma spectra were measured by an HPGe detector. Since an exponential activity distribution is a crucial assumption of this method, the distribution profile was confirmed by laboratory tests of core drill samples. The main parameter influencing the uncertainty of the contamination depth is the uncertainty of the relaxation length, which comprises a statistical error, represented by the ratio of net peak areas, and a systematic error, given by the detection efficiency of the measurement setup. The systematic relative error was evaluated to be 7.45%, and the statistical relative error to be 9.97% for the proposed optimum net peak area. The variability of the relaxation length was found to be very low, with a mean value of 2 mm and a standard deviation of 0.73 mm. For a fixed relaxation length, it should be possible to estimate contamination depth with nonspectrometric devices.
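
For readers wanting a single combined figure, the two reported components imply roughly a 12% relative uncertainty if they are combined in quadrature (an assumption on my part; the abstract does not say how, or whether, they were combined):

```python
# Combining the reported error components in quadrature (standard practice,
# assumed here; not stated in the abstract).
import math

systematic_rel = 0.0745   # 7.45 % systematic relative error
statistical_rel = 0.0997  # 9.97 % statistical relative error

combined_rel = math.hypot(systematic_rel, statistical_rel)
print(f"combined relative uncertainty ≈ {combined_rel:.2%}")  # ≈ 12.4 %
```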

