wrong reason
Recently Published Documents


TOTAL DOCUMENTS

55
(FIVE YEARS 17)

H-INDEX

7
(FIVE YEARS 2)

2021 ◽  
Vol 9 (2) ◽  
pp. 74-87
Author(s):  
Ian Verstegen

Although J J Gibson’s theory of picture perception was often crude and biased toward naturalism, its fundamental division between the visual world and the visual field made it a semiotic theory. Contrariwise, although Arnheim wrote sensitively on pictures, he never seemed to admit that they were signs. This paper reviews both Gibson’s and Arnheim’s theories of picture perception, and explains where Arnheim’s biases caused him to lose the possibility of framing his approach in the most basic semiotic terms. Nevertheless, using the phenomenological semiotics of Sonesson and his theory of the Lifeworld Hierarchy, I demonstrate latent semiotic elements in Arnheim’s theory, due perhaps to Alfred Schutz’s influence. Hoping to argue against the brute theory of denotation, Arnheim instead sought to delay invocation of (conventional) signs as long as possible, and his idea of iconic pictorialization assumes but does not name signification. Nevertheless, I propose that Arnheim has a kind of theory of the Lifeworld Hierarchy inside the picture. Thus, he (wrongly) does not see the picture as overtly signifying but interestingly gives hints about how to treat the objects of the virtual world of the picture based on their relationship to the overall style of the work.


Synthese ◽  
2021 ◽  
Author(s):  
Max Lewis

AbstractThe simple knowledge norm of assertion (SKNA) holds that one may (epistemically permissibly) assert that p only if one knows that p. Turri (Aust J Philos 89(1):37–45, 2011) and Williamson (Knowledge and its limits, Oxford University Press, Oxford, 2000) both argue that more is required for epistemically permissible assertion. In particular, they both think that the asserter must assert on the basis of her knowledge. Turri calls this the express knowledge norm of assertion (EKNA). I defend SKNA and argue against EKNA. First, I argue that EKNA faces counterexamples. Second, I argue that EKNA assumes an implausible view of permissibility on which an assertion is epistemically permissible only if it is made for a right reason, i.e., a reason that contributes to making it the case that it is epistemically permissible to make that assertion. However, the analogous view in other normative domains is both controversial and implausible. This is because it doesn’t make it possible for one to act or react rightly for the wrong reason. I suggest that proponents of EKNA have conflated requirements for φ-ing rightly (or permissibly) with requirements for φ-ing well. Finally, I argue that proponents of SKNA can explain the intuitive defectiveness of asserting on the basis of an epistemically bad reason (e.g., a random guess), even when the asserters know the content of their assertion, by arguing that the asserters are epistemically blameworthy.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Dávid Péter Kovács ◽  
William McCorkindale ◽  
Alpha A. Lee

AbstractOrganic synthesis remains a major challenge in drug discovery. Although a plethora of machine learning models have been proposed as solutions in the literature, they suffer from being opaque black-boxes. It is neither clear if the models are making correct predictions because they inferred the salient chemistry, nor is it clear which training data they are relying on to reach a prediction. This opaqueness hinders both model developers and users. In this paper, we quantitatively interpret the Molecular Transformer, the state-of-the-art model for reaction prediction. We develop a framework to attribute predicted reaction outcomes both to specific parts of reactants, and to reactions in the training set. Furthermore, we demonstrate how to retrieve evidence for predicted reaction outcomes, and understand counterintuitive predictions by scrutinising the data. Additionally, we identify Clever Hans predictions where the correct prediction is reached for the wrong reason due to dataset bias. We present a new debiased dataset that provides a more realistic assessment of model performance, which we propose as the new standard benchmark for comparing reaction prediction models.
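The abstract does not spell out the attribution framework, but the general idea of attributing a predicted outcome to specific parts of the input can be illustrated with a deliberately toy occlusion-style sketch. Everything here (the token list, the weights, the `score` function) is invented for illustration and is not the paper's actual method:

```python
# Toy sketch of input attribution by occlusion: mask each input token
# and measure how much the model's confidence drops. The "model" below
# is a hand-written stand-in, not the Molecular Transformer.

def score(tokens):
    """Toy stand-in for a reaction model's confidence in one outcome."""
    # Pretend the model keys on the Br leaving group and the OH nucleophile.
    weights = {"Br": 0.5, "OH": 0.3, "C": 0.05}
    return sum(weights.get(t, 0.0) for t in tokens)

def occlusion_attributions(tokens):
    """Attribute the score to each token by removing it and rescoring."""
    base = score(tokens)
    return {i: base - score(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))}

reactant = ["C", "C", "Br", "OH"]
attr = occlusion_attributions(reactant)
best = max(attr, key=attr.get)
print(best)  # index 2: the halide token gets the largest attribution
```

A Clever Hans failure, in these terms, is when the largest attributions fall on tokens that are chemically irrelevant but correlated with the answer in the training set.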


2020 ◽  
Vol 20 (24) ◽  
pp. 16023-16040
Author(s):  
Kine Onsum Moseid ◽  
Michael Schulz ◽  
Trude Storelvmo ◽  
Ingeborg Rian Julsrud ◽  
Dirk Olivié ◽  
...  

Abstract. Anthropogenic aerosol emissions have increased considerably over the last century, but climate effects and quantification of the emissions are highly uncertain as one goes back in time. This uncertainty is partly due to a lack of observations in the pre-satellite era, making the observations we do have before 1990 additionally valuable. Aerosols suspended in the atmosphere scatter and absorb incoming solar radiation and thereby alter the Earth's surface energy balance. Previous studies show that Earth system models (ESMs) do not adequately represent surface energy fluxes over the historical era. We investigated global and regional aerosol effects over the time period 1961–2014 by looking at surface downwelling shortwave radiation (SDSR). We used observations from ground stations as well as multiple experiments from eight ESMs participating in the Coupled Model Intercomparison Project Version 6 (CMIP6). Our results show that this subset of models reproduces the observed transient SDSR well in Europe but poorly in China. We suggest that this may be attributed to missing emissions of sulfur dioxide in China, sulfur dioxide being a precursor to sulfate, which is a highly reflective aerosol and responsible for more reflective clouds. The emissions of sulfur dioxide used in the models do not show a temporal pattern that could explain observed SDSR evolution over China. The results from various aerosol emission perturbation experiments from DAMIP, RFMIP and AerChemMIP show that only simulations containing anthropogenic aerosol emissions show dimming, even if the dimming is underestimated. Simulated clear-sky and all-sky SDSR do not differ greatly, suggesting that cloud cover changes are not a dominant cause of the biased SDSR evolution in the simulations. Therefore we suggest that the discrepancy between modeled and observed SDSR evolution is partly caused by erroneous aerosol and aerosol precursor emission inventories. 
This is an important finding as it may help interpret whether ESMs reproduce the historical climate evolution for the right or wrong reason.
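The model-observation comparison described above (observed dimming present but underestimated in simulations) can be sketched with synthetic numbers. The data below are invented for illustration, not CMIP6 or station values:

```python
# Minimal sketch: fit a linear trend to observed and simulated SDSR
# series and compare the dimming rates. Synthetic data only.

def trend(years, values):
    """Ordinary least-squares slope, in W m^-2 per year."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1961, 1991))
observed = [200.0 - 0.5 * (y - 1961) for y in years]   # strong dimming
simulated = [200.0 - 0.2 * (y - 1961) for y in years]  # weaker dimming

obs_slope = trend(years, observed)
sim_slope = trend(years, simulated)
print(round(obs_slope, 2), round(sim_slope, 2))  # -0.5 -0.2
```

Both slopes are negative (the simulation does show dimming), but the simulated magnitude is smaller, which is the pattern the abstract attributes partly to erroneous emission inventories.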


2020 ◽  
Vol 6 (4) ◽  
Author(s):  
Cynthia A. Stark

Luck egalitarianism has been criticized for (1) condoning some cases of oppression and (2) condemning others for the wrong reason—namely, that the victims were not responsible for their oppression. The criticism holds that oppression is unjust regardless of whether its victims are responsible for it, simply because it is contrary to the equal moral standing of persons. I argue that four luck egalitarian responses to this critique are inadequate. Two address only the first part of the objection and do so in a way that risks making luck egalitarianism inconsistent. A third severely dilutes the luck egalitarian doctrine. A fourth manages to denounce some instances of oppression for the right reason, but at the same time permits other instances of oppression and condemns yet others for the wrong reason.


2020 ◽  
Author(s):  
David Peter Kovacs ◽  
William McCorkindale ◽  
Alpha Lee

Organic synthesis remains a stumbling block in drug discovery. Although a plethora of machine learning models have been proposed as solutions in the literature, they suffer from being opaque black-boxes. It is neither clear if the models are making correct predictions because they inferred the salient chemistry, nor is it clear which training data they are relying on to reach a prediction. This opaqueness hinders both model developers and users. In this paper, we quantitatively interpret the Molecular Transformer, the state-of-the-art model for reaction prediction. We develop a framework to attribute predicted reaction outcomes both to specific parts of reactants, and to reactions in the training set. Furthermore, we demonstrate how to retrieve evidence for predicted reaction outcomes, and understand counterintuitive predictions by scrutinising the data. Additionally, we identify "Clever Hans" predictions where the correct prediction is reached for the wrong reason due to dataset bias. We present a new debiased dataset that provides a more realistic assessment of model performance, which we propose as the new standard benchmark for comparing reaction prediction models.


2020 ◽  
Vol 101 (7) ◽  
pp. E993-E1006
Author(s):  
Anders Persson

Abstract There are at least three popular perceptions surrounding the weather forecast for the D-day landing in Normandy, 6 June 1944: 1) that the Allied weather forecasters predicted a crucial break or “window of opportunity” in the unsettled weather prevailing at the time; 2) that the German meteorologists, lacking observations from the North Atlantic, failed to see this break coming and thus the invasion took the Wehrmacht by surprise; and 3) that the American forecasters, guided by a skillful analog system, predicted the favorable conditions several days ahead but got no support from their pessimistic British colleagues. This article will present evidence taken mostly from hitherto rather neglected sources of information, transcripts of the telephone discussions between the Allied forecasters and archived German weather analyses. They show that 1) the synoptic development for the invasion was not particularly well predicted and, if there was a break in the weather, it occurred for reasons other than those predicted; 2) the German forecasters were fairly well informed about the large-scale synoptic situation over most of the North Atlantic, probably thanks to decoded American analyses; and 3) from the viewpoint of a “neutral Swede,” the impression is that the American analog method might not have performed as splendidly as its adherents have claimed, but also not as badly as its critics have alleged. Finally, the D-day forecast, the discussions among the forecasters, and their briefings with the Allied command are interesting not only from a historical perspective, but also as an early and well-documented example of decision-making under meteorological uncertainty.


2020 ◽  
Author(s):  
Kine Onsum Moseid ◽  
Michael Schulz ◽  
Trude Storelvmo ◽  
Ingeborg Rian Julsrud ◽  
Dirk Olivié ◽  
...  

Anthropogenic aerosol emissions have increased considerably over the last century, but climate effects and quantification of the emissions are highly uncertain as one goes back in time. This uncertainty is partly due to a lack of observations in the pre-satellite era, and previous studies show that Earth system models (ESMs) do not adequately represent surface energy fluxes over the historical era. We investigated global and regional aerosol effects over the time period 1961–2014 by looking at surface downwelling shortwave radiation (SDSR). We used observations from ground stations as well as multiple experiments from five ESMs participating in the Coupled Model Intercomparison Project Version 6 (CMIP6). Our results show that this subset of models reproduces the observed transient SDSR well in Europe, but poorly in China. The models do not reproduce the observed trend reversal in SDSR in China in the late 1980s, which is attributed to a change in the emission of SO₂ in this region. The emissions of SO₂ show no sign of a trend reversal that could explain the observed SDSR evolution over China, and neither do other aerosols relevant to SDSR. The results from various aerosol emission perturbation experiments from DAMIP, RFMIP and AerChemMIP suggest that aerosol effects are likely responsible for the dimming signal, although not for its full amplitude. Simulated cloud cover changes in the different models are not correlated with observed changes over China. Therefore we suggest that the discrepancy between modeled and observed SDSR evolution is partly caused by erroneous aerosol and aerosol precursor emission inventories. This is an important finding as it may help interpret whether ESMs reproduce the historical climate evolution for the right or wrong reason.

