invalid inference
Recently Published Documents

Total documents: 13 (five years: 4)
H-index: 3 (five years: 1)

Analysis, 2020, Vol. 80 (3), pp. 426-433
Author(s): Johan E. Gustafsson

Daniel C. Dennett has long maintained that the Consequence Argument for incompatibilism is confused. In a joint work with Christopher Taylor, he claims to have shown that the argument is based on a failure to understand Logic 101. Given a fairly plausible account of having the power to cause something, they claim that the argument relies on an invalid inference rule. In this paper, I show that Dennett and Taylor’s refutation does not work against a better, more standard version of the Consequence Argument. Therefore, Dennett and Taylor’s alleged refutation fails.


2019, Vol. 6 (10), pp. 190937
Author(s): Melissa Bateson, Dan T. A. Eisenberg, Daniel Nettle

Longitudinal studies have sought to establish whether environmental exposures such as smoking accelerate the attrition of individuals' telomeres over time. These studies typically control for baseline telomere length (TL) by including it as a covariate in statistical models. However, baseline TL also differs between smokers and non-smokers, and telomere attrition is spuriously linked to baseline TL via measurement error and regression to the mean. Using simulated datasets, we show that controlling for baseline TL overestimates the true effect of smoking on telomere attrition. This bias increases with increasing telomere measurement error and increasing difference in baseline TL between smokers and non-smokers. Using a meta-analysis of longitudinal datasets, we show that as predicted, the estimated difference in telomere attrition between smokers and non-smokers is greater when statistical models control for baseline TL than when they do not, and the size of the discrepancy is positively correlated with measurement error. The bias we describe is not specific to smoking and also applies to other exposures. We conclude that to avoid invalid inference, models of telomere attrition should not control for baseline TL by including it as a covariate. Many claims of accelerated telomere attrition in individuals exposed to adversity need to be re-assessed.
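A minimal simulation sketch of the bias described above (not the authors' code; the group means, attrition rates, and measurement-error magnitudes are illustrative assumptions). With no true smoking effect, the unadjusted model recovers roughly zero, while the baseline-adjusted model shows a spurious positive "effect" driven by measurement error and regression to the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

smoker = rng.integers(0, 2, n)                               # group indicator
true_baseline = 7.0 - 0.3 * smoker + rng.normal(0, 0.5, n)   # smokers start shorter (assumed)
true_attrition = rng.normal(0.1, 0.05, n)                    # same true attrition in both groups
true_follow_up = true_baseline - true_attrition

meas_error = 0.25                                            # telomere measurement error SD (assumed)
obs_baseline = true_baseline + rng.normal(0, meas_error, n)
obs_follow_up = true_follow_up + rng.normal(0, meas_error, n)
obs_attrition = obs_baseline - obs_follow_up

def ols(y, predictors):
    """Ordinary least squares; returns intercept followed by slopes."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Model 1: attrition ~ smoker (unadjusted): smoking coefficient is near zero.
b_unadjusted = ols(obs_attrition, [smoker])
# Model 2: attrition ~ smoker + baseline TL (adjusted): a spurious positive effect appears.
b_adjusted = ols(obs_attrition, [smoker, obs_baseline])

print(f"unadjusted smoking effect:       {b_unadjusted[1]:+.4f}")
print(f"baseline-adjusted smoking effect: {b_adjusted[1]:+.4f}")
```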


Author(s): Graham Priest

We have strong intuitions about the validity of various inferences, and these intuitions may or may not be hard-wired into us. Intuition sometimes gets us into trouble. ‘Truth functions—or not?’ plays around with the ideas of valid and invalid inference and, using phrases such as ‘or’ and ‘it is not the case that’, shows that chaining together inferences that each seem valid doesn’t necessarily result in a valid inference. People’s intuition can be misleading. Sentences can be true or false; Gottlob Frege called these truth values. If we assume that every sentence is either true or false, but not both, we can set out the conditions under which such sentences are true or false in what logicians call a truth table.
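A minimal sketch (not from the source text) of how a truth table records the truth conditions of sentences built with ‘or’ and ‘it is not the case that’, assuming every sentence is either true or false but not both. The example formula is purely illustrative.

```python
from itertools import product

def truth_table(variables, formula):
    """Print one row per assignment of True/False to the variables."""
    header = variables + ["result"]
    print(" | ".join(f"{h:>6}" for h in header))
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        row = list(values) + [formula(env)]
        print(" | ".join(f"{str(v):>6}" for v in row))

# Example: "p, or it is not the case that q"
truth_table(["p", "q"], lambda env: env["p"] or (not env["q"]))
```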


2012, Vol. 2012, pp. 1-19
Author(s): Getachew A. Dagne, Yangxin Huang

Complex longitudinal data are commonly analyzed using nonlinear mixed-effects (NLME) models with a normal distribution. However, a departure from normality may lead to invalid inference and unreasonable parameter estimates. Some covariates may be measured with substantial error, and the response observations may also be subject to left-censoring due to a detection limit. Inferential procedures become dramatically more complicated when data with asymmetric characteristics, left censoring, and measurement errors are analyzed together, and relatively little work addresses all three features simultaneously. In this paper, we jointly investigate a skew-t NLME Tobit model for the response process (with left censoring) and a skew-t nonparametric mixed-effects model for the covariate process (with measurement errors) under a Bayesian framework. A real data example is used to illustrate the proposed methods.
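A minimal sketch (not the authors' model) of how left-censoring at a detection limit enters a likelihood: observations below the limit contribute the probability mass below the limit rather than a density value. A Gaussian case is shown for simplicity; the paper instead fits a skew-t NLME Tobit model in a Bayesian framework, and the data values below are made up.

```python
import numpy as np
from scipy.stats import norm

def censored_loglik(y, detection_limit, mu, sigma):
    """Log-likelihood for data left-censored at detection_limit (Gaussian case)."""
    y = np.asarray(y, dtype=float)
    censored = y <= detection_limit
    loglik = 0.0
    # Uncensored values contribute the ordinary log-density.
    loglik += norm.logpdf(y[~censored], loc=mu, scale=sigma).sum()
    # Censored values contribute log P(Y <= detection_limit).
    loglik += censored.sum() * norm.logcdf(detection_limit, loc=mu, scale=sigma)
    return loglik

# Example: measurements with values below the limit recorded at the limit (0.5, assumed).
y = [2.1, 1.7, 0.5, 0.5, 3.0, 0.5]
print(censored_loglik(y, detection_limit=0.5, mu=1.5, sigma=1.0))
```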


2010, Vol. 3 (2), pp. 175-227
Author(s): Peter Milne

Various natural deduction formulations of classical, minimal, intuitionist, and intermediate propositional and first-order logics are presented and investigated with respect to satisfaction of the separation and subformula properties. The technique employed is, for the most part, semantic, based on general versions of the Lindenbaum and Lindenbaum–Henkin constructions. Careful attention is paid (i) to which properties of theories result in the presence of which rules of inference, and (ii) to restrictions on the sets of formulas to which the rules may be applied, restrictions determined by the formulas occurring as premises and conclusion of the invalid inference for which a counterexample is to be constructed. We obtain an elegant formulation of classical propositional logic with the subformula property and a singularly inelegant formulation of classical first-order logic with the subformula property, the latter, unfortunately, not a product of the strategy otherwise used throughout the article. Along the way, we arrive at an optimal strengthening of the subformula results for classical first-order logic obtained as consequences of normalization theorems by Dag Prawitz and Gunnar Stålmarck.


2008, pp. 29-42
Author(s): Jüri Eintalu

There are two kinds of logical mistakes: (1) contradictions and (2) invalid inferences. Generally, committing an invalid inference is a lesser mistake than asserting a contradiction. There are two distinct principles: (A) one may not present a proposition one knows to be false; (B) one may not present a proposition one does not know to be true. Principle (A) prohibits logical mistakes of the first kind; principle (B) prohibits those of the second kind as well. We reject principle (B). Guesses can be generated, for example, (1) by deducing them from a theory one knows to be contradictory or (2) as the conclusions of inductive inferences. Principle (A) may also be violated: one may present a theory one knows to be contradictory as approximately true and use it, if no better theories are at hand. Popper presented a theory of truthlikeness and also demanded that contradictory scientific theories be eliminated. However, the attempt to invent better theories may fail. Wittgenstein's views in his Remarks on the Foundations of Mathematics are more plausible: Wittgenstein implicitly applied a non-formalized concept of truthlikeness to contradictory theories and allowed such theories to be used.


2003, Vol. 23 (3)
Author(s): Matthew Zuckero

Lawrence Powers advocates a one-fallacy theory in which the only real fallacies are fallacies of ambiguity. He defines a fallacy, in general, as a bad argument that appears good. He claims that the only legitimate way an argument can appear valid while being invalid is when the invalid inference is covered by an ambiguity. Several different kinds of counterexamples have been offered, from begging the question to various forms of ad hominem fallacies. In this paper, I outline three potential counterexamples to Powers' theory: one that Powers has already addressed and two that are well-known problems but have never before been applied as counterexamples to Powers' theory. I argue that there is a simpler explanation of these 'hard' cases than positing ambiguities that are not obviously there.


1995, Vol. 52 (6), pp. 1274-1285
Author(s): Ransom A. Myers, Noel G. Cadigan

The collapse of the northern Atlantic cod (Gadus morhua) fishery off southern Labrador and the northern Grand Bank of Newfoundland, once the largest cod fishery in the world, was a social and economic disaster for the region. An analysis of traditional catch-at-age data in conjunction with research surveys, which assumed that research survey estimation errors of abundance by age and year were independent, led assessment biologists to the conclusion that the collapse was caused by an increase in natural mortality in the first half of 1991. We constructed a statistical model to test this hypothesis. The results do not support the hypothesis. There is ambiguous evidence that natural mortality has increased since 1991; however, these results are found only in a model that has extraordinary patterns in the residuals. Our analysis suggests that even if natural mortality has been higher in recent years (as estimated using a model with correlated errors for research surveys), overfishing was sufficiently high to cause a collapse of this population. We also demonstrate that the usual assumption that estimation errors from research trawl surveys are independent is not valid, and can lead to invalid inference and unreasonable estimates of abundance.
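A minimal sketch (not the authors' assessment model; the number of surveys, error variance, and correlation value are illustrative assumptions) of why assuming independent survey estimation errors can mislead: with positively correlated errors, a standard error computed under independence is far too small, making apparent changes look better determined than they are.

```python
import numpy as np

rng = np.random.default_rng(1)
n_surveys, n_reps = 20, 2000
rho = 0.6                                   # assumed correlation between survey errors

# Equicorrelated, unit-variance error covariance; simulate many replicate survey series.
cov = np.full((n_surveys, n_surveys), rho) + (1 - rho) * np.eye(n_surveys)
errors = rng.multivariate_normal(np.zeros(n_surveys), cov, size=n_reps)
mean_error = errors.mean(axis=1)            # mean survey index error per replicate

naive_se = 1 / np.sqrt(n_surveys)           # SE of the mean if errors were independent
actual_se = mean_error.std(ddof=1)          # actual spread across replicates

print(f"naive SE assuming independence:  {naive_se:.3f}")
print(f"actual SE with correlated errors: {actual_se:.3f}")
```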

