The conjunction fallacy: explanations of the Linda problem by the theory of hints

2003 ◽ Vol 18 (1) ◽ pp. 75-91
Author(s): Hans Wolfgang Brachinger, Paul-André Monney

Synthese ◽ 2009 ◽ Vol 184 (1) ◽ pp. 13-27
Author(s): Jonah N. Schupbach

2014 ◽ Vol 28 (2) ◽ pp. 238-248
Author(s): Robert Brotherton, Christopher C. French

2019 ◽ Vol 109 (4) ◽ pp. 853-898
Author(s): Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim

Abstract
It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likeliness that a user accepts it as an explanation for a prediction. In particular, we argue that, all other things being equal, longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowdsourcing study based on about 3000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representativeness heuristic, or the recognition heuristic, and investigate their relation to rule length and plausibility.
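
The bias named in the abstract follows directly from the probability calculus: for any events A and B, P(A ∧ B) ≤ P(A), so a conjunction (for example, a rule with one extra condition) can never be more probable than either conjunct on its own, even when people judge it more plausible. Below is a minimal sketch in Python using purely hypothetical counts for a "Linda"-style population; the numbers are illustrative and not taken from any of the papers listed above.

```python
# Hypothetical counts for 1000 people fitting the "Linda" description
# (illustrative numbers only, not data from the studies cited here).
total = 1000
bank_tellers = 50            # event A: "is a bank teller"
feminist_bank_tellers = 30   # event A ∧ B: "is a bank teller AND an active feminist"

p_a = bank_tellers / total
p_a_and_b = feminist_bank_tellers / total

print(f"P(bank teller)              = {p_a:.2f}")        # 0.05
print(f"P(bank teller and feminist) = {p_a_and_b:.2f}")  # 0.03

# The conjunction can never exceed either of its conjuncts; rating it as
# more probable is the conjunction fallacy.
assert p_a_and_b <= p_a
```

The same monotonicity underlies the paper's point about rule length: adding a condition to a rule can only restrict its coverage, yet the crowdsourcing results suggest users may still find the longer rule at least as convincing.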

