A hidden fault line

Author(s): Rebecca Sutton

2020 · Vol 1 (2) · pp. 2
Author(s): Syed Muhammad Imran Majeed, Rehma Gilani

The world is waking up to a new reality in which an invisible enemy has made human contact contagious and exposed a long-hidden fault line of verisimilitude, blurring the line between reality and paranoia. The global reaction is unprecedented, with the world economy virtually at a standstill. The economic fallout of the outbreak could trigger a recession of unparalleled scale. But is economics the only challenge? The pandemic is attacking societies at their core. The author of Sapiens, Yuval Noah Harari, puts it well when he writes: "The biggest danger is not the virus itself. Humanity has all the scientific knowledge and technological tools to overcome the virus. The really big problem is our own inner demons, our own hatred, greed and ignorance."



2013 · Vol 11 (2) · pp. 85-87
Author(s): Chris Pawson


Author(s): Jiao Shangbin, Guo Jingwen, Li Yunjun, Xiao Ting




2020 · Vol 34 (03) · pp. 2594-2601
Author(s): Arjun Akula, Shuai Wang, Song-Chun Zhu

We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In cognitive psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class c_pred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on a zebra, pointed ears on a dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification of I by M to another specified class c_alt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX.



Author(s): Yikai Wang, Xin Yin, Wen Xu, Xianggen Yin, Minghao Wen, ...

