Video object graph: A novel semantic level representation for videos

Author(s): Xin Feng, Yuanyi Xue, Yao Wang

2014, Vol 36 (11), pp. 2356-2363
Author(s): Zong-Min Li, Xu-Chao Gong, Yu-Jie Liu

2020, Vol 34 (03), pp. 2594-2601
Author(s): Arjun Akula, Shuai Wang, Song-Chun Zhu

We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In cognitive psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class c_pred, our fault-line based explanation identifies the minimal set of semantic-level features (e.g., stripes on a zebra, pointed ears of a dog), referred to as explainable concepts, that need to be added to or deleted from I in order to change the class that M assigns to I to another specified class c_alt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users seeking to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX
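The abstract describes the fault-line search only at a high level. As a purely illustrative aid, the sketch below implements one plausible greedy reading of that idea: grow a small set of explainable concepts until editing the image with them flips the model's prediction from c_pred to c_alt. The concept editor (apply_concepts), the classifier wrapper (predict), and the toy zebra/horse example are hypothetical stand-ins, not the authors' released CoCoX code (see the GitHub link above for that).

```python
# Illustrative sketch of the fault-line idea: find a small set of explainable
# concepts whose addition/deletion changes the classifier's output to c_alt.
# All callables here are hypothetical stand-ins for a real concept editor and CNN.
from typing import Callable, List, Sequence, Set


def fault_line_explanation(
    image: object,
    concepts: Sequence[str],
    apply_concepts: Callable[[object, Set[str]], object],  # hypothetical concept editor
    predict: Callable[[object], str],                       # hypothetical wrapper around CNN M
    c_alt: str,
) -> Set[str]:
    """Greedily grow a concept set until M's prediction on the edited image is c_alt."""
    chosen: Set[str] = set()
    remaining: List[str] = list(concepts)
    while remaining and predict(apply_concepts(image, chosen)) != c_alt:
        # Prefer a concept that flips the prediction on its own; otherwise take any
        # remaining concept (a scored ranking could be used here instead).
        flip = next(
            (c for c in remaining
             if predict(apply_concepts(image, chosen | {c})) == c_alt),
            remaining[0],
        )
        chosen.add(flip)
        remaining.remove(flip)
    return chosen


if __name__ == "__main__":
    # Toy stand-ins: the "model" answers "zebra" whenever the concept "stripes"
    # has been added; an edit is represented simply by the set of added concepts.
    toy_apply = lambda img, cs: cs
    toy_predict = lambda edited: "zebra" if "stripes" in edited else "horse"
    print(fault_line_explanation("horse.jpg", ["pointed ears", "stripes"],
                                 toy_apply, toy_predict, c_alt="zebra"))
    # prints: {'stripes'}
```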


2020
Author(s): XiaoQing Bu, YuKuan Sun, JianMing Wang, KunLiang Liu, JiaYu Liang, ...
