Improving Convolutional Network Interpretability with Exponential Activations
Abstract
Deep convolutional networks trained on regulatory genomic sequences tend to learn distributed representations of sequence motifs across many first-layer filters. This makes it challenging to decipher which features are biologically meaningful. Here we introduce the exponential activation that – when applied to first-layer filters – leads to more interpretable representations of motifs, both visually and quantitatively, compared to rectified linear units. We demonstrate this on synthetic DNA sequences with ground-truth motifs using various convolutional networks, and then show that this phenomenon holds for in vivo DNA sequences.
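The core idea can be illustrated with a toy scan of a one-hot-encoded DNA sequence by a single first-layer filter, comparing a ReLU against an exponential activation. This is a minimal sketch, not the paper's implementation: the sequence, the planted motif ("GATTACA"), the filter weights, the plain `exp(x)` form, and the peak-to-mean "peakedness" measure are all illustrative assumptions.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """(L, 4) one-hot encoding of a DNA string."""
    return np.eye(4)[[BASES.index(b) for b in seq]]

def conv_scan(onehot, filt):
    """Valid cross-correlation of an (L, 4) sequence with a (K, 4) filter."""
    L, K = len(onehot), len(filt)
    return np.array([np.sum(onehot[i:i + K] * filt) for i in range(L - K + 1)])

# Toy sequence with one planted motif ("GATTACA" is an arbitrary stand-in,
# not a motif from the paper).
seq = "ACGTACGGATTACAGTACG"
filt = one_hot("GATTACA")  # a first-layer filter matching the motif exactly

scores = conv_scan(one_hot(seq), filt)
a_relu = np.maximum(scores, 0.0)  # rectified linear unit
a_exp = np.exp(scores)            # exponential activation (illustrative form)

# Peakedness: how strongly the activation singles out the true motif site.
# The exponential sharply amplifies the strongest (full-match) score
# relative to partial matches, concentrating the response at the motif.
ratio_relu = a_relu.max() / a_relu.mean()
ratio_exp = a_exp.max() / a_exp.mean()
print(f"ReLU peak/mean:        {ratio_relu:.2f}")
print(f"Exponential peak/mean: {ratio_exp:.2f}")
```

Because `exp` grows much faster than linearly, near-maximal filter matches dominate the activation map, which is one intuition for why first-layer filters trained with this activation converge to cleaner, more localized motif representations.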
2021 ◽ Vol 2021 ◽ pp. 1-11