Review of "Discrete Neural Networks: A Theoretical Foundation by Kai-Yeung Siu, Vwani Roychowdhury and Thomas Kailath", Prentice Hall PTR, 1995, ISBN 0-13-300708-1.

1998 · Vol 9 (2) · pp. 29-30
Author(s): Vir V. Phoha

2020 · Vol 34 (10) · pp. 13745-13746
Author(s): Nil-Jana Akpinar, Bernhard Kratzwald, Stefan Feuerriegel

Learning to predict solutions to real-valued combinatorial graph problems promises efficient approximations. As demonstrated on the NP-hard edge clique cover number, recurrent neural networks (RNNs) are particularly suited for this task and can even outperform state-of-the-art heuristics. However, the theoretical framework for estimating real-valued RNNs remains poorly understood. As our primary contribution, this is the first work to upper-bound the sample complexity of learning real-valued RNNs. While such derivations have been made earlier for feed-forward and convolutional neural networks, our work presents the first such attempt for recurrent neural networks. Given a single-layer RNN with a rectified linear units (a denoting the number of units) and input of length b, we show that a population prediction error of ε can be realized with at most Õ(a⁴b/ε²) samples. We further derive comparable results for multi-layer RNNs. Accordingly, a size-adaptive RNN fed with graphs of at most n vertices can be learned with Õ(n⁶/ε²) samples, i.e., with only a polynomial number of samples. For combinatorial graph problems, this provides a theoretical foundation that renders RNNs competitive.
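As an illustration of the setting described in the abstract, the sketch below (not the authors' implementation) wires up a single-layer ReLU RNN in PyTorch that reads a graph encoded as a length-b sequence of edge indicators and regresses a real-valued target. The hidden size, the edge-indicator encoding, and the placeholder training target are assumptions made for this example only; in the paper the target would be the edge clique cover number.

    import torch
    import torch.nn as nn

    class ReluRNNRegressor(nn.Module):
        """Single-layer ReLU RNN with a scalar readout."""
        def __init__(self, input_size: int, hidden_size: int):
            super().__init__()
            # One recurrent layer; hidden_size plays the role of "a" in the bound.
            self.rnn = nn.RNN(input_size, hidden_size, num_layers=1,
                              nonlinearity="relu", batch_first=True)
            self.readout = nn.Linear(hidden_size, 1)

        def forward(self, x):                    # x: (batch, b, input_size)
            _, h_last = self.rnn(x)              # final hidden state
            return self.readout(h_last[-1]).squeeze(-1)  # real-valued prediction

    # Encode each graph on n vertices as a length-b sequence of edge indicators
    # (b = n*(n-1)//2 under a fixed vertex ordering) and fit a squared-error loss.
    n = 6
    b = n * (n - 1) // 2
    model = ReluRNNRegressor(input_size=1, hidden_size=16)
    graphs = torch.randint(0, 2, (32, b, 1)).float()  # random edge indicators
    targets = graphs.sum(dim=(1, 2))  # placeholder target, not the true clique cover number
    loss = nn.functional.mse_loss(model(graphs), targets)
    loss.backward()

Read this way, the bound Õ(a⁴b/ε²) states how many such (graph, value) training pairs suffice for the learned model's population prediction error to fall below ε.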

