Sample Complexity Bounds for RNNs with Application to Combinatorial Graph Problems

Abstract

Learning to predict solutions to real-valued combinatorial graph problems promises efficient approximations. As demonstrated on the NP-hard edge clique cover number, recurrent neural networks (RNNs) are particularly suited for this task and can even outperform state-of-the-art heuristics. However, the theoretical framework for estimating real-valued RNNs is only poorly understood. As our primary contribution, we present the first upper bound on the sample complexity of learning real-valued RNNs; while such derivations exist for feed-forward and convolutional neural networks, ours is the first for recurrent neural networks. Given a single-layer RNN with a rectified linear units and inputs of length b, we show that a population prediction error of ϵ can be realized with at most Õ(a⁴b/ϵ²) samples. We further derive comparable results for multi-layer RNNs. Accordingly, a size-adaptive RNN fed with graphs of at most n vertices can be learned with Õ(n⁶/ϵ²) samples, i.e., with only polynomially many samples. For combinatorial graph problems, this provides a theoretical foundation that renders RNNs competitive.
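For concreteness, below is a minimal sketch of the model class these bounds cover: a single-layer ReLU RNN that reads a graph as a sequence and regresses a real-valued target such as the edge clique cover number. The framework (PyTorch), the hidden size, and the adjacency-based encoding are illustrative assumptions, not taken from the paper. One consistent reading of the graph result instantiates the general bound with a = O(n) hidden units and a vertex-pair encoding of length b = O(n²), since a⁴b = n⁴ · n² = n⁶; that reading, too, is our assumption rather than a statement from the abstract.

# A minimal sketch (assuming PyTorch; neither the framework nor the
# hyperparameters below are prescribed by the paper) of a single-layer
# ReLU RNN regressing a real-valued graph statistic from a sequence
# encoding of the graph.
import torch
import torch.nn as nn

class GraphRNNRegressor(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Single recurrent layer with rectified linear units
        # (hidden_size plays the role of a in the bound).
        self.rnn = nn.RNN(input_size, hidden_size,
                          nonlinearity="relu", batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)  # real-valued output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, b, input_size), a length-b input sequence.
        _, h_n = self.rnn(x)
        return self.readout(h_n[-1]).squeeze(-1)

# Illustrative usage: feed a graph on n vertices as the n*n entries of
# its adjacency matrix, one per time step (so b = n^2). This encoding is
# an assumption for illustration; the paper's exact encoding may differ.
n = 8
adj = (torch.rand(n, n) < 0.3).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T                        # symmetric, zero diagonal
x = adj.reshape(1, n * n, 1)             # (batch=1, length n^2, input 1)
model = GraphRNNRegressor(input_size=1, hidden_size=16)
pred = model(x)                          # real-valued prediction, shape (1,)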

Citation (APA)

Akpinar, N. J., Kratzwald, B., & Feuerriegel, S. (2020). Sample complexity bounds for RNNs with application to combinatorial graph problems. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13745–13746). AAAI Press.
