Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review

Abstract

The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems, and conjectures.
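To make the "exponentially better" claim concrete, the paper's two headline approximation results can be paraphrased as follows (this is a summary in the paper's notation, not a verbatim statement of the theorems): for a target function of n variables with smoothness m, approximated to accuracy epsilon by a network with N units, shallow networks generically require a unit count exponential in n, whereas a deep network whose architecture matches a binary-tree compositional structure of the target (constituent functions of 2 variables, each of smoothness m) escapes that dependence:

    N_{\text{shallow}}(\epsilon) = O\left(\epsilon^{-n/m}\right),
    \qquad
    N_{\text{deep}}(\epsilon) = O\left((n-1)\,\epsilon^{-2/m}\right)

The exponent of 1/epsilon drops from n/m to 2/m, which is the sense in which depth avoids the curse of dimensionality for compositional functions.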

Citation (APA)

Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B., & Liao, Q. (2017). Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, 14(5), 503-519. https://doi.org/10.1007/s11633-017-1054-2
