Bounding VC-dimension for neural networks: Progress and prospects


Abstract

Techniques from differential topology are used to give polynomial bounds for the VC-dimension of sigmoidal neural networks. The bounds are quadratic in ω, the dimension of the space of weights. Similar results are obtained for a wide class of Pfaffian activation functions. The obstruction (in differential topology) to improving the bound to an optimal bound O(ω log ω) is discussed, and attention is paid to the role of other parameters involved in the network architecture.
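The gap between the bound obtained and the conjectured optimum can be summarized as follows (a hedged sketch; the symbol ω and the asymptotic forms are taken from the abstract, and the notation VCdim(𝒩) for the network class is an assumption):

```latex
% Sketch of the bounds discussed in the abstract.
% \omega = dimension of the weight space; \mathcal{N} = the class of
% sigmoidal (or Pfaffian-activation) networks under consideration.
\[
\mathrm{VCdim}(\mathcal{N}) \;=\; O(\omega^{2})
\quad\text{(bound established here)},
\qquad\text{versus the conjectured optimal}\quad
O(\omega \log \omega).
\]
```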

Citation (APA)

Karpinski, M., & Macintyre, A. (1995). Bounding VC-dimension for neural networks: Progress and prospects. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 904, pp. 337–341). Springer-Verlag. https://doi.org/10.1007/3-540-59119-2_189
