Deep network approximation characterized by number of neurons

Abstract

This paper quantitatively characterizes the approximation power of deep feed-forward neural networks (FNNs) in terms of the number of neurons. It is shown by construction that ReLU FNNs with width O(max{d⌊N^(1/d)⌋, N+1}) and depth O(L) can approximate an arbitrary Hölder continuous function of order α ∈ (0,1] on [0,1]^d with a nearly tight approximation rate O(√d · N^(−2α/d) L^(−2α/d)) measured in the L^p-norm for any N, L ∈ ℕ⁺ and p ∈ [1,∞]. More generally, for an arbitrary continuous function f on [0,1]^d with a modulus of continuity ω_f(·), the constructive approximation rate is O(√d · ω_f(N^(−2/d) L^(−2/d))). We also extend our analysis to f on irregular domains or on domains localized in an ε-neighborhood of a d_M-dimensional smooth manifold M ⊆ [0,1]^d with d_M ≪ d. In particular, in the case of an essentially low-dimensional domain, we show an approximation rate O(ω_f(ε/(1−δ) · √(d/d_δ) + ε) + √d · ω_f(√d/((1−δ)√d_δ) · N^(−2/d_δ) L^(−2/d_δ))) for ReLU FNNs to approximate f in the ε-neighborhood, where d_δ = O(d_M · ln(d/δ)/δ²) for any δ ∈ (0,1), which serves as the relative error of a projection approximating an isometry when projecting M to a d_δ-dimensional domain.
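As a minimal numerical sketch of the stated scaling (not the paper's construction), the snippet below computes the width max{d⌊N^(1/d)⌋, N+1}, a depth proportional to L, and the nominal rate bound √d · N^(−2α/d) L^(−2α/d) for sample values of d, N, L, α chosen purely for illustration; the random-weight ReLU network is only a size/shape stand-in, since the abstract does not specify the constructive weights.

```python
# Illustrative sketch only: evaluates the width/depth scaling and the nominal
# approximation-rate bound quoted in the abstract for sample N, L, d, alpha.
# The random-weight network is a placeholder for bookkeeping, NOT the paper's
# constructive network.
import math
import numpy as np

def width_depth(d, N, L):
    """Width O(max{d*floor(N^(1/d)), N+1}) and depth O(L), as in the abstract."""
    width = max(d * math.floor(N ** (1.0 / d)), N + 1)
    depth = L  # up to the constant hidden in O(L)
    return width, depth

def rate_bound(d, N, L, alpha):
    """Nominal bound sqrt(d) * N^(-2*alpha/d) * L^(-2*alpha/d), up to constants."""
    return math.sqrt(d) * N ** (-2.0 * alpha / d) * L ** (-2.0 * alpha / d)

def random_relu_fnn(d, width, depth, rng):
    """Random-weight ReLU FNN with the given width and depth (placeholder weights)."""
    sizes = [d] + [width] * depth + [1]
    return [(rng.standard_normal((m, n)) / math.sqrt(m), rng.standard_normal(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Evaluate the ReLU FNN on a batch x of shape (batch, d)."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:          # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x

if __name__ == "__main__":
    d, N, L, alpha = 4, 16, 8, 1.0       # sample values (assumptions, not from the paper)
    width, depth = width_depth(d, N, L)
    print("width:", width, "depth:", depth)
    print("nominal rate bound:", rate_bound(d, N, L, alpha))
    rng = np.random.default_rng(0)
    net = random_relu_fnn(d, width, depth, rng)
    x = rng.random((5, d))               # sample points in [0,1]^d
    print("outputs:", forward(net, x).ravel())
```

Increasing either N or L in this sketch shrinks the printed bound at the rate (NL)^(−2α/d), which is the trade-off between width and depth that the abstract highlights.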

Citation (APA)

Shen, Z., Yang, H., & Zhang, S. (2020). Deep network approximation characterized by number of neurons. Communications in Computational Physics, 28(5), 1768–1811. https://doi.org/10.4208/CICP.OA-2020-0149
