Harmonic analysis of neural networks

Abstract

It is known that superpositions of ridge functions (single hidden-layer feedforward neural networks) may give good approximations to certain kinds of multivariate functions. It remains unclear, however, how to effectively obtain such approximations. In this paper, we use ideas from harmonic analysis to attack this question. We introduce a special admissibility condition for neural activation functions. The new condition is not satisfied by the sigmoid activation in current use by the neural networks community; instead, our condition requires that the neural activation function be oscillatory. Using an admissible neuron we construct linear transforms which represent quite general functions f as a superposition of ridge functions. We develop

• a continuous transform which satisfies a Parseval-like relation;
• a discrete transform which satisfies frame bounds.

Both transforms represent f in a stable and effective way. The discrete transform is more challenging to construct and involves an interesting new discretization of time-frequency-direction space in order to obtain frame bounds for functions in L²(A), where A is a compact subset of ℝⁿ. Ideas underlying these representations are related to Littlewood–Paley theory, wavelet analysis, and group representation theory. © 1999 Academic Press.
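For orientation, here is a brief LaTeX sketch of the kind of construction the abstract describes. The exact normalizations, the measure on parameter space, and the constant K_ψ below are assumptions for illustration, not quoted from the paper.

\[
K_\psi \;=\; \int_{\mathbb{R}} \frac{|\hat{\psi}(\xi)|^{2}}{|\xi|^{n}}\, d\xi \;<\; \infty
\qquad \text{(assumed admissibility condition; it forces } \hat{\psi}(0)=0\text{, i.e. an oscillatory neuron)}
\]

\[
\psi_{a,b,u}(x) \;=\; a^{-1/2}\,\psi\!\left(\frac{u\cdot x - b}{a}\right),
\qquad a>0,\; b\in\mathbb{R},\; u\in S^{n-1}
\qquad \text{(a ridge function: constant along hyperplanes } u\cdot x = \mathrm{const}\text{)}
\]

\[
\int_{S^{n-1}}\!\int_{\mathbb{R}}\!\int_{0}^{\infty}
\bigl|\langle f, \psi_{a,b,u}\rangle\bigr|^{2}\,\frac{da}{a^{n+1}}\,db\,du
\;=\; c_\psi\,\|f\|_{L^{2}}^{2}
\qquad \text{(Parseval-like relation for the continuous transform)}
\]

The discrete transform then samples the parameters (a, b, u) on a suitable grid so that the analogous sum is bounded above and below by constant multiples of ‖f‖²_{L²(A)}, which is what the frame bounds in the abstract refer to.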

Cite (APA)

Candès, E. J. (1999). Harmonic analysis of neural networks. Applied and Computational Harmonic Analysis, 6(2), 197–218. https://doi.org/10.1006/acha.1998.0248
