ReLU networks are universal approximators via piecewise linear or constant functions

Abstract

This letter proves that a ReLU network can approximate any continuous function to arbitrary precision by means of piecewise linear or piecewise constant approximations. For a univariate function f(x), a composition of ReLUs produces a line segment; the subnetworks realizing these line segments together form a ReLU network that is a piecewise linear approximation to f(x). For a multivariate function f(x), ReLU networks are constructed to realize a piecewise linear function obtained from triangulation methods approximating f(x). A neural unit called a TRLU is built from a ReLU network; piecewise constant approximations, such as Haar wavelets, are implemented by rectifying the linear output of a ReLU network via TRLUs. New interpretations of deep layers, along with some other results, are also presented.
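
To make the univariate construction concrete, here is a minimal NumPy sketch of the standard one-hidden-layer arrangement the abstract alludes to: each interior knot contributes one ReLU whose coefficient is the change in slope, so the sum interpolates the target function with straight segments. The helper names (piecewise_linear_relu, soft_step) are illustrative and not from the letter, and soft_step, a steep clipped ramp built from two ReLUs, is only a plausible stand-in for the letter's TRLU-style rectification, whose exact definition the abstract does not give.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def piecewise_linear_relu(x, knots, values):
    """One-hidden-layer ReLU network interpolating (knots[i], values[i])
    with straight segments (a piecewise linear approximation)."""
    slopes = np.diff(values) / np.diff(knots)
    # Affine term covers the first segment; each ReLU then adds the
    # *change* in slope at an interior knot.
    out = values[0] + slopes[0] * (x - knots[0])
    for k, (s_prev, s_next) in zip(knots[1:-1], zip(slopes[:-1], slopes[1:])):
        out += (s_next - s_prev) * relu(x - k)
    return out

def soft_step(x, a, eps=1e-2):
    """Steep clipped ramp from two ReLUs; as eps -> 0 it tends to the
    unit step at a. Illustrative stand-in for a TRLU-style unit."""
    return (relu(x - a) - relu(x - a - eps)) / eps

# Example: approximate f(x) = sin(x) on [0, 2*pi] with 9 knots.
f = np.sin
knots = np.linspace(0, 2 * np.pi, 9)
x = np.linspace(0, 2 * np.pi, 1000)

linear = piecewise_linear_relu(x, knots, f(knots))
print("piecewise linear max error:", np.abs(linear - f(x)).max())

# Piecewise constant (Haar-like) approximation: difference of two steep
# steps acts as an approximate indicator of each cell [a, b).
cells = np.array([soft_step(x, a) - soft_step(x, b)
                  for a, b in zip(knots[:-1], knots[1:])])
constant = f(knots[:-1]) @ cells
print("piecewise constant max error:", np.abs(constant - f(x)).max())
```

Refining the knot grid drives both errors to zero, which is the sense in which such networks are universal approximators; the multivariate case in the letter replaces the knot grid with a triangulation.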

Citation (APA)
Huang, C. (2020). ReLU networks are universal approximators via piecewise linear or constant functions. Neural Computation. https://doi.org/10.1162/neco_a_01316
