Explicit Computation of Input Weights in Extreme Learning Machines

  • Tapson J
  • de Chazal P
  • van Schaik A

Abstract

We present a closed-form expression for initializing the input weights in a multilayer perceptron, which can be used as the first step in the synthesis of an Extreme Learning Machine. The expression is based on the standard form of a separating hyperplane as computed in multilayer perceptrons and linear Support Vector Machines; that is, a linear combination of input data samples. In the absence of supervised training for the input weights, random linear combinations of training data samples are used to project the input data to a higher-dimensional hidden layer. The hidden layer weights are solved in the standard ELM fashion, by computing the pseudoinverse of the hidden layer outputs and multiplying by the desired output values. All weights for this method can be computed in a single pass, and the resulting networks are more accurate and more consistent on some standard problems than regular ELM networks of the same size.
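The single-pass procedure the abstract describes can be sketched in NumPy. This is a minimal illustration, not the paper's exact method: the toy data, the choice of `tanh` activation, and the particular random combination coefficients are all assumptions; only the overall scheme — input weights formed as random linear combinations of training samples, output weights solved via the pseudoinverse — follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data: N samples with d features.
N, d, n_hidden = 200, 5, 50
X = rng.standard_normal((N, d))
y = np.sin(X.sum(axis=1, keepdims=True))

# Input weights as random linear combinations of training samples
# (the specific coefficient distribution here is an assumption),
# rather than the i.i.d. random weights of a regular ELM.
C = rng.standard_normal((n_hidden, N))   # random combination coefficients
W_in = C @ X                             # (n_hidden, d): each row mixes samples
b = rng.standard_normal(n_hidden)        # random biases

# Project inputs to the hidden layer, then solve the output weights
# in the standard ELM fashion: pseudoinverse of the hidden outputs
# multiplied by the desired outputs. No iterative training is needed.
H = np.tanh(X @ W_in.T + b)              # (N, n_hidden) hidden activations
W_out = np.linalg.pinv(H) @ y            # single-pass least-squares solution

pred = H @ W_out
mse = float(np.mean((pred - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

Because the output layer is linear, `np.linalg.pinv` gives the minimum-norm least-squares weights in one step, which is what makes the whole network computable in a single pass.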

Citation (APA)
Tapson, J., de Chazal, P., & van Schaik, A. (2015). Explicit Computation of Input Weights in Extreme Learning Machines (pp. 41–49). https://doi.org/10.1007/978-3-319-14063-6_4
