This paper introduces a computational unit for feedforward learning networks of the backpropagation type that computes a weighted product, in which each input is raised to a power determined by an adjustable weight. The unit can learn to represent generalized polynomial terms in the inputs and can form a better representation of data in cases where higher-order combinations of inputs are significant, without increasing the number of free parameters. Product units can be trained by gradient descent, allow simpler solutions to standard learning problems, have a higher empirical learning capacity than summing units, and can act to create a hidden-layer representation for an output summing unit. A neurobiological interpretation of this combination of product and summing units in terms of a single neuron is presented.
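The weighted-product computation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is ours, and we assume strictly positive inputs so the product can be evaluated in the log domain (the log form also makes the gradient with respect to each weight straightforward).

```python
import numpy as np

def product_unit(x, w):
    """Output of a product unit: prod_i x_i ** w_i.

    Assumes strictly positive inputs, so the identity
    prod_i x_i**w_i = exp(sum_i w_i * log(x_i)) applies.
    """
    return np.exp(np.sum(w * np.log(x)))

def product_unit_grad(x, w):
    """Gradient of the output with respect to the weights:
    d/dw_i [exp(sum_j w_j log x_j)] = y * log(x_i)."""
    y = product_unit(x, w)
    return y * np.log(x)

# A product unit with weights [2, 1] computes the polynomial term x0**2 * x1.
x = np.array([3.0, 2.0])
w = np.array([2.0, 1.0])
y = product_unit(x, w)  # 3**2 * 2 = 18.0
```

With integer weights the unit realizes an ordinary polynomial term; because the exponents are continuous, learnable parameters, gradient descent can also settle on fractional powers, which is what lets a single product unit stand in for higher-order input combinations without extra parameters.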