One-shot learning with feedback for multi-layered convolutional network

Abstract

This paper proposes an improved add-if-silent rule, which is suited for training the intermediate layers of a multi-layered convolutional network such as the neocognitron. Under the add-if-silent rule, a new cell is generated when all postsynaptic cells are silent. The generated cell learns the activity of the presynaptic cells in one shot, and its input connections are never modified afterward. To apply this learning rule to a convolutional network, one must decide at which retinotopic location the rule is to be applied. In the conventional add-if-silent rule, the location where the activity of the presynaptic cells is largest is chosen. In the proposed rule, negative feedback is introduced from postsynaptic cells to presynaptic cells, and a new cell is generated at the location where the presynaptic activity fails to be suppressed by the feedback. We apply this learning rule to a neocognitron for handwritten digit recognition and demonstrate a decrease in the recognition error. © 2014 Springer International Publishing Switzerland.
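The following is a minimal NumPy sketch of how the proposed rule could be realized for one layer; it is an illustration under stated assumptions, not the paper's implementation. The function name, the silence threshold `theta`, and the use of normalized inner products as postsynaptic responses are assumptions not specified in the abstract.

```python
import numpy as np

def add_if_silent_with_feedback(x, weights, theta=0.7, eps=1e-8):
    """
    One training step of a hypothetical add-if-silent rule with feedback.

    x       : (C, H, W) presynaptic activity (feature channels at each retinotopic location)
    weights : list of (C,) unit-norm weight vectors, one per existing cell plane
    theta   : silence threshold on the normalized postsynaptic response (assumed value)

    Returns the (possibly extended) list of weight vectors.
    """
    C, H, W = x.shape
    u = x.reshape(C, -1)                       # presynaptic vectors, one column per location
    norms = np.linalg.norm(u, axis=0) + eps

    if weights:
        Wmat = np.stack(weights)               # (K, C), one row per cell plane
        s = Wmat @ u / norms                   # normalized postsynaptic responses, (K, locations)
        # Feedback: active postsynaptic cells send back the presynaptic component they explain.
        active = np.where(s > theta, s, 0.0)
        fb = Wmat.T @ (active * norms)         # (C, locations) reconstruction via feedback
        residual = u - fb                      # presynaptic activity left unsuppressed
    else:
        residual = u

    # Generate a new cell at the location where the feedback fails to suppress the
    # presynaptic activity, provided every postsynaptic cell is silent there.
    loc = int(np.argmax(np.linalg.norm(residual, axis=0)))
    silent = (not weights) or np.all(s[:, loc] <= theta)
    if silent and norms[loc] > eps:
        w_new = u[:, loc] / norms[loc]         # one-shot learning: copy the presynaptic pattern
        weights = weights + [w_new]            # input connections stay fixed afterward
    return weights
```

In this sketch the conventional rule would simply pick the location with the largest presynaptic norm, whereas the feedback variant selects the location whose activity remains largest after the reconstruction by already-active cells is subtracted.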

Citation (APA)

Fukushima, K. (2014). One-shot learning with feedback for multi-layered convolutional network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8681 LNCS, pp. 291–298). Springer Verlag. https://doi.org/10.1007/978-3-319-11179-7_37
