Neural information processing in hierarchical prototypical networks


Abstract

Prototypical networks (PTNs), which classify unseen data points by their distances to class prototypes, are a promising model for the few-shot learning problem. Mimicking characteristics of neural systems, the present study extends PTNs in two aspects. First, we develop hierarchical prototypical networks (HPTNs), which construct prototypes at every layer and minimize a weighted sum of the classification errors of all layers. On two benchmark datasets, an HPTN performs comparably to, or slightly better than, a PTN, and after training it generates good prototype representations in the intermediate layers of the network. Second, we show that the classification operation via distance computation in a PTN can be approximately replaced by the attracting dynamics of a Hopfield model, indicating a potential realization of metric learning in neural systems. We hope this study establishes a link between PTNs and neural information processing.
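The two ingredients described above can be sketched in a few lines of NumPy: a nearest-prototype readout (the core PTN operation) and a classical Hopfield network whose attractor dynamics pull a probe toward the nearest stored pattern, approximating that readout without explicit distance computation. The function names and toy data below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def class_prototypes(support, labels, n_classes):
    """Prototype of each class = mean of its support embeddings."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def nearest_prototype(queries, protos):
    """Assign each query to the class whose prototype is closest
    (squared Euclidean distance, as in standard prototypical networks)."""
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def hopfield_retrieve(patterns, probe, steps=10):
    """Classical Hopfield network with Hebbian weights: iterating the sign
    dynamics drives the probe toward the stored +/-1 pattern it overlaps
    with most, playing the role of the arg-min over distances."""
    W = patterns.T @ patterns      # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)       # no self-coupling
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0            # break ties toward +1
    return s
```

With prototypes encoded as binary patterns, a corrupted probe relaxes onto the attractor it most resembles, so the converged state identifies the same class that the explicit distance comparison would.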

Citation (APA)

Ji, Z., Zou, X., Liu, X., Huang, T., Mi, Y., & Wu, S. (2018). Neural information processing in hierarchical prototypical networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11303 LNCS, pp. 603–611). Springer Verlag. https://doi.org/10.1007/978-3-030-04182-3_53
