Interpolating convolutional neural networks using batch normalization

Abstract

Perceiving a visual concept as a mixture of learned ones is natural for humans, helping them grasp new concepts and strengthen old ones. For all their power and recent success, deep convolutional networks lack this ability. Inspired by recent work on universal representations for neural networks, we propose a simple emulation of this mechanism by repurposing batch normalization layers to discriminate visual classes, and formulating a way to combine them to solve new tasks. We show that this can be applied to 2-way few-shot learning, where we obtain between 4% and 17% better accuracy than straightforward full fine-tuning, and demonstrate that it also extends to the orthogonal application of style transfer.
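The abstract describes repurposing batch normalization layers as per-class parameters and combining them to handle new tasks. As a minimal illustrative sketch of one plausible combination scheme, the snippet below interpolates per-class BN statistics and affine parameters as a convex combination; the function names, parameter layout, and mixing rule are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def batchnorm(x, mean, var, gamma, beta, eps=1e-5):
    """Batch-norm-style normalization with fixed statistics and affine params."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def interpolate_bn(params, alphas):
    """Convex combination of per-class BN parameter sets.

    params: list of dicts with keys 'mean', 'var', 'gamma', 'beta'
            (one dict per learned class)
    alphas: mixing weights, assumed non-negative and summing to 1
    """
    alphas = np.asarray(alphas, dtype=float)
    assert np.all(alphas >= 0) and np.isclose(alphas.sum(), 1.0)
    # Mix each parameter tensor element-wise across classes.
    return {
        key: sum(a * p[key] for a, p in zip(alphas, params))
        for key in ('mean', 'var', 'gamma', 'beta')
    }

# Example: blend two per-class BN parameter sets for a 3-channel feature map.
p0 = {'mean': np.zeros(3), 'var': np.ones(3),
      'gamma': np.ones(3), 'beta': np.zeros(3)}
p1 = {'mean': np.ones(3), 'var': 2 * np.ones(3),
      'gamma': 2 * np.ones(3), 'beta': np.ones(3)}
mixed = interpolate_bn([p0, p1], alphas=[0.5, 0.5])
features = np.random.randn(4, 3)          # (batch, channels)
out = batchnorm(features, **mixed)        # normalize with the blended layer
```

In this sketch, a degenerate weight vector (e.g. `[1, 0]`) recovers a single class's normalization exactly, while intermediate weights interpolate between classes.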

Citation (APA)

Data, G. W. P., Ngu, K., Murray, D. W., & Prisacariu, V. A. (2018). Interpolating convolutional neural networks using batch normalization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11217 LNCS, pp. 591–606). Springer Verlag. https://doi.org/10.1007/978-3-030-01261-8_35
