Improving transferability of deep neural networks

Abstract

Learning from small amounts of labeled data is a challenge in deep learning. This is commonly addressed by transfer learning, where a model for the small target dataset is learned as a transfer task from a larger source dataset. Transfer learning can deliver higher accuracy if the hyperparameters and source dataset are chosen well. One important hyperparameter is the learning rate assigned to each layer of the neural network. We show through experiments on the ImageNet22k and Oxford Flowers datasets that accuracy improvements in the range of 127% can be obtained by a proper choice of learning rates. We also show that the images-per-label statistic of a dataset can potentially be used to determine the per-layer learning rates that yield the best overall accuracy. We additionally validate this method on a sample of real-world image classification tasks from a public visual recognition API.
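The core mechanism the abstract describes — assigning a different learning rate to each layer when fine-tuning a pretrained network — can be sketched in plain Python. The layer names, rates, and values below are illustrative assumptions, not the paper's actual configuration; the idea is simply that lower (generic) layers receive a small rate to preserve pretrained features while the task-specific head receives a larger one.

```python
# Minimal sketch of per-layer learning rates for fine-tuning.
# "base" stands in for pretrained early layers, "head" for the
# newly attached task-specific layer; all values are hypothetical.

def sgd_step(params, grads, lrs):
    """One SGD update with a separate learning rate per layer."""
    return {
        layer: [w - lrs[layer] * g
                for w, g in zip(params[layer], grads[layer])]
        for layer in params
    }

params = {"base": [1.0, 2.0], "head": [0.5]}
grads  = {"base": [0.1, 0.1], "head": [0.4]}
lrs    = {"base": 0.001, "head": 0.1}  # small rate for base, larger for head

new_params = sgd_step(params, grads, lrs)
print(new_params)
# base weights barely move; head weight takes a much larger step
```

In frameworks such as PyTorch the same effect is achieved by passing parameter groups with distinct `lr` values to the optimizer; the sketch above just makes the per-layer update rule explicit.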

Citation (APA)

Dube, P., Bhattacharjee, B., Petit-Bois, E., & Hill, M. (2020). Improving transferability of deep neural networks. In Domain Adaptation for Visual Understanding (pp. 51–64). Springer International Publishing. https://doi.org/10.1007/978-3-030-30671-7_4
