Joint Learning of Generative Translator and Classifier for Visually Similar Classes



Abstract

In this paper, we propose a Generative Translation Classification Network (GTCN) to improve visual classification accuracy in settings where classes are visually similar and data is scarce. To this end, we propose joint learning from scratch to train a classifier and a generative stochastic translation network end-to-end. The translation network performs on-line data augmentation across classes, whereas previous works have mostly applied such translation to domain adaptation. To help the model further benefit from this data augmentation, we introduce an adaptive fade-in loss and a quadruplet loss. We conduct experiments on multiple datasets to demonstrate the proposed method's performance in varied settings. Notably, training on only 40% of the dataset is enough for our model to surpass baselines trained on the full dataset. When our architecture is trained on the full dataset, it achieves performance comparable to state-of-the-art methods despite using a lightweight architecture.
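The abstract does not spell out the adaptive fade-in loss or the quadruplet loss, so the PyTorch sketch below is only illustrative. It assumes a common margin-based quadruplet formulation (anchor, positive, and two negatives) and a linear ramp on the weight given to translated samples; the function names (quadruplet_loss, fade_in_weight, training_step) and the classifier/translator interfaces are hypothetical, not the authors' code.

    import torch.nn.functional as F

    def quadruplet_loss(anchor, positive, negative1, negative2,
                        margin1=1.0, margin2=0.5):
        """Margin-based quadruplet loss over embedding batches of shape (N, D).

        Assumed formulation: a standard triplet term plus a second term
        that also pushes apart the two negatives drawn from different classes.
        """
        d_ap = F.pairwise_distance(anchor, positive)      # anchor-positive
        d_an = F.pairwise_distance(anchor, negative1)     # anchor-negative
        d_nn = F.pairwise_distance(negative1, negative2)  # negative-negative
        loss = F.relu(d_ap - d_an + margin1) + F.relu(d_ap - d_nn + margin2)
        return loss.mean()

    def fade_in_weight(step, ramp_steps=10_000):
        """Linearly ramp the weight on translated samples from 0 to 1."""
        return min(1.0, step / ramp_steps)

    def training_step(classifier, translator, x, y, y_target, step):
        """One joint step: real samples always count; translated ones fade in."""
        logits_real = classifier(x)
        loss_real = F.cross_entropy(logits_real, y)

        x_translated = translator(x, y_target)   # cross-class translation
        logits_fake = classifier(x_translated)
        loss_fake = F.cross_entropy(logits_fake, y_target)

        w = fade_in_weight(step)
        return loss_real + w * loss_fake

In this sketch, the fade-in ramp keeps early, low-quality translations from dominating the classification loss, which is presumably the motivation for an adaptive weighting in the paper.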

Citation (APA)

Yoo, B., Sylvain, T., Bengio, Y., & Kim, J. (2020). Joint Learning of Generative Translator and Classifier for Visually Similar Classes. IEEE Access, 8, 219160–219173. https://doi.org/10.1109/ACCESS.2020.3042302
