Transfer learning using progressive neural networks and NMT for classification tasks in NLP

Abstract

Recently, neural networks have achieved state-of-the-art results on many NLP tasks such as sentiment classification and machine translation. However, one drawback of these techniques is that they require large amounts of training data. Even though a lot of data is generated every day, not all tasks have large labeled datasets. One possible solution when data is insufficient is to use transfer learning. In this paper, we explore methods of transfer learning (sharing parameters) between different tasks so that performance on low-resource tasks is improved. We first attempt to replicate prior results of transfer learning between semantically related tasks. For semantically different tasks, we try Progressive Neural Networks. We also experiment with sharing the encoder of a neural machine translation model with classification tasks.
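To make the progressive-network idea concrete, below is a minimal sketch (in PyTorch; the layer sizes, two-column setup, and class names are illustrative assumptions, not the paper's actual architecture) of how a new column for a low-resource task can be trained while a previously trained column stays frozen and feeds it through lateral connections.

```python
# Illustrative sketch of a progressive-network-style column (assumed PyTorch;
# dimensions and names are hypothetical, not taken from the paper).
import torch
import torch.nn as nn

class Column(nn.Module):
    """A small feed-forward column: input -> hidden -> logits."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.out(h), h  # expose hidden activations for lateral reuse

class ProgressiveColumn(nn.Module):
    """New column for the target task; it reads the frozen source column's
    hidden activations through a lateral adapter."""
    def __init__(self, source: Column, in_dim, hid_dim, out_dim):
        super().__init__()
        self.source = source
        for p in self.source.parameters():
            p.requires_grad = False  # source column stays fixed
        self.hidden = nn.Linear(in_dim, hid_dim)
        self.lateral = nn.Linear(source.hidden.out_features, hid_dim)
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        with torch.no_grad():
            _, h_src = self.source(x)
        h = torch.relu(self.hidden(x) + self.lateral(h_src))
        return self.out(h)

# Usage: pretrain col1 on the high-resource task, then train col2 on the
# low-resource task while col1's weights remain frozen.
col1 = Column(in_dim=300, hid_dim=128, out_dim=2)
col2 = ProgressiveColumn(col1, in_dim=300, hid_dim=128, out_dim=5)
logits = col2(torch.randn(4, 300))
print(logits.shape)  # torch.Size([4, 5])
```

Sharing an NMT encoder with a classifier follows the same pattern: the encoder replaces the frozen (or fine-tuned) source module, and a classification head is trained on top of its representations.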

Citation (APA)

Devanapalli, R. S., & Devi, V. S. (2018). Transfer learning using progressive neural networks and NMT for classification tasks in NLP. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11303 LNCS, pp. 188–197). Springer Verlag. https://doi.org/10.1007/978-3-030-04182-3_17
