Same representation, different attentions: Shareable sentence representation learning from multiple tasks

Abstract

Distributed representations play an important role in deep learning-based natural language processing. However, the representation of a sentence often varies across tasks: it is usually learned from scratch and suffers from limited training data. In this paper, we claim that a good sentence representation should be invariant and benefit various subsequent tasks. To this end, we propose a new information-sharing scheme for multi-task learning. Specifically, all tasks share the same sentence representation, and each task selects task-specific information from the shared representation with an attention mechanism. The query vector of each task's attention can be either a static parameter or generated dynamically. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our architecture. The source code for this paper is available on GitHub.
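The core idea of the abstract can be sketched in a few lines: a shared sentence representation (e.g. a matrix of hidden states) is pooled into a task-specific vector via attention, where each task owns its own query vector. The following NumPy sketch is illustrative only; all names and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def task_attention_pool(H, q):
    """Pool a shared representation H (seq_len x dim) into one
    task-specific vector using attention with query q (dim,)."""
    scores = H @ q            # relevance of each position to this task
    alpha = softmax(scores)   # attention weights, sum to 1
    return alpha @ H, alpha   # weighted sum over positions

# Hypothetical setup: a 5-token sentence with 8-dim hidden states, 3 tasks,
# each task holding a static (learnable, in a real model) query vector.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))        # shared sentence representation
queries = rng.standard_normal((3, 8))  # one query per task

pooled = [task_attention_pool(H, q)[0] for q in queries]
```

Each entry of `pooled` is a fixed-size vector that a task-specific classifier could consume; the "dynamic query" variant mentioned in the abstract would generate `q` from the input instead of storing it as a parameter.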

Citation (APA)

Zheng, R., Chen, J., & Qiu, X. (2018). Same representation, different attentions: Shareable sentence representation learning from multiple tasks. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 4616–4622). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/642
