Keeping consistency of sentence generation and document classification with multi-task learning

Abstract

The automated generation of information that characterizes an article, such as headlines, key phrases, summaries and categories, helps writers alleviate their workload. Previous research has tackled these tasks with neural abstractive summarization and classification methods. However, when the outputs are generated individually, they may be inconsistent with one another. The purpose of our study is to generate multiple outputs consistently. We introduce a multi-task learning model with a shared encoder and a separate decoder for each task. We propose a novel loss function, called hierarchical consistency loss, that maintains consistency among the attention weights of the decoders. To evaluate consistency, we conduct a human evaluation. The results show that our model generates more consistent headlines, key phrases and categories. In addition, our model outperforms the baseline model on ROUGE scores and generates more adequate and fluent headlines.
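The core idea, a shared encoder with task-specific decoders whose attention weights are encouraged to agree, can be illustrated with a simplified consistency penalty. The sketch below (names and details are our own assumptions, not the paper's exact hierarchical consistency loss) averages each decoder's attention over its decoding steps to get a distribution over source tokens, then penalizes divergence between two decoders with a symmetric KL term:

```python
import numpy as np

def attention_consistency_loss(attn_a, attn_b, eps=1e-8):
    """Simplified consistency penalty between two decoders' attention.

    attn_a, attn_b: (decoder_steps, source_len) attention weight
    matrices whose rows sum to 1. Each decoder's step-wise attention
    is averaged into one distribution over source tokens, and the
    penalty is the symmetric KL divergence between the two
    distributions. (Illustrative only; the paper's hierarchical
    consistency loss differs in detail.)
    """
    p = attn_a.mean(axis=0)
    q = attn_b.mean(axis=0)
    p = p / p.sum()
    q = q / q.sum()
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return 0.5 * (kl_pq + kl_qp)

# Identical attention patterns incur ~zero penalty;
# divergent ones incur a positive penalty.
same = np.full((3, 4), 0.25)
skewed = np.tile(np.array([0.7, 0.1, 0.1, 0.1]), (3, 1))
assert attention_consistency_loss(same, same) < 1e-6
assert attention_consistency_loss(same, skewed) > 0.1
```

In training, a term like this would be added to the sum of the per-task losses, pushing the headline, key-phrase and category decoders to attend to the same source content.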

CITATION STYLE

APA

Nishino, T., Misawa, S., Kano, R., Taniguchi, T., Miura, Y., & Ohkuma, T. (2019). Keeping consistency of sentence generation and document classification with multi-task learning. In EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 3195–3205). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-1315
