Transductive auxiliary task self-training for neural multi-task models

Abstract

Multi-task learning and self-training are two common ways to improve a machine learning model's performance in settings with limited training data. Drawing heavily on ideas from those two approaches, we suggest transductive auxiliary task self-training: training a multi-task model on (i) a combination of main and auxiliary task training data, and (ii) test instances with auxiliary task labels which a single-task version of the model has previously generated. We perform extensive experiments on 86 combinations of languages and tasks. Our results show that, on average, transductive auxiliary task self-training improves accuracy by up to 9.56% absolute over the pure multi-task model for dependency relation tagging and by up to 13.03% for semantic tagging.
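The procedure described above can be sketched in a few steps: train a single-task model on the auxiliary task, use it to label the test instances, then train the multi-task model on the main and auxiliary training data plus the self-labeled test instances. The following Python sketch illustrates one possible realization with a shared-encoder PyTorch tagger and synthetic toy data; the architecture, shapes, hyperparameters, and data are illustrative assumptions, not the authors' actual setup.

# Minimal sketch of transductive auxiliary task self-training.
# Everything below (model, toy data, hyperparameters) is an assumption
# for illustration, not the paper's implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, MAIN_CLASSES, AUX_CLASSES = 100, 32, 5, 3

class MultiTaskTagger(nn.Module):
    """Shared encoder with one head per task (e.g. dependency relations and semantic tags)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Embedding(VOCAB, DIM), nn.Linear(DIM, DIM), nn.ReLU())
        self.main_head = nn.Linear(DIM, MAIN_CLASSES)
        self.aux_head = nn.Linear(DIM, AUX_CLASSES)

    def forward(self, x):
        h = self.encoder(x)
        return self.main_head(h), self.aux_head(h)

def train(model, batches, epochs=50):
    """Each batch is (inputs, labels, task); the task string selects the head."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y, task in batches:
            main_logits, aux_logits = model(x)
            logits = main_logits if task == "main" else aux_logits
            opt.zero_grad()
            loss_fn(logits, y).backward()
            opt.step()

# Toy token-level data (illustrative only).
train_x = torch.randint(0, VOCAB, (64,))
train_main_y = torch.randint(0, MAIN_CLASSES, (64,))
train_aux_y = torch.randint(0, AUX_CLASSES, (64,))
test_x = torch.randint(0, VOCAB, (32,))  # test instances without labels

# Step 1: train a single-task model on the auxiliary task only
# (same architecture, but only the auxiliary head receives gradients).
aux_model = MultiTaskTagger()
train(aux_model, [(train_x, train_aux_y, "aux")])

# Step 2: self-label the test instances with auxiliary-task predictions.
with torch.no_grad():
    _, aux_logits = aux_model(test_x)
    pseudo_aux_y = aux_logits.argmax(dim=-1)

# Step 3: train the multi-task model on main + auxiliary training data
# plus the self-labeled test instances (the transductive part).
mtl_model = MultiTaskTagger()
train(mtl_model, [
    (train_x, train_main_y, "main"),
    (train_x, train_aux_y, "aux"),
    (test_x, pseudo_aux_y, "aux"),
])

# Step 4: predict main-task labels for the test instances.
with torch.no_grad():
    main_logits, _ = mtl_model(test_x)
    predictions = main_logits.argmax(dim=-1)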

Citation (APA)

Bjerva, J., Kann, K., & Augenstein, I. (2019). Transductive auxiliary task self-training for neural multi-task models. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource Natural Language Processing (DeepLo@EMNLP-IJCNLP 2019) (pp. 253–258). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-6128
