Meta-learning for effective multi-task and multilingual modelling


Abstract

Natural language processing (NLP) tasks (e.g., question-answering in English) benefit from knowledge of other tasks (e.g., named entity recognition in English) and of other languages (e.g., question-answering in Spanish). Such shared representations are typically learned in isolation, either across tasks or across languages. In this work, we propose a meta-learning approach to learn the interactions between both tasks and languages. We also investigate the role of different sampling strategies used during meta-learning. We present experiments on five different tasks and six different languages from the XTREME multilingual benchmark dataset (Hu et al., 2020). Our meta-learned model clearly outperforms competitive baselines, including multi-task baselines. We also present zero-shot evaluations on unseen target languages to demonstrate the utility of our proposed model.
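The abstract does not spell out the algorithm, but the core idea of episode-based meta-learning over (task, language) pairs can be illustrated with a generic first-order meta-learner. The sketch below is a minimal Reptile-style loop with toy data, a toy model, and uniform episode sampling; all names and hyperparameters are placeholder assumptions, and this is an illustration of the general technique rather than the authors' exact method.

```python
# Generic Reptile-style meta-learning sketch over (task, language) episodes.
# NOT the paper's algorithm: the model, data, and sampling strategy are
# illustrative placeholders.
import copy
import random
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical (task, language) episodes with toy regression data.
pairs = [("qa", "en"), ("ner", "en"), ("qa", "es")]
data = {p: (torch.randn(32, 16), torch.randn(32, 1)) for p in pairs}

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for step in range(100):
    # Uniform episode sampling; the paper studies other strategies,
    # which would plug in here.
    pair = random.choice(pairs)
    x, y = data[pair]

    # Inner loop: adapt a copy of the shared model on this episode.
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss_fn(fast(x), y).backward()
        opt.step()

    # Outer (Reptile) update: move shared weights toward adapted weights.
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```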

Citation (APA)

Tarunesh, I., Khyalia, S., Kumar, V., Ramakrishnan, G., & Jyothi, P. (2021). Meta-learning for effective multi-task and multilingual modelling. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3600–3612). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.314
