Combining cross-lingual and cross-task supervision for zero-shot learning

Abstract

In this work we combine cross-lingual and cross-task supervision for zero-shot learning. Our main contribution is the finding that coupling models, i.e., models that share neither a task nor a language with the zero-shot target model, can significantly improve results. Coupling models act as regularization for the other auxiliary models, which provide direct cross-lingual and cross-task supervision. We conducted a series of experiments with four Indo-European languages and four tasks (dependency parsing, language modeling, named entity recognition and part-of-speech tagging) in various settings, and achieved a 32% error reduction compared to using cross-lingual supervision alone.
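
The abstract gives no implementation details, but the described setup lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of such a training scheme: one encoder shared across all languages and tasks, per-task heads, and auxiliary (language, task) pairs providing supervision while the zero-shot target pair receives none. The language set, model sizes, the reduction of every task to token classification, and all identifiers are assumptions made for illustration; none of them are taken from the paper.

```python
# Hypothetical sketch of the training scheme sketched in the abstract.
# Assumed: language set, label sizes, architecture, and all names.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128
TASKS = {"pos": 17, "ner": 9, "dep": 40, "lm": VOCAB}  # label sizes (assumed)
LANGS = ["en", "de", "es", "cs"]                        # assumed language set
TARGET = ("cs", "pos")                                  # zero-shot target pair

class SharedModel(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder shared across every language and task.
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.LSTM(EMB, HID, batch_first=True, bidirectional=True)
        # One head per task, shared across languages; every task is
        # treated as token classification here for simplicity.
        self.heads = nn.ModuleDict(
            {t: nn.Linear(2 * HID, n) for t, n in TASKS.items()}
        )

    def forward(self, tokens, task):
        h, _ = self.enc(self.emb(tokens))
        return self.heads[task](h)

def pair_kind(lang, task):
    """Classify an auxiliary (language, task) pair relative to the target."""
    if (lang, task) == TARGET:
        return "target"    # never trained on directly (zero-shot)
    if lang == TARGET[0] or task == TARGET[1]:
        return "direct"    # shares the target's language or task
    return "coupling"      # shares neither; acts as a regularizer

model = SharedModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    for lang in LANGS:
        for task in TASKS:
            if pair_kind(lang, task) == "target":
                continue  # the target pair gets no direct supervision
            # Dummy batch standing in for real annotated data.
            tokens = torch.randint(0, VOCAB, (8, 12))
            gold = torch.randint(0, TASKS[task], (8, 12))
            logits = model(tokens, task)
            loss = loss_fn(logits.reshape(-1, TASKS[task]), gold.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

# At evaluation time, the target pair reuses the shared encoder and the
# target task's head, both trained only indirectly through other pairs.
```

In this sketch, the "coupling" pairs update only the shared encoder from directions unrelated to the target's language and task, which is consistent with the abstract's description of coupling models as a regularizer for the directly supervising auxiliary models.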

Citation (APA)
Pikuliak, M., & Šimko, M. (2020). Combining cross-lingual and cross-task supervision for zero-shot learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12284 LNAI, pp. 162–170). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58323-1_17
