Cross-modal language generation using pivot stabilization for web-scale language coverage

9 citations · 108 Mendeley readers

Abstract

Cross-modal language generation tasks such as image captioning are directly hurt in their ability to support non-English languages by the trend of data-hungry models combined with the lack of non-English annotations. We investigate potential solutions that combine existing language-generation annotations in English with translation capabilities in order to create solutions at web scale in both domain and language coverage. We describe an approach called Pivot-Language Generation Stabilization (PLuGS), which leverages directly at training time both existing English annotations (gold data) and their machine-translated versions (silver data); at run-time, it first generates an English caption and then a corresponding target-language caption. We show that PLuGS models outperform other candidate solutions in evaluations performed over 5 different target languages, on a large-domain test set using images from the Open Images dataset. Furthermore, we find an interesting effect whereby the English captions generated by the PLuGS models are better than the captions generated by the original, monolingual English model.
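The abstract's description of PLuGS can be illustrated with a minimal sketch of how such training targets and run-time outputs might be handled. This is an illustrative assumption, not the paper's actual implementation: the separator token, function names, and example captions are all hypothetical, chosen only to show the pivot-then-target structure of the generated sequence.

```python
# Hypothetical sketch of PLuGS-style pivot stabilization (not the paper's code):
# each training target concatenates the gold English (pivot) caption with the
# machine-translated silver caption, so that at run-time the model first emits
# English and then the target-language caption.

SEP = "<sep>"  # assumed separator token between pivot and target captions


def build_training_target(english_gold: str, target_silver: str) -> str:
    """Join gold English and silver target-language captions into one sequence."""
    return f"{english_gold} {SEP} {target_silver}"


def split_generated_output(generated: str) -> tuple[str, str]:
    """Recover (english, target) captions from a generated sequence."""
    english, _, target = generated.partition(f" {SEP} ")
    return english, target


# Usage with made-up captions:
seq = build_training_target("A dog runs on the beach.", "Ein Hund läuft am Strand.")
en, de = split_generated_output(seq)
```

At evaluation time, the English half could be scored against gold English references while the second half is scored in the target language, matching the dual-output behavior the abstract describes.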

Citation (APA)

Thapliyal, A. V., & Soricut, R. (2020). Cross-modal language generation using pivot stabilization for web-scale language coverage. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 160–170). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.16
