Supervised seeded iterated learning for interactive language learning


Abstract

Language drift has been one of the major obstacles to training language models through interaction. When word-based conversational agents are trained towards completing a task, they tend to invent their own language rather than leveraging natural language. In the recent literature, two general methods partially counter this phenomenon: Supervised Selfplay (S2P) and Seeded Iterated Learning (SIL). While S2P jointly trains interactive and supervised losses to counter the drift, SIL changes the training dynamics to prevent language drift from occurring. In this paper, we first highlight their respective weaknesses, namely late-stage training collapse and a higher negative log-likelihood when evaluated on a human corpus. Given these observations, we introduce Supervised Seeded Iterated Learning (SSIL), which combines both methods so as to mitigate their respective weaknesses. We then show the effectiveness of SSIL in the language-drift translation game.
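To make the combination concrete, here is a minimal sketch of an SSIL-style loop: SIL's teacher/student generations with S2P's joint loss folded into the interactive phase, which is one plausible reading of the combination the abstract describes. Everything below (the toy classification agent, `human_batch`, the confidence-bonus `interactive_loss`, `ALPHA`, and the step counts) is an illustrative assumption, not the paper's translation-game setup or the authors' code.

```python
"""A minimal sketch of an SSIL-style training loop, assuming a toy
classification agent so the code runs end to end. Agent, data,
interactive objective, and hyperparameters are illustrative stand-ins."""
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 32, 16


def make_agent() -> nn.Module:
    # Toy stand-in for the speaker model.
    return nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, VOCAB))


def human_batch(n: int = 64):
    # Stand-in for a batch from the human ("seed") corpus.
    return torch.randn(n, DIM), torch.randint(0, VOCAB, (n,))


def supervised_nll(agent: nn.Module, batch) -> torch.Tensor:
    # Negative log-likelihood on human data: the supervised term S2P adds.
    x, y = batch
    return F.cross_entropy(agent(x), y)


def interactive_loss(agent: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Toy stand-in for the self-play task objective; in the paper this is
    # the translation-game objective, not a confidence bonus.
    return -F.log_softmax(agent(x), dim=-1).max(dim=-1).values.mean()


def imitation_loss(student, teacher, x) -> torch.Tensor:
    # SIL's imitation phase: the student fits the teacher's greedy outputs.
    with torch.no_grad():
        targets = teacher(x).argmax(dim=-1)
    return F.cross_entropy(student(x), targets)


def train(agent, loss_fn, steps=50, lr=1e-3):
    opt = torch.optim.Adam(agent.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(agent).backward()
        opt.step()


# Seeding: pretrain the agent on the human corpus.
student = make_agent()
train(student, lambda a: supervised_nll(a, human_batch()), steps=200)

ALPHA = 0.5  # weight of the supervised term (illustrative value)
for generation in range(10):
    # Interactive phase on a duplicated teacher, using the S2P joint loss
    # (interactive + ALPHA * supervised); mixing the supervised term into
    # this phase is the SSIL combination as sketched here.
    teacher = copy.deepcopy(student)
    train(
        teacher,
        lambda a: interactive_loss(a, human_batch()[0])
        + ALPHA * supervised_nll(a, human_batch()),
    )
    # Imitation phase: the student distills the teacher, damping whatever
    # drift the interactive phase introduced.
    train(student, lambda a: imitation_loss(a, teacher, human_batch()[0]))
```

The structural point of the sketch: the supervised NLL anchors the teacher to human language during interaction (the S2P ingredient), while the periodic imitation step refits the student on the teacher's greedy outputs (the SIL ingredient).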

Cite (APA)

Lu, Y., Singhal, S., Strub, F., Pietquin, O., & Courville, A. (2020). Supervised seeded iterated learning for interactive language learning. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 3962–3970). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.325
