Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

94 citations · 266 Mendeley readers

Abstract

Generative dialogue models currently suffer from a number of problems that standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019a) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last, more general issue, we show that applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
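The core idea can be sketched in a few lines. Unlikelihood training keeps the standard maximum-likelihood term for the gold token but adds a penalty that pushes probability mass away from a set of "negative candidate" tokens, such as tokens already generated (repetition), tokens copied from the context, or tokens implicated in an inconsistency. The following is a minimal illustrative sketch in plain Python, not the authors' implementation; the function name and the dictionary representation of the model's output distribution are hypothetical simplifications of what would normally be tensor operations.

```python
import math

def unlikelihood_loss(probs, target, negatives, alpha=1.0):
    """Single-step loss combining maximum likelihood and unlikelihood.

    probs:     dict mapping each vocabulary token to the model's
               predicted probability at this generation step
    target:    the gold next token (its probability is maximized)
    negatives: tokens the model should NOT produce here, e.g. tokens
               already emitted in this utterance, or overused words
    alpha:     weight on the unlikelihood term
    """
    # Standard MLE term: -log p(target | context)
    mle = -math.log(probs[target])
    # Unlikelihood term: -log(1 - p(c | context)) for each negative
    # candidate c, which grows as the model assigns mass to c and so
    # pushes probability away from the unwanted tokens
    ul = -sum(math.log(1.0 - probs[c]) for c in negatives)
    return mle + alpha * ul
```

For example, with a toy distribution `{"cat": 0.6, "the": 0.3, "dog": 0.1}`, gold token `"cat"`, and `"the"` as a repetition candidate, the loss is `-log(0.6) - log(0.7)`. The paper's contribution is in how the negative candidate sets are constructed for each failure mode (copying, repetition, frequent words, and collected examples of logical inconsistency), not in the form of the loss itself.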

Cite

CITATION STYLE

APA

Li, M., Roller, S., Kulikov, I., Welleck, S., Boureau, Y. L., Cho, K., & Weston, J. (2020). Don't say that! Making inconsistent dialogue unlikely with unlikelihood training. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 4715–4728). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.428
