Beyond Goldfish Memory: Long-Term Open-Domain Conversation

Abstract

Despite recent improvements in open-domain dialogue models, state-of-the-art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other's interests and discuss the things they have learnt from past sessions. We show that models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better. In particular, we find that retrieval-augmented methods and methods able to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state-of-the-art.
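
To make the summarize-then-retrieve idea concrete, here is a minimal Python sketch. It is not the authors' implementation: SessionMemory, build_model_input, and the bag-of-words cosine retriever are illustrative stand-ins for the paper's learned summarizer and retriever. The pattern is the recoverable one from the abstract: condense each finished session into a summary, then recall the summaries most relevant to the new message and prepend them to the context before generation.

```python
import math
from collections import Counter


def _bow(text):
    """Bag-of-words counts over lowercased whitespace tokens."""
    return Counter(text.lower().split())


def _cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class SessionMemory:
    """Toy long-term memory: stores one short summary per past chat
    session and retrieves the summaries most similar to a query."""

    def __init__(self):
        self.summaries = []

    def add(self, summary):
        # In the paper's setting this summary would come from a
        # trained summarization model, not be written by hand.
        self.summaries.append(summary)

    def retrieve(self, query, k=2):
        q = _bow(query)
        ranked = sorted(self.summaries,
                        key=lambda s: _cosine(q, _bow(s)),
                        reverse=True)
        return ranked[:k]


def build_model_input(memory, recent_turns, user_message, k=2):
    """Prepend recalled memories to the short-term context so a
    generator can condition on facts from earlier sessions."""
    recalled = [f"[memory] {m}" for m in memory.retrieve(user_message, k=k)]
    return "\n".join(recalled + recent_turns + [f"User: {user_message}"])


# Usage: the retrieved dog/hiking memory surfaces ahead of the turns.
memory = SessionMemory()
memory.add("Partner has two dogs and likes hiking on weekends.")
memory.add("Partner is studying for a nursing degree.")

print(build_model_input(
    memory,
    recent_turns=["User: Hi again!", "Bot: Welcome back!"],
    user_message="I took my dogs on a long hike yesterday.",
))
```

In the paper the summaries are produced by a learned model and retrieval uses a trained retriever over past sessions; the token-count cosine here is only a minimal placeholder for that component.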

Cite (APA)

Xu, J., Szlam, A., & Weston, J. (2022). Beyond Goldfish Memory: Long-Term Open-Domain Conversation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 5180–5197). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.356
