The largest store of continually updating knowledge on our planet can be accessed via internet search. In this work we study giving conversational agents access to this information. Large language models, even though they store an impressive amount of knowledge within their weights, are known to hallucinate facts when generating dialogue (Shuster et al., 2021); moreover, those facts are frozen in time at the point of model training. In contrast, we propose an approach that learns to generate an internet search query based on the context, and then conditions on the search results to finally generate a response, a method that can employ up-to-the-minute relevant information. We train and evaluate such models on a newly collected dataset of human-human conversations in which one of the speakers is given access to internet search during knowledge-driven discussions in order to ground their responses. We find that search-query-based access to the internet in conversation provides superior performance compared to existing approaches that either use no augmentation or FAISS-based retrieval (Lewis et al., 2020b).
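The three-stage pipeline the abstract describes (generate a search query from the dialogue context, retrieve results, then condition the response on them) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the stub "models" are hypothetical placeholders for the learned query generator, the live search API, and the grounded response generator.

```python
def generate_search_query(context: list[str]) -> str:
    # Placeholder for a learned query generator; a real system would use a
    # seq2seq model trained on (dialogue context, search query) pairs.
    # Here we simply reuse the most recent turn as the query.
    return context[-1]


def internet_search(query: str) -> list[str]:
    # Placeholder for a live internet search call returning result snippets.
    return [f"snippet about: {query}"]


def generate_response(context: list[str], docs: list[str]) -> str:
    # Placeholder for a generator conditioned on both the dialogue context
    # and the retrieved documents, so the reply is grounded in fresh results.
    return f"Based on '{docs[0]}', here is a grounded reply."


def search_augmented_reply(context: list[str]) -> str:
    # Stage 1: produce a search query from the conversation so far.
    query = generate_search_query(context)
    # Stage 2: retrieve up-to-the-minute documents for that query.
    docs = internet_search(query)
    # Stage 3: generate a response conditioned on context + retrieved docs.
    return generate_response(context, docs)


if __name__ == "__main__":
    turns = ["Who won the most recent Ballon d'Or?"]
    print(search_augmented_reply(turns))
```

The key design point is that retrieval is driven by a *generated* query rather than by nearest-neighbor lookup over a fixed index (the FAISS-based alternative the abstract compares against), which is what lets the system tap current information.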
CITATION STYLE
Komeili, M., Shuster, K., & Weston, J. (2022). Internet-Augmented Dialogue Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 8460–8478). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.579