Diving Deep into Modes of Fact Hallucinations in Dialogue Systems

Abstract

Knowledge Graph (KG)-grounded conversation systems often rely on large pre-trained models and frequently suffer from fact hallucination: entities with no support in the knowledge source or the conversation history are introduced into responses, disrupting the flow of the conversation. Existing work attempts to overcome this issue by tweaking the training procedure or by applying multi-step refining methods. However, minimal effort has been put into constructing an entity-level hallucination detection system, which could provide fine-grained signals for controlling fallacious content during response generation. As a first step to address this issue, we dive deep to identify various modes of hallucination in KG-grounded chatbots through human feedback analysis. Secondly, we propose a series of perturbation strategies to create a synthetic dataset named FADE (FActual Dialogue Hallucination DEtection Dataset). Finally, we conduct comprehensive data analyses and create multiple baseline models for hallucination detection, comparing them against human-verified data and previously established benchmarks.
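The abstract does not reproduce the perturbation strategies themselves, but a minimal sketch can make the idea concrete: one natural entity-level perturbation swaps a grounded entity in a gold response for a same-type distractor, yielding a synthetic hallucinated response together with token-level labels. Everything below (the toy knowledge source, the perturb_entity function, and the binary label scheme) is an illustrative assumption, not the actual FADE pipeline.

    # Sketch of one entity-level perturbation strategy: replace a
    # KG-grounded entity in a gold response with a same-type distractor,
    # producing a synthetic "hallucinated" response plus token labels.
    # The names and toy KG here are hypothetical, not taken from FADE.

    import random

    # Toy knowledge source: entity -> same-type alternatives.
    TOY_KG = {
        "Paris": ["Lyon", "Marseille", "Nice"],
        "France": ["Spain", "Italy", "Germany"],
    }

    def perturb_entity(response_tokens, rng=random):
        """Swap the first KG-grounded entity for a distractor.

        Returns the perturbed tokens and per-token labels:
        1 = hallucinated (swapped-in) token, 0 = faithful token.
        """
        perturbed, labels = [], []
        swapped = False
        for tok in response_tokens:
            if not swapped and tok in TOY_KG:
                perturbed.append(rng.choice(TOY_KG[tok]))
                labels.append(1)
                swapped = True
            else:
                perturbed.append(tok)
                labels.append(0)
        return perturbed, labels

    if __name__ == "__main__":
        gold = "The capital of France is Paris".split()
        fake, tags = perturb_entity(gold, random.Random(0))
        print(" ".join(fake))  # "France" swapped for a distractor country
        print(tags)            # token-level hallucination labels

Keeping the labels at the token level is what makes the fine-grained, entity-level detection signal described in the abstract possible: a detector trained on such pairs can flag exactly which spans of a generated response lack grounding.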

Citation (APA)

Das, S., Saha, S., & Srihari, R. K. (2022). Diving Deep into Modes of Fact Hallucinations in Dialogue Systems. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 684–699). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.48
