Neural text generation (data-to-text or text-to-text) demonstrates remarkable performance when training data is abundant, which for many applications is not the case. To collect a large corpus of parallel data, heuristic rules are often used, but they inevitably let noise into the data, such as phrases in the output which cannot be explained by the input. Consequently, models pick up on the noise and may hallucinate, that is, generate fluent but unsupported text. Our contribution is a simple but powerful technique to treat such hallucinations as a controllable aspect of the generated text, without dismissing any input and without modifying the model architecture. On the WikiBio corpus (Lebret et al., 2016), a particularly noisy dataset, we demonstrate the efficacy of the technique both in an automatic and in a human evaluation.
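The core idea of treating hallucination as a controllable attribute can be sketched as control-token conditioning: score how much of each training target is unsupported by its input, map the score to a discrete control token, and prepend that token to the input so the model learns to condition on it; at inference time, the "faithful" token is supplied. The sketch below is only illustrative, assuming a crude token-overlap score, hypothetical thresholds, and made-up token names; the paper's actual hallucination measure differs.

```python
def hallucination_score(source: str, target: str) -> float:
    """Fraction of target tokens absent from the source.

    A crude stand-in for a real hallucination measure: any target
    token not found in the source counts as unsupported.
    """
    src_tokens = set(source.lower().split())
    tgt_tokens = target.lower().split()
    if not tgt_tokens:
        return 0.0
    unsupported = sum(1 for tok in tgt_tokens if tok not in src_tokens)
    return unsupported / len(tgt_tokens)


def control_token(score: float, thresholds=(0.1, 0.4)) -> str:
    """Bucket a hallucination score into a discrete control token.

    Thresholds and token names are illustrative assumptions.
    """
    low, high = thresholds
    if score <= low:
        return "<faithful>"
    if score <= high:
        return "<mixed>"
    return "<hallucinated>"


def annotate(source: str, target: str) -> str:
    """Prepend the control token to the source for training.

    At inference, one would always prepend "<faithful>" to request
    output supported by the input.
    """
    tok = control_token(hallucination_score(source, target))
    return f"{tok} {source}"
```

Because the signal is carried entirely by the input string, no training example is discarded and the model architecture is left unchanged, matching the constraints stated in the abstract.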
CITATION STYLE
Filippova, K. (2020). Controlled hallucinations: Learning to generate faithfully from noisy data. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 864–870). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.76