Evaluation Guidelines to Deal with Implicit Phenomena to Assess Factuality in Data-to-Text Generation

Abstract

Data-to-text generation systems are trained on large datasets such as WebNLG, RotoWire, E2E, or DART. Beyond traditional token-overlap evaluation metrics (BLEU or METEOR), a key concern for recent generators is controlling the factuality of the generated text with respect to the input data specification. We report on our experience developing an automatic factuality evaluation system for data-to-text generation, which we are testing on WebNLG and E2E data. We aim to prepare manually annotated gold data that identifies cases where the text communicates more information than is warranted by the input data (extra) or fails to communicate data that is part of the input (missing). While analyzing reference (data, text) samples, we encountered a range of systematic uncertainties related to implicit phenomena in text and to the non-linguistic knowledge we expect to be involved when assessing factuality. From this experience, we derive a set of evaluation guidelines for reaching high inter-annotator agreement on such cases.
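
As an illustration only (not the authors' system), the extra/missing distinction described above can be sketched as set differences between the input data and the facts an annotator judges the text to express. The sketch below assumes WebNLG-style input represented as (subject, predicate, object) triples; the function name and the example triples are hypothetical.

from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def factuality_labels(
    input_triples: Set[Triple],
    expressed_facts: Set[Triple],
) -> Tuple[Set[Triple], Set[Triple]]:
    """Return (missing, extra) relative to the input specification.

    missing: input facts the text fails to communicate
    extra:   facts the text communicates beyond the input
    """
    missing = input_triples - expressed_facts
    extra = expressed_facts - input_triples
    return missing, extra

# Hypothetical WebNLG-style example:
data = {("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
        ("Alan_Bean", "occupation", "Astronaut")}
# The annotator judged that the text expresses only the birthplace,
# plus a nationality claim not present in the input:
text_facts = {("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
              ("Alan_Bean", "nationality", "United_States")}

missing, extra = factuality_labels(data, text_facts)
print("missing:", missing)  # the occupation fact was not verbalized
print("extra:", extra)      # the nationality fact is not warranted by the input

As the paper's abstract notes, the hard part is not this bookkeeping but deciding what counts as "expressed": implicit phenomena and non-linguistic knowledge make that judgment uncertain, which is what the evaluation guidelines address.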

Cite (APA)

Eisenstadt, R., & Elhadad, M. (2021). Evaluation Guidelines to Deal with Implicit Phenomena to Assess Factuality in Data-to-Text Generation. In UNIMPLICIT 2021 - 1st Workshop on Understanding Implicit and Underspecified Language, Proceedings of the Workshop (pp. 20–27). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.unimplicit-1.3
