Existing models for data-to-text tasks generate fluent but sometimes factually incorrect sentences; for example, “Nikkei gains” is generated when “Nikkei drops” is expected. We investigate models trained on contrastive examples, that is, incorrect sentences or terms, in addition to correct ones, to reduce such errors. We first create rules that produce contrastive examples from correct ones by replacing frequent crucial terms such as “gain” or “drop”. We then apply learning methods with several loss functions that exploit the contrastive examples. Experiments on the market comment generation task show that 1) exploiting contrastive examples improves lexical choice in the generated sentences without degrading fluency, 2) the choice of loss function is an important factor, because different loss functions favor different evaluation metrics, and 3) using the examples produced by certain specific rules further improves performance. Human evaluation also supports the effectiveness of using contrastive examples.
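As an illustrative sketch (not the authors' released code), the following Python snippet shows one way the two components described above could look: a rule that derives a contrastive sentence by swapping a frequent crucial term, and an unlikelihood-style loss that rewards the correct tokens while penalizing the contrastive tokens at positions where the two sentences differ. The rule table, the exact loss form, and the weighting factor alpha are assumptions for illustration only.

# Illustrative sketch only: rule-based contrastive example creation and an
# unlikelihood-style loss. The rule table, loss form, and weight alpha are
# assumptions, not the paper's exact formulation.
from typing import Optional
import torch
import torch.nn.functional as F

# Hypothetical replacement rules over frequent crucial terms.
ANTONYM_RULES = {"gains": "drops", "drops": "gains", "rises": "falls", "falls": "rises"}

def make_contrastive(sentence: str) -> Optional[str]:
    """Replace the first crucial term found; return None if no rule applies."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok in ANTONYM_RULES:
            tokens[i] = ANTONYM_RULES[tok]
            return " ".join(tokens)
    return None

def contrastive_loss(logits, correct_ids, contrastive_ids, alpha=1.0):
    """
    logits:          (seq_len, vocab_size) decoder outputs
    correct_ids:     (seq_len,) token ids of the correct sentence
    contrastive_ids: (seq_len,) token ids of the contrastive sentence
    Cross-entropy on the correct tokens plus an unlikelihood-style penalty
    on contrastive tokens at positions where the two sentences differ.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(1, correct_ids.unsqueeze(1)).squeeze(1).mean()
    diff = (correct_ids != contrastive_ids).float()
    p_wrong = log_probs.gather(1, contrastive_ids.unsqueeze(1)).squeeze(1).exp()
    penalty = -(torch.log1p(-p_wrong.clamp(max=1 - 1e-6)) * diff).sum() / diff.sum().clamp(min=1)
    return nll + alpha * penalty

if __name__ == "__main__":
    print(make_contrastive("Nikkei gains for the third day"))  # -> "Nikkei drops for the third day"
    logits = torch.randn(6, 100)
    correct = torch.randint(0, 100, (6,))
    wrong = correct.clone()
    wrong[1] = (correct[1] + 1) % 100   # simulate one swapped crucial term
    print(contrastive_loss(logits, correct, wrong).item())

In this sketch the penalty term pushes probability mass away from the rule-swapped token (e.g., "drops" where "gains" is correct), which mirrors the abstract's goal of improving lexical choice without touching the rest of the sentence.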
Citation
Uehara, Y., Ishigaki, T., Aoki, K., Goshima, K., Noji, H., Kobayashi, I., … Miyao, Y. (2020). Learning with Contrastive Examples for Data-to-Text Generation. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 2352–2362). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.213