Learning to compare for better training and evaluation of open domain natural language generation models

24 Citations · 41 Mendeley Readers

Abstract

Automated evaluation of open domain natural language generation (NLG) models remains a challenge, and widely used metrics such as BLEU and perplexity can be misleading in some cases. In this paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences through fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models from sample-level comparison results with a skill rating system. While our model can be trained in a fully self-supervised fashion, it can be further fine-tuned with a small amount of human preference annotations to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early stopping. We evaluate our approach on both story generation and chitchat dialogue response generation. Experimental results show that our model correlates better with human preference than previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model.
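The sketch below illustrates the two ideas described in the abstract: a pairwise comparator obtained by fine-tuning BERT on sentence pairs, and an Elo-style skill rating that aggregates sample-level comparison outcomes into a model-level score. It is not the authors' released implementation; the pretrained checkpoint name, the three-way label convention, and the use of Elo updates for the skill rating are assumptions made for illustration.

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    # (1) Pairwise comparator: BERT reads "[CLS] sent_a [SEP] sent_b [SEP]"
    # and predicts which generated sentence a human would prefer (or a tie).
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    comparator = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        num_labels=3,  # assumed labels: 0 = sent_a preferred, 1 = sent_b preferred, 2 = tie
    )

    def compare(sent_a: str, sent_b: str) -> int:
        """Return the predicted preference label for a pair of generated sentences."""
        inputs = tokenizer(sent_a, sent_b, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = comparator(**inputs).logits
        return int(torch.argmax(logits, dim=-1))

    # (2) Skill rating: treat each sample-level comparison as a "match" between
    # two NLG models and update their ratings from the outcome.
    def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 16.0):
        """score_a is 1.0 if model A's sample wins, 0.0 if it loses, 0.5 for a tie."""
        expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
        rating_a += k * (score_a - expected_a)
        rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
        return rating_a, rating_b

In this reading, the comparator supplies the match outcomes (wins, losses, ties between samples from different NLG models), and repeated skill-rating updates over many sampled pairs yield the model-level quality estimate described in the abstract.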

Citation (APA)

Zhou, W., & Xu, K. (2020). Learning to compare for better training and evaluation of open domain natural language generation models. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 9717–9724). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6521
