Fundamental Exploration of Evaluation Metrics for Persona Characteristics of Text Utterances

Abstract

To maintain the utterance quality of a persona-aware dialog system, utterances inappropriate for the persona should be thoroughly filtered out. When evaluating the appropriateness of a large number of arbitrary utterances to be registered in the utterance database of a retrieval-based dialog system, evaluation metrics that require a reference (or a "correct" utterance) for each evaluation target cannot be used. In addition, practical utterance filtering requires the ability to select utterances based on the intensity of their persona characteristics. Therefore, we are developing metrics that capture the intensity of persona characteristics and can be computed without references tailored to the evaluation targets. To this end, we explore existing metrics and propose two new metrics: persona speaker probability and persona term salience. Experimental results show that our proposed metrics correlate weakly to moderately with human-judged scores of persona characteristics and overall outperform the other metrics in filtering utterances inappropriate for particular personas.
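As a rough illustration only (not the authors' implementation), a reference-free, persona-speaker-probability-style score could in principle come from any classifier that estimates how likely a candidate utterance is to have been spoken by the target persona, with filtering done by a threshold on that score. The training data, model choice, and threshold below are all hypothetical.

```python
# Hypothetical sketch of a reference-free persona score: train a simple
# classifier on utterances labeled persona / non-persona, score candidate
# utterances, and keep only those whose persona probability clears a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled utterances: 1 = consistent with the target persona, 0 = not.
train_texts = [
    "I spent the morning tuning my guitar before band practice.",
    "Our gig next weekend is going to be amazing!",
    "I filed the quarterly tax report this afternoon.",
    "The committee meeting ran long again today.",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

candidates = [
    "Rehearsal went great, the new song finally clicks.",
    "Please submit the expense spreadsheet by Friday.",
]

# Estimated probability that each candidate was spoken by the persona (class 1).
persona_prob = model.predict_proba(candidates)[:, 1]

THRESHOLD = 0.5  # illustrative cut-off for filtering
kept = [u for u, p in zip(candidates, persona_prob) if p >= THRESHOLD]
print(kept)
```

Such a probability gives a graded score, so the filtering threshold can be tightened or loosened depending on how strongly an application needs utterances to exhibit the persona's characteristics.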

Citation (APA)

Miyazaki, C., Kanno, S., Yoda, M., Ono, J., & Wakaki, H. (2021). Fundamental Exploration of Evaluation Metrics for Persona Characteristics of Text Utterances. In SIGDIAL 2021 - 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 178–189). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.sigdial-1.19
