A Semi-automated Evaluation Metric for Dialogue Model Coherence

  • Gandhe, S.
  • Traum, D.

Abstract

We propose a new metric, Voted Appropriateness, which can be used to automatically evaluate dialogue policy decisions once some wizard data has been collected. We show that this metric outperforms a previously proposed metric, Weak Agreement. We also present a taxonomy of dialogue model evaluation schemas and situate our new metric within this taxonomy.
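The abstract contrasts two ways of scoring a dialogue system's response against responses collected from human "wizards" for the same context. As a rough illustration only: under a weak-agreement criterion a response counts as appropriate if any wizard chose it, while a vote-based criterion grades it by how many wizards chose it. The function names, voting scheme, and example data below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the two metric families described in the abstract.
# The paper's actual definitions may differ in detail.
from collections import Counter

def weak_agreement(system_choice, wizard_choices):
    """Binary score: appropriate if at least one wizard chose this response."""
    return 1.0 if system_choice in wizard_choices else 0.0

def voted_appropriateness(system_choice, wizard_choices):
    """Graded score: fraction of wizard votes this response received."""
    votes = Counter(wizard_choices)
    return votes[system_choice] / len(wizard_choices)

# Example: four wizards pick a response for the same dialogue context.
wizards = ["r1", "r1", "r2", "r3"]
print(weak_agreement("r2", wizards))         # 1.0 — a single vote suffices
print(voted_appropriateness("r2", wizards))  # 0.25 — one of four votes
```

The graded variant distinguishes a response every wizard chose from one that only a single wizard chose, which the binary criterion cannot.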

Citation (APA)

Gandhe, S., & Traum, D. (2016). A Semi-automated Evaluation Metric for Dialogue Model Coherence (pp. 217–225). https://doi.org/10.1007/978-3-319-21834-2_19
