Abstract
The differences in decision making between behavioural models of voice interfaces are hard to capture using existing measures for the absolute performance of such models. For instance, two models may have a similar task success rate, but very different ways of getting there. In this paper, we propose a general methodology to compute the similarity of two dialogue behaviour models and investigate different ways of computing scores on both the semantic and the textual level. Complementing absolute measures of performance, we test our scores on three different tasks and show the practical usability of the measures.
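To make the idea of behaviour similarity concrete, here is a minimal, hypothetical sketch (not the paper's actual scoring method) of how two dialogue models could be compared on the same dialogue contexts: a semantic-level score over the dialogue acts each model chooses, and a crude textual-level score over the surface responses. All function and variable names below are illustrative assumptions.

```python
"""Illustrative sketch only: comparing two dialogue behaviour models by
the similarity of their decisions, at the semantic level (dialogue acts)
and the textual level (system utterances). Not the authors' scoring method."""

from collections import Counter


def semantic_similarity(acts_a, acts_b):
    """Turn-level agreement of the dialogue acts two models choose
    for the same dialogue contexts (simple micro-averaged overlap)."""
    assert len(acts_a) == len(acts_b)
    agree = sum(1 for a, b in zip(acts_a, acts_b) if a == b)
    return agree / len(acts_a) if acts_a else 0.0


def textual_similarity(utt_a, utt_b):
    """Token-level F1 between two system utterances, a stand-in for a
    proper textual similarity measure such as BLEU."""
    tok_a = Counter(utt_a.lower().split())
    tok_b = Counter(utt_b.lower().split())
    overlap = sum((tok_a & tok_b).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(tok_a.values())
    recall = overlap / sum(tok_b.values())
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # Two models responding to the same three dialogue contexts:
    # identical task success is possible even when the chosen acts differ.
    acts_model_1 = ["request(area)", "inform(name)", "bye()"]
    acts_model_2 = ["request(food)", "inform(name)", "bye()"]
    print("semantic similarity:", semantic_similarity(acts_model_1, acts_model_2))

    print("textual similarity:",
          textual_similarity("which area are you looking for",
                             "what part of town do you have in mind"))
```

In this toy setup, both models might complete the task equally often, yet the scores expose that they ask for different slots and phrase their requests differently, which is exactly the kind of behavioural difference an absolute success rate cannot show.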
Citation
Ultes, S., & Maier, W. (2020). Similarity scoring for dialogue behaviour comparison. In Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2020) (pp. 311–322). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.sigdial-1.38