What did you refer to? Evaluating Co-references in Dialogue


Abstract

Existing end-to-end neural dialogue models have difficulty interpreting linguistic structures such as ellipsis, anaphora, and co-reference in the dialogue history. It is therefore hard to determine whether a dialogue model truly understands a dialogue based solely on coherence evaluation of its generated responses. To address this issue, we propose to directly measure a dialogue model's ability to understand entity-oriented structures via question answering, and we construct a new benchmark dataset, DEQA, comprising large-scale English and Chinese human-human dialogues. Experiments carried out on representative dialogue models show that all of them face challenges on the proposed dialogue understanding task. The DEQA dataset will be released for research use.

Citation (APA)

Zhang, W., Zhang, Y., Tang, H., Zhao, Z., Zhu, C., & Liu, T. (2021). What did you refer to? Evaluating Co-references in Dialogue. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 5075–5084). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.450
