The anaphora resolution in dialogue shared task aims to go beyond the simple cases of coreference resolution in written text on which NLP has mostly focused so far, and which arguably overestimate the performance of current state-of-the-art models. The shared task consists of three subtasks: Subtask 1, resolution of anaphoric identity and identification of non-referring expressions; Subtask 2, resolution of bridging references; and Subtask 3, resolution of discourse deixis/abstract anaphora. In this paper, we propose a pipelined model (i.e., resolution of anaphoric identity followed by resolution of bridging references) for Subtask 1 and Subtask 2. For Subtask 1, our model detects mentions via parenthesis prediction. We then create a mention representation from the token representations constituting the mention and feed it to the coreference resolution model for clustering. For Subtask 2, our model resolves bridging references via a machine reading comprehension (MRC) framework. We construct a query for each entity with the template “What is related of ENTITY?”. The input to our model is the query and the document (i.e., all utterances of the dialogue), and the model predicts the entity span that answers the query.
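As an illustration of the two pipeline stages described above, the sketch below is a minimal, hypothetical Python rendering. The encoder, the pooling method, the coreference clustering scorer, and the MRC span-prediction head are all placeholders that the abstract does not specify; only the query template and the (query, document-of-all-utterances) input format are taken from the description.

```python
# Minimal sketch of the two pipeline stages (Subtask 1 and Subtask 2).
# Hypothetical helpers: mention_representation() and build_mrc_inputs() are
# illustrative only; the actual encoder and prediction heads are not given
# in the abstract.
from typing import List, Tuple

import numpy as np

QUERY_TEMPLATE = "What is related of {entity}?"  # template quoted in the abstract


def mention_representation(token_vectors: np.ndarray) -> np.ndarray:
    """Subtask 1 (illustrative): pool the token representations constituting a
    detected mention into a single mention vector. Mean pooling is an
    assumption; the abstract does not state the pooling method."""
    return token_vectors.mean(axis=0)


def build_mrc_inputs(entities: List[str], utterances: List[str]) -> List[Tuple[str, str]]:
    """Subtask 2: pair each entity query with the full dialogue (all utterances),
    forming the (query, document) inputs for an MRC span-prediction model."""
    document = " ".join(utterances)
    return [(QUERY_TEMPLATE.format(entity=e), document) for e in entities]


if __name__ == "__main__":
    # Toy token vectors for a two-token mention (e.g., "the garden").
    mention_vec = mention_representation(np.random.rand(2, 768))
    print(mention_vec.shape)  # (768,) -> fed to the coreference clustering model

    dialogue = ["A: I bought a house last year.", "B: How big is the garden?"]
    for query, doc in build_mrc_inputs(["the garden"], dialogue):
        print(query, "||", doc)
        # An MRC model reading (query, doc) would predict the antecedent span,
        # e.g. "a house", as the entity the bridging reference relates to.
```

Framing bridging resolution as extractive question answering lets a standard MRC architecture score candidate antecedent spans directly over the dialogue, rather than enumerating mention pairs.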
Kim, H., Kim, D., & Kim, H. (2021). The Pipeline Model for Resolution of Anaphoric Reference and Resolution of Entity Reference. In CODI-CRAC 2021 - CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue, Proceedings of the Workshop (pp. 43–47). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.codi-sharedtask.4