Seq2seq is All You Need for Coreference Resolution

Abstract

Existing works on coreference resolution suggest that task-specific models are necessary to achieve state-of-the-art performance. In this work, we present compelling evidence that such models are not necessary. We finetune a pretrained seq2seq transformer to map an input document to a tagged sequence encoding the coreference annotation. Despite its extreme simplicity, our model outperforms or closely matches the best coreference systems in the literature on an array of datasets. We also propose an especially simple seq2seq approach that generates only tagged spans rather than spans interleaved with the original text. Our analysis shows that model size, the amount of supervision, and the choice of sequence representation are key factors in performance.
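
To make the tagged-sequence idea concrete, below is a minimal Python sketch of one way coreference clusters could be linearized into a target string for a seq2seq model. The tag tokens ("<m>", "</m>"), the separator, and the linearize helper are illustrative assumptions for this sketch, not the paper's exact encoding scheme.

    # Hypothetical linearization of coreference clusters into a tagged
    # output sequence for a seq2seq model. Tag format is illustrative.

    def linearize(tokens, clusters):
        """Interleave mention tags with the original document tokens.

        tokens:   list of word tokens for the document.
        clusters: list of clusters, each a list of (start, end) token
                  spans (end inclusive); the cluster index is the entity id.
        """
        starts, ends = {}, {}
        for cid, spans in enumerate(clusters):
            for s, e in spans:
                starts.setdefault(s, []).append(cid)
                ends.setdefault(e, []).append(cid)

        out = []
        for i, tok in enumerate(tokens):
            out.extend("<m>" for _ in starts.get(i, []))   # open mention(s)
            out.append(tok)
            for cid in ends.get(i, []):
                out.append(f"| {cid} </m>")                # close with cluster id
        return " ".join(out)

    tokens = "Alice said she would come".split()
    clusters = [[(0, 0), (2, 2)]]  # "Alice" and "she" corefer
    print(linearize(tokens, clusters))
    # <m> Alice | 0 </m> said <m> she | 0 </m> would come

Decoding would reverse this mapping, reading mention spans and cluster ids off the generated tags. Under the same assumptions, the "tagged spans only" variant mentioned in the abstract would drop the untagged tokens from the output entirely, shortening the target sequence.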

Citation (APA)

Zhang, W., Wiseman, S., & Stratos, K. (2023). Seq2seq is All You Need for Coreference Resolution. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) (pp. 11493–11504). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.emnlp-main.704
