Investigating Multilingual Coreference Resolution by Universal Annotations


Abstract

Multilingual coreference resolution (MCR) has been a long-standing and challenging task. With the newly proposed multilingual coreference dataset CorefUD (Nedoluzhko et al., 2022), we investigate the task using its harmonized universal morphosyntactic and coreference annotations. First, we study coreference by examining the ground-truth data at different linguistic levels, namely the mention, entity, and document levels, and across different genres, to gain insights into the characteristics of coreference across multiple languages. Second, using the universal annotations, we perform an error analysis of the most challenging cases that the state-of-the-art (SotA) system failed to resolve in the CRAC 2022 shared task. Finally, based on this analysis, we extract features from the universal morphosyntactic annotations and integrate them into a baseline system to assess their potential benefits for the MCR task. Our results show that our best feature configuration improves over the baseline by 0.9 F1.

Citation (APA)

Chai, H., & Strube, M. (2023). Investigating Multilingual Coreference Resolution by Universal Annotations. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 10010–10024). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.671
