Does the objective matter? Comparing training objectives for pronoun resolution


Abstract

Hard cases of pronoun resolution have been used as a long-standing benchmark for commonsense reasoning. In the recent literature, pre-trained language models have been used to obtain state-of-the-art results on pronoun resolution. Overall, four categories of training and evaluation objectives have been introduced. The variety of training datasets and pre-trained language models used in these works makes it unclear whether the choice of training objective is critical. In this work, we make a fair comparison of the performance and seed-wise stability of four models that represent the four categories of objectives. Our experiments show that the sequence-ranking objective performs best in-domain, while the objective of semantic similarity between candidates and pronoun performs best out-of-domain. We also observe seed-wise instability in the model trained with sequence ranking, which does not occur with the other objectives.
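To illustrate one of the objective categories: sequence-ranking approaches typically score each candidate-substituted sentence with a language model and train so that the correct substitution outscores the incorrect one by a margin. A minimal sketch of such a hinge-style margin ranking loss (the function name and margin value are illustrative, not taken from the paper):

```python
def margin_ranking_loss(score_correct: float, score_wrong: float,
                        margin: float = 1.0) -> float:
    """Hinge-style ranking loss over two candidate scores.

    Penalizes the model whenever the wrong candidate's score comes
    within `margin` of the correct candidate's score.
    """
    return max(0.0, margin - (score_correct - score_wrong))


# Correct candidate clearly outscores the wrong one: no loss.
print(margin_ranking_loss(2.0, 0.5))  # 0.0

# Scores too close together: loss pushes them further apart.
print(margin_ranking_loss(1.0, 0.8))  # 0.8
```

In practice the scores would come from a pre-trained language model applied to the sentence with each candidate substituted for the pronoun, and the loss would be averaged over a batch.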

Citation (APA)

Yordanov, Y., Camburu, O. M., Kocijan, V., & Lukasiewicz, T. (2020). Does the objective matter? Comparing training objectives for pronoun resolution. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 4963–4969). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.402
