Detecting fine-grained cross-lingual semantic divergences without supervision by learning to rank


Abstract

Detecting fine-grained differences in content conveyed in different languages matters for cross-lingual NLP and multilingual corpora analysis, but it is a challenging machine learning problem since annotation is expensive and hard to scale. This work improves the prediction and annotation of fine-grained semantic divergences. We introduce a training strategy for multilingual BERT models by learning to rank synthetic divergent examples of varying granularity. We evaluate our models on the Rationalized English-French Semantic Divergences, a new dataset released with this work, consisting of English-French sentence pairs annotated with semantic divergence classes and token-level rationales. Learning to rank helps detect fine-grained sentence-level divergences more accurately than a strong sentence-level similarity model, while token-level predictions have the potential to further distinguish between coarse and fine-grained divergences.
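The abstract describes the core idea only at a high level: score sentence pairs with multilingual BERT and train by ranking synthetic divergent examples against less divergent ones. The sketch below illustrates one plausible reading of that setup using a margin ranking loss; it is not the authors' released code, and the model name, scoring head, margin value, and example sentences are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation) of ranking
# English-French sentence pairs with multilingual BERT: a jointly encoded pair
# is mapped to a scalar equivalence score, and a margin ranking loss pushes
# more equivalent pairs above synthetic divergent ones.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
scorer = nn.Linear(encoder.config.hidden_size, 1)  # maps [CLS] to an equivalence score
margin_loss = nn.MarginRankingLoss(margin=1.0)     # margin chosen arbitrarily here

def score(en: str, fr: str) -> torch.Tensor:
    """Encode an English-French pair jointly and return a scalar equivalence score."""
    batch = tokenizer(en, fr, return_tensors="pt", truncation=True)
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representation
    return scorer(cls).squeeze(-1)

# One ranking step: an (approximately) equivalent pair vs. a synthetic divergent pair.
pos = score("The cat sat on the mat.", "Le chat était assis sur le tapis.")
neg = score("The cat sat on the mat.", "Le chien a couru dans le parc.")  # synthetic divergence
target = torch.ones_like(pos)           # +1: the first argument should score higher
loss = margin_loss(pos, neg, target)
loss.backward()
```

In this reading, synthetic divergent examples of varying granularity would supply the negative side of each ranking pair, so no manually annotated divergences are needed for training.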

Citation (APA)
Briakou, E., & Carpuat, M. (2020). Detecting fine-grained cross-lingual semantic divergences without supervision by learning to rank. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) (pp. 1563–1580). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.121
