How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?

Abstract

Cross-lingual Entity Typing (CLET) aims to improve the quality of entity type prediction by transferring semantic knowledge learned from high-resource languages to low-resource languages. In this paper, by applying multilingual transfer learning via a mixture-of-experts approach, our model dynamically captures the relationship between the target language and each source language, and generalizes effectively to predict types of unseen entities in new languages. Extensive experiments on multilingual datasets show that our method significantly outperforms multiple baselines and robustly handles negative transfer. We question the relationship between language similarity and CLET performance. Through a series of experiments, we refute the common assumption that more source languages are always better, and propose the Similarity Hypothesis for CLET.
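The mixture-of-experts idea described above can be illustrated with a minimal sketch: one expert scorer per source language, and a gate that weights experts by their similarity to the target-language mention. The language names, dimensions, and the dot-product gate below are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal mixture-of-experts sketch for cross-lingual transfer.
# SOURCE_LANGS, DIM, and the linear experts are hypothetical choices
# for illustration; the paper's actual model is more elaborate.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                               # mention representation size
NUM_TYPES = 4                         # number of entity types
SOURCE_LANGS = ["en", "de", "es"]     # hypothetical source languages

# One "expert" (here, a linear type scorer) per source language.
experts = {lang: rng.standard_normal((DIM, NUM_TYPES)) for lang in SOURCE_LANGS}
# A learned embedding per source language, consulted by the gate.
lang_embeds = {lang: rng.standard_normal(DIM) for lang in SOURCE_LANGS}

def moe_type_scores(mention_vec):
    """Gate weights = softmax over similarities between the target-language
    mention and each source language; output = gate-weighted expert scores."""
    sims = np.array([mention_vec @ lang_embeds[l] for l in SOURCE_LANGS])
    gates = np.exp(sims - sims.max())   # numerically stable softmax
    gates /= gates.sum()
    scores = sum(g * (mention_vec @ experts[l])
                 for g, l in zip(gates, SOURCE_LANGS))
    return gates, scores

gates, scores = moe_type_scores(rng.standard_normal(DIM))
print(gates)   # mixture weights over source languages, summing to 1
```

Because the gate is computed per mention, a dissimilar source language receives a small weight, which is one way such a model can dampen negative transfer.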

Citation (APA)

Jin, H., Dong, T., Hou, L., Li, J., Chen, H., Dai, Z., & Yincen, Q. (2022). How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3071–3081). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.243
