Fine-grained Entity Typing without Knowledge Base

7 citations · 62 Mendeley readers

Abstract

Existing work on Fine-grained Entity Typing (FET) typically trains models on datasets obtained by using Knowledge Bases (KB) as distant supervision. However, this reliance means training can be hampered by the absence or incompleteness of a KB. To alleviate this limitation, we propose a novel setting for training FET models: FET without access to any knowledge base. Under this setting, we propose a two-step framework. In the first step, we automatically create pseudo data with fine-grained labels from a large unlabeled dataset. Then a neural network model is trained on the pseudo data, either in an unsupervised way or via self-training under the weak guidance of a coarse-grained Named Entity Recognition (NER) model. Experimental results show that our method achieves competitive performance with respect to models trained on the original KB-supervised datasets.
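The two-step setup described in the abstract can be illustrated with a toy sketch. All names and data below are hypothetical stand-ins (the paper's actual heuristics, label set, and models are not shown here): step 1 assigns pseudo fine-grained labels to unlabeled mentions, and step 2 uses a coarse NER model's predictions as weak guidance to filter out inconsistent pseudo labels before training.

```python
# Hypothetical sketch, not the authors' code: (1) mine pseudo fine-grained
# labels from unlabeled mentions, (2) self-train while a coarse NER model
# vetoes pseudo labels whose coarse type disagrees with its prediction.

# Toy mapping from fine-grained types to coarse NER types (assumption).
FINE_TO_COARSE = {
    "/person/artist": "PERSON",
    "/person/athlete": "PERSON",
    "/location/city": "LOCATION",
}

def create_pseudo_data(mentions, pseudo_label):
    """Step 1: assign each mention a pseudo fine-grained label; a simple
    lookup stands in for the paper's automatic labeling heuristics."""
    return [(m, pseudo_label[m]) for m in mentions if m in pseudo_label]

def weak_guidance_filter(pseudo_data, coarse_ner):
    """Step 2 (weak guidance): keep only pseudo examples whose fine-grained
    label is consistent with the coarse NER model's predicted type."""
    kept = []
    for mention, fine_label in pseudo_data:
        if FINE_TO_COARSE.get(fine_label) == coarse_ner(mention):
            kept.append((mention, fine_label))
    return kept

# Toy inputs standing in for a large unlabeled corpus and a trained NER model.
mentions = ["Mozart", "Paris", "stone"]
pseudo_label = {"Mozart": "/person/artist", "Paris": "/location/city",
                "stone": "/person/athlete"}  # "stone" is a noisy pseudo label
coarse_ner = {"Mozart": "PERSON", "Paris": "LOCATION", "stone": "OTHER"}.get

pseudo = create_pseudo_data(mentions, pseudo_label)
clean = weak_guidance_filter(pseudo, coarse_ner)
print(clean)  # the noisy "stone" example is filtered out
```

In the full framework, the surviving examples would be used to retrain the FET model, and the label/filter steps could be iterated in a self-training loop.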

Citation (APA)

Qian, J., Liu, Y., Liu, L., Li, Y., Jiang, H., Zhang, H., & Shi, S. (2021). Fine-grained Entity Typing without Knowledge Base. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5309–5319). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.431
