Knowledge Base Completion for Long-Tail Entities


Abstract

Despite their impressive scale, knowledge bases (KBs), such as Wikidata, still contain significant gaps. Language models (LMs) have been proposed as a source for filling these gaps. However, prior works have focused on prominent entities with rich coverage by LMs, neglecting the crucial case of long-tail entities. In this paper, we present a novel method for LM-based KB completion that is specifically geared toward facts about long-tail entities. The method leverages two different LMs in two stages: one for candidate retrieval and one for candidate verification and disambiguation. To evaluate our method and various baselines, we introduce a novel dataset, called MALT, rooted in Wikidata. Our method outperforms all baselines in F1, with major gains especially in recall.

Citation (APA)

Chen, L., Razniewski, S., & Weikum, G. (2023). Knowledge Base Completion for Long-Tail Entities. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 99–108). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.matching-1.8
