On Contrasting YAGO with GPT-J: An Experiment for Person-Related Attributes


Abstract

Language models (LMs) trained on large text corpora have demonstrated superior performance in a range of language-related tasks in recent years. In the process, these models implicitly acquire factual knowledge that can be used to complement existing Knowledge Graphs (KGs), which in most cases are structured from human-curated databases. Here we report an experiment that attempts to gain insight into the extent to which LMs can generate factual information comparable to that present in KGs. Concretely, we tested this process using the English Wikipedia subset of YAGO and the GPT-J model, for attributes related to individuals. Results show that the generation of correct factual information depends on the generation parameters of the model and is unevenly distributed across individuals. Further, the LM can be used to populate additional factual information, but doing so requires intermediate parsing to correctly map generations to KG attributes.
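The intermediate parsing the abstract mentions can be pictured with a minimal sketch: a free-text completion (of the kind a model like GPT-J might produce for a prompt such as "Marie Curie was born on") is reduced to a single value suitable for a KG attribute. The prompt, attribute, and regular expression below are illustrative assumptions, not the paper's actual pipeline.

```python
import re

# Hypothetical date pattern: matches "7 November 1867" or "1867-11-07".
# The actual mapping to YAGO attributes would need richer parsing.
DATE_RE = re.compile(r"\b(\d{1,2} [A-Z][a-z]+ \d{4}|\d{4}-\d{2}-\d{2})\b")

def parse_birth_date(completion: str):
    """Extract a birth-date string from an LM completion, or None."""
    match = DATE_RE.search(completion)
    return match.group(1) if match else None

# Example completion a model might generate:
print(parse_birth_date("Marie Curie was born on 7 November 1867 in Warsaw."))
# → 7 November 1867
```

The point of such a step is that the LM emits prose, not triples, so a deterministic extractor sits between generation and KG population; when the pattern fails to match, the generation cannot be mapped and is discarded.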

CITATION STYLE

APA

Martin-Moncunill, D., Sicilia, M. A., González, L., & Rodríguez, D. (2022). On Contrasting YAGO with GPT-J: An Experiment for Person-Related Attributes. In Communications in Computer and Information Science (Vol. 1686 CCIS, pp. 234–245). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-21422-6_17
