Deep Knowledge Graph Representation Learning for Completion, Alignment, and Question Answering

Abstract

A knowledge graph (KG) has nodes and edges representing entities and relations. KGs are central to search and question answering (QA), yet research on deep/neural representation of KGs, as well as on deep QA, has moved largely to the AI, ML, and NLP communities. The goal of this tutorial is to give IR researchers a thorough update on best practices for neural KG representation and inference from those communities, and then to explore how KG representation research in the IR community can be better driven by the needs of search, passage retrieval, and QA. In this tutorial, we will study the most widely used public KGs; important properties of their relations, types, and entities; best-practice deep representations of KG elements and the extent to which they do or do not support such properties; loss formulations and learning methods for KG completion and inference; the representation of time in temporal KGs; alignment across multiple KGs, possibly in different languages; and the use and benefits of deep KG representations in QA applications.

Citation (APA)

Chakrabarti, S. (2022). Deep Knowledge Graph Representation Learning for Completion, Alignment, and Question Answering. In SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 3451–3454). Association for Computing Machinery, Inc. https://doi.org/10.1145/3477495.3532679
