Complex question answering on knowledge graphs using machine translation and multi-task learning

13 Citations · 88 Readers (Mendeley)

Abstract

Question answering (QA) over a knowledge graph (KG) is the task of answering a natural language (NL) query using the information stored in the KG. In a real-world industrial setting, this involves addressing multiple challenges, including entity linking and multi-hop reasoning over the KG. Traditional approaches handle these challenges in a modularized, sequential manner, where errors in one module accumulate in downstream modules. These challenges are often inter-related, and their solutions can reinforce each other when handled simultaneously in an end-to-end learning setup. To this end, we propose a multi-task, BERT-based Neural Machine Translation (NMT) model that addresses these challenges jointly. Through experimental analysis, we demonstrate the efficacy of the proposed approach on one publicly available and one proprietary dataset.
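The core idea in the abstract, that a shared model can let the entity-linking and translation tasks reinforce each other, can be illustrated with a minimal numpy sketch. This is not the paper's BERT-based architecture; all dimensions, weights, and the two toy task heads are hypothetical, chosen only to show how a shared encoder feeds multiple task losses whose gradients would jointly shape one representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper uses a BERT encoder, not this)
d_in, d_hid, n_entities, vocab = 8, 16, 5, 10

# One shared encoder, plus one small head per task
W_enc = rng.normal(size=(d_in, d_hid)) * 0.1
W_ent = rng.normal(size=(d_hid, n_entities)) * 0.1  # entity-linking head
W_dec = rng.normal(size=(d_hid, vocab)) * 0.1       # query-translation head

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(x, ent_label, tok_label):
    """Encode the input once, then score it under both task heads.

    The training signal is the sum of the two cross-entropy losses, so
    gradients from both tasks would flow into the shared encoder -- the
    mutual reinforcement the abstract attributes to end-to-end
    multi-task learning.
    """
    h = np.tanh(x @ W_enc)       # shared representation
    p_ent = softmax(h @ W_ent)   # task 1: which KG entity is mentioned
    p_tok = softmax(h @ W_dec)   # task 2: next token of the target query
    return -np.log(p_ent[ent_label]) - np.log(p_tok[tok_label])

x = rng.normal(size=d_in)        # stand-in for an encoded NL question
loss = joint_loss(x, ent_label=2, tok_label=7)
```

Summing the per-task losses is the simplest multi-task combination; weighted sums or task sampling are common variants, but the shared-encoder structure is the same.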

Citation (APA)

Srivastava, S., Patidar, M., Chowdhury, S., Agarwal, P., Bhattacharya, I., & Shroff, G. (2021). Complex question answering on knowledge graphs using machine translation and multi-task learning. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3428–3429). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.300
