Self-supervised knowledge triplet learning for zero-shot question answering

47 citations · 165 Mendeley readers

Abstract

The aim of all Question Answering (QA) systems is to generalize to unseen questions. Current supervised methods are reliant on expensive data annotation. Moreover, such annotations can introduce unintended annotator bias, making systems focus more on the bias than the actual task. This work proposes Knowledge Triplet Learning (KTL), a self-supervised task over knowledge graphs. We propose heuristics to create synthetic graphs for commonsense and scientific knowledge. We propose using KTL to perform zero-shot question answering, and our experiments show considerable improvements over large pre-trained transformer language models.
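The abstract's core idea is that knowledge triplet learning trains a model to recover any one element of a (head, relation, tail) triple from the other two, and that zero-shot QA then reduces to scoring each answer candidate as the missing element. The following is a minimal, illustrative sketch of that scoring step only; the toy bag-of-words encoder, the function names, and the example triple are assumptions for illustration, not the paper's architecture (which uses trained neural encoders over synthetic knowledge graphs).

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'encoder' standing in for a learned triplet model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(head, relation, candidates):
    """Rank tail candidates for the incomplete triple (head, relation, ?).

    In KTL a trained model predicts the missing element; here the
    'prediction' is simply the combined head+relation embedding, so the
    best candidate is the one most similar to that query vector."""
    query = embed(head + " " + relation)
    return sorted(candidates, key=lambda c: cosine(query, embed(c)), reverse=True)

# Hypothetical commonsense-style query: which answer completes the triple?
ranked = rank_candidates("bird", "is capable of",
                         ["a bird can fly", "a car can drive"])
print(ranked[0])
```

The design point this sketch mirrors is that no question-specific supervision is needed at inference time: any scoring function over (head, relation, candidate-tail) triples, however it was trained, yields a zero-shot answer selector.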

Citation (APA)
Banerjee, P., & Baral, C. (2020). Self-supervised knowledge triplet learning for zero-shot question answering. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 151–162). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.11
