Syntax-Informed Question Answering with Heterogeneous Graph Transformer

Abstract

Large neural language models steadily deliver state-of-the-art performance on question answering and other natural language and information processing tasks, but they are expensive to train. We propose to evaluate whether such pre-trained models can benefit from the addition of explicit linguistic information without requiring retraining from scratch. We present a linguistically informed question answering approach that extends and fine-tunes a pre-trained transformer-based neural language model with symbolic knowledge encoded with a heterogeneous graph transformer. We illustrate the approach with the addition of syntactic information in the form of dependency and constituency graph structures connecting tokens and virtual vertices. A comparative empirical performance evaluation with BERT as the baseline, on the Stanford Question Answering Dataset, demonstrates the competitiveness of the proposed approach. We argue, in conclusion and in the light of further results from preliminary experiments, that the approach is extensible to further linguistic information, including semantics and pragmatics.
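
As a rough sketch of the idea described above (not the authors' implementation), the snippet below builds a heterogeneous syntax graph over a sentence, uses BERT token embeddings as node features, and refines them with one Heterogeneous Graph Transformer layer. PyTorch Geometric's HGTConv stands in for the HGT encoder, spaCy supplies the dependency parse, and the single "virtual" sentence vertex, the node and edge types, and the hidden sizes are illustrative assumptions rather than the paper's exact design.

# Minimal sketch, assuming PyTorch Geometric's HGTConv as the HGT encoder
# and spaCy for dependency parsing. Not the authors' code.
import torch
import spacy
from transformers import AutoTokenizer, AutoModel
from torch_geometric.nn import HGTConv

nlp = spacy.load("en_core_web_sm")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

sentence = "The cat sat on the mat."
doc = nlp(sentence)
words = [t.text for t in doc]

# 1. BERT embeddings, one vector per word (first word-piece of each word).
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state[0]          # (num_wordpieces, 768)
first_piece = {}
for pos, wid in enumerate(enc.word_ids()):
    if wid is not None and wid not in first_piece:
        first_piece[wid] = pos
token_x = torch.stack([hidden[first_piece[i]] for i in range(len(words))])

# 2. Heterogeneous syntax graph: 'token' vertices linked by dependency edges,
#    plus one hypothetical 'virtual' vertex (standing in for constituency
#    structure) connected to every token.
dep_edges = torch.tensor(
    [[t.i for t in doc if t.head.i != t.i],
     [t.head.i for t in doc if t.head.i != t.i]], dtype=torch.long)
virt_edges = torch.tensor(
    [list(range(len(words))), [0] * len(words)], dtype=torch.long)

x_dict = {"token": token_x, "virtual": torch.zeros(1, 768)}
edge_index_dict = {
    ("token", "dep", "token"): dep_edges,
    ("token", "in", "virtual"): virt_edges,
}
metadata = (["token", "virtual"],
            [("token", "dep", "token"), ("token", "in", "virtual")])

# 3. One HGT layer mixes syntactic structure into the contextual embeddings;
#    the resulting token states could feed a standard span-prediction QA head.
hgt = HGTConv(in_channels=768, out_channels=768, metadata=metadata, heads=4)
out = hgt(x_dict, edge_index_dict)
print(out["token"].shape)   # (num_tokens, 768)

In practice the syntax-aware token representations produced this way would be combined with (or fine-tuned jointly with) the BERT encoder before the answer-span prediction layer, which is the fine-tuning setup the abstract describes.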

Citation (APA)

Zhu, F., Tan, L. Y., Ng, S. K., & Bressan, S. (2022). Syntax-Informed Question Answering with Heterogeneous Graph Transformer. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13426 LNCS, pp. 17–31). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-12423-5_2
