Weakly-supervised neural semantic parsing with a generative ranker

Citations: 10
Mendeley readers: 94

Abstract

Weakly-supervised semantic parsers are trained on utterance-denotation pairs, treating logical forms as latent. The task is challenging due to the large search space and the spuriousness of logical forms. In this paper we introduce a neural parser-ranker system for weakly-supervised semantic parsing. The parser generates candidate tree-structured logical forms from utterances using cues from denotations. These candidates are then ranked based on two criteria: their likelihood of executing to the correct denotation, and their agreement with the utterance semantics. We present a scheduled training procedure to balance the contribution of the two objectives. Furthermore, we propose to use a neurally encoded lexicon to inject prior domain knowledge into the model. Experiments on three Freebase datasets demonstrate the effectiveness of our semantic parser, achieving results within the state-of-the-art range.
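The ranking step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the score names, the linear schedule, and the toy candidates are all illustrative assumptions about how two ranking objectives might be interpolated over training.

```python
# Hypothetical sketch of scheduled ranking, not the paper's actual code.
# Each candidate logical form carries two scores (assumed names):
#   p_denotation - likelihood of executing to the correct denotation
#   p_agreement  - agreement with the utterance semantics

def schedule_weight(epoch, total_epochs):
    """Linearly shift emphasis between the two objectives over training
    (one possible scheduling choice; the paper's schedule may differ)."""
    return epoch / max(1, total_epochs - 1)

def rank_candidates(candidates, epoch, total_epochs):
    """Rank candidate logical forms by a scheduled mix of both scores."""
    w = schedule_weight(epoch, total_epochs)
    scored = [
        ((1 - w) * c["p_denotation"] + w * c["p_agreement"], c["form"])
        for c in candidates
    ]
    return [form for _, form in sorted(scored, reverse=True)]

# Toy candidates for an utterance like "how many rivers are there?"
candidates = [
    {"form": "count(type.river)", "p_denotation": 0.9, "p_agreement": 0.4},
    {"form": "type.river",        "p_denotation": 0.2, "p_agreement": 0.8},
]
# Early in training the denotation objective dominates;
# late in training the agreement objective dominates.
print(rank_candidates(candidates, epoch=0, total_epochs=10))
print(rank_candidates(candidates, epoch=9, total_epochs=10))
```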

Citation (APA)

Cheng, J., & Lapata, M. (2018). Weakly-supervised neural semantic parsing with a generative ranker. In CoNLL 2018 - 22nd Conference on Computational Natural Language Learning, Proceedings (pp. 356–367). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k18-1035
