Weakly-supervised semantic parsers are trained on utterance-denotation pairs, treating logical forms as latent. The task is challenging due to the large search space and the spuriousness of logical forms. In this paper we introduce a neural parser-ranker system for weakly-supervised semantic parsing. The parser generates candidate tree-structured logical forms from utterances, guided by clues from the denotations. These candidates are then ranked based on two criteria: their likelihood of executing to the correct denotation, and their agreement with the utterance semantics. We present a scheduled training procedure to balance the contribution of the two objectives. Furthermore, we propose a neurally encoded lexicon to inject prior domain knowledge into the model. Experiments on three Freebase datasets demonstrate the effectiveness of our semantic parser, which achieves results within the state-of-the-art range.
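As a rough illustration of the scheduled training idea described above, the sketch below blends two ranking scores for a candidate logical form: its log-likelihood of executing to the correct denotation and its log-likelihood of agreeing with the utterance semantics. This is not the authors' implementation; the function names, the linear warmup schedule, and the interpolation form are all assumptions made for illustration.

```python
# Hypothetical sketch of a scheduled two-objective ranking score.
# Neither the schedule nor the blending is taken from the paper.

def schedule_weight(step: int, warmup_steps: int) -> float:
    """Linearly ramp the weight of the semantic-agreement objective
    from 0 to 1 over the first `warmup_steps` training steps."""
    return min(1.0, step / warmup_steps)

def combined_score(denotation_logprob: float,
                   agreement_logprob: float,
                   step: int,
                   warmup_steps: int = 1000) -> float:
    """Interpolate the two log-scores according to the schedule:
    early training leans on denotation correctness, later training
    shifts weight toward agreement with the utterance semantics."""
    w = schedule_weight(step, warmup_steps)
    return (1.0 - w) * denotation_logprob + w * agreement_logprob
```

Under this toy schedule, `combined_score(-1.0, -2.0, 0)` returns the pure denotation score (-1.0), while at `step >= warmup_steps` it returns the pure agreement score (-2.0).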
Cheng, J., & Lapata, M. (2018). Weakly-supervised neural semantic parsing with a generative ranker. In CoNLL 2018 - 22nd Conference on Computational Natural Language Learning, Proceedings (pp. 356–367). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k18-1035