Learning to map frequent phrases to sub-structures of meaning representation for neural semantic parsing


Abstract

Neural semantic parsers typically generate meaning-representation tokens from natural-language tokens via an encoder-decoder model. However, there is often a vocabulary mismatch between natural-language utterances and logical forms: a single word may map to several atomic logical tokens, which should be handled as a whole rather than generated token by token over multiple decoding steps. In this paper, we propose that this vocabulary-mismatch problem can be effectively resolved by choosing logical tokens of an appropriate granularity. Specifically, we exploit macro actions, which are at the same granularity as words/phrases, and allow the model to learn mappings from frequent phrases to the corresponding sub-structures of the meaning representation. Furthermore, macro actions are compact, so using them significantly reduces the search space, which greatly benefits weakly supervised semantic parsing. Experiments show that our method yields substantial performance improvements on three benchmarks, in both supervised and weakly supervised settings.
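The core idea can be illustrated with a toy sketch: frequent phrase-to-substructure mappings are mined from aligned data, and at decoding time a single macro action expands into several atomic logical tokens. The names, toy grammar, and frequency threshold below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of macro actions for semantic parsing (assumed, toy setup):
# a frequent phrase maps to a whole sub-structure of the meaning representation,
# so the decoder emits it in one step instead of several atomic steps.
from collections import Counter

# Toy aligned corpus of (utterance phrase, atomic logical-token sequence) pairs.
ALIGNED = [
    ("largest city", ["argmax", "city", "population"]),
    ("largest city", ["argmax", "city", "population"]),
    ("smallest state", ["argmin", "state", "area"]),
]

def extract_macros(aligned, min_freq=2):
    """Keep phrase -> sub-structure mappings that occur at least min_freq times."""
    counts = Counter((phrase, tuple(toks)) for phrase, toks in aligned)
    return {phrase: list(toks) for (phrase, toks), c in counts.items() if c >= min_freq}

def decode_with_macros(action_seq, macros):
    """Expand each macro action into its atomic logical tokens."""
    out = []
    for action in action_seq:
        out.extend(macros.get(action, [action]))  # non-macros pass through unchanged
    return out

macros = extract_macros(ALIGNED)
# One decoder step ("largest city") replaces three atomic steps.
print(decode_with_macros(["answer", "largest city"], macros))
# → ['answer', 'argmax', 'city', 'population']
```

Because a macro action covers several atomic tokens in a single step, the decoder's output sequences are shorter, which is one way to see why the search space shrinks in the weakly supervised setting.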

Citation (APA)

Chen, B., Han, X., He, B., & Sun, L. (2020). Learning to map frequent phrases to sub-structures of meaning representation for neural semantic parsing. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 7546–7553). AAAI press. https://doi.org/10.1609/aaai.v34i05.6253
