On The Ingredients of an Effective Zero-shot Semantic Parser

Abstract

Semantic parsers map natural language utterances into meaning representations (e.g., programs). Such models are typically bottlenecked by the paucity of training data, owing to laborious annotation efforts. Recent studies have performed zero-shot learning by synthesizing training examples of canonical utterances and programs from a grammar, then paraphrasing these utterances to improve linguistic diversity. However, such synthetic examples cannot fully capture patterns in real data. In this paper we analyze zero-shot parsers through the lenses of the language and logical gaps (Herzig and Berant, 2019), which quantify the discrepancy in linguistic and programmatic patterns between synthetic canonical examples and real-world user-issued ones. We propose bridging these gaps using improved grammars, stronger paraphrasers, and efficient learning methods that select the canonical examples most likely to reflect real user intents. Our model achieves strong results on the SCHOLAR and GEO benchmarks with zero labeled data.
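To make the synthesis step concrete, the following is a minimal toy sketch (not the authors' actual grammar or codebase) of how a synchronous grammar can jointly expand canonical-utterance templates and program templates into paired training examples; all rule names and templates here are hypothetical.

```python
# Toy synchronous grammar: each rule pairs a canonical-utterance
# template with a program template, so expanding them jointly
# yields aligned (utterance, program) training pairs.
GRAMMAR = {
    "field": [("title", "paper.title"), ("year", "paper.year")],
    "query": [("what is the {field} of the paper",
               "SELECT {field_prog} FROM paper")],
}

def synthesize():
    """Expand every query template with every field rule to
    produce canonical (utterance, program) pairs."""
    pairs = []
    for utt_tpl, prog_tpl in GRAMMAR["query"]:
        for field_utt, field_prog in GRAMMAR["field"]:
            pairs.append((utt_tpl.format(field=field_utt),
                          prog_tpl.format(field_prog=field_prog)))
    return pairs

for utt, prog in synthesize():
    print(utt, "->", prog)
```

In the pipeline the abstract describes, pairs like these would then be fed to a paraphraser to diversify the canonical utterances before training the parser.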

Citation (APA)

Yin, P., Wieting, J., Sil, A., & Neubig, G. (2022). On The Ingredients of an Effective Zero-shot Semantic Parser. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1455–1474). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.103
