Abstract
Weakly supervised semantic parsing (WSP) aims at training a parser from utterance-denotation pairs. This task is challenging because it requires (1) searching for consistent logical forms in a huge space; and (2) dealing with spurious logical forms. In this work, we propose Learning from Mistakes (LFM), a simple yet effective learning framework for WSP. LFM utilizes the mistakes made by a parser during search, i.e., generated logical forms that do not execute to the correct denotations, to tackle both challenges. In a nutshell, LFM additionally trains the parser on utterance-logical form pairs created from mistakes, which can quickly bootstrap the parser to search for consistent logical forms. It also motivates the parser to learn the correct mapping between utterances and logical forms, thus dealing with the spuriousness of logical forms. We evaluate LFM on WikiTableQuestions, WikiSQL, and TabFact in the WSP setting. The parser trained with LFM outperforms the previous state-of-the-art semantic parsing approaches on all three datasets. We also find that LFM can substantially reduce the need for labeled data: using only 10% of the utterance-denotation pairs, the parser achieves 84.2% denotation accuracy on WikiSQL, which is competitive with previous state-of-the-art approaches using 100% of the labeled data.
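The core idea in the abstract can be sketched as a filtering step over searched candidates: logical forms that execute to the correct denotation serve as the usual weak supervision, while the mistakes are turned into additional utterance-logical form training pairs. The sketch below is a minimal illustration assuming a toy executor and a hypothetical pseudo-utterance scheme for mistakes; none of the function names or the pairing scheme come from the paper itself.

```python
# Hedged sketch of the Learning-from-Mistakes split described in the abstract.
# `execute`, `lfm_training_pairs`, and the "<pseudo>" utterance scheme are
# illustrative stand-ins, not the authors' actual implementation.

def execute(logical_form, table):
    """Toy executor: logical forms are Python expressions over `table`."""
    try:
        return eval(logical_form, {"table": table})
    except Exception:
        return None  # ill-formed programs simply fail to execute

def lfm_training_pairs(utterance, denotation, candidates, table):
    """Split searched candidate logical forms into consistent pairs and
    mistake-derived pairs, both usable as parser training data."""
    consistent, mistakes = [], []
    for lf in candidates:
        if execute(lf, table) == denotation:
            consistent.append((utterance, lf))  # standard weak supervision
        else:
            # Mistake: pair the logical form with a pseudo-utterance so the
            # parser still learns an utterance-to-logical-form mapping for it
            # (a hypothetical surrogate for the paper's mistake-derived pairs).
            mistakes.append(("<pseudo> " + lf, lf))
    return consistent, mistakes
```

In a full training loop, both lists would be fed to the same sequence-to-sequence parser, with the mistake-derived pairs bootstrapping the search and discouraging spurious mappings.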
Guo, J., Lou, J. G., Liu, T., & Zhang, D. (2021). Weakly Supervised Semantic Parsing by Learning from Mistakes. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 2603–2617). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.222