Abstract
Semantic parsing maps natural language questions into logical forms, which can be executed against a knowledge base for answers. In real-world applications, the performance of a parser is often limited by the lack of training data. To facilitate zero-shot learning, data synthesis has been widely studied to automatically generate paired questions and logical forms. However, data synthesis methods can hardly cover the diverse structures in natural languages, leading to a large gap in sentence structure between synthetic and natural questions. In this paper, we propose a decomposition-based method to unify the sentence structures of questions, which benefits the generalization to natural questions. Experiments demonstrate that our method significantly improves the semantic parser trained on synthetic data (+7.9% on KQA and +8.9% on ComplexWebQuestions in terms of exact match accuracy). Extensive analysis demonstrates that our method can better generalize to natural questions with novel text expressions compared with baselines. Besides semantic parsing, our idea potentially benefits other semantic understanding tasks by mitigating the distracting structure features. To illustrate this, we extend our method to the task of sentence embedding learning, and observe substantial improvements on sentence retrieval (+13.1% for Hit@1).
Niu, Y., Huang, F., Liu, W., Cui, J., Wang, B., & Huang, M. (2023). Bridging the Gap between Synthetic and Natural Questions via Sentence Decomposition for Semantic Parsing. Transactions of the Association for Computational Linguistics, 11, 367–383. https://doi.org/10.1162/tacl_a_00552