Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing

Abstract

Interactive semantic parsing based on natural language (NL) feedback, where users provide feedback to correct the parser's mistakes, has emerged as a more practical scenario than traditional one-shot semantic parsing. However, prior work has relied heavily on human-annotated feedback data to train the interactive semantic parser, which is prohibitively expensive and does not scale. In this work, we propose a new task of simulating NL feedback for interactive semantic parsing. We accompany the task with a novel feedback evaluator, specifically designed to assess the quality of the simulated feedback; based on its assessments, we select the best feedback simulator from among our proposed variants. On a text-to-SQL dataset, we show that our feedback simulator can generate high-quality NL feedback that boosts a parser's error-correction ability. In low-data settings, our feedback simulator helps achieve error-correction performance comparable to that of a parser trained on the costly, full set of human annotations.
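
To make the described pipeline concrete, below is a minimal, hypothetical Python sketch of the data-generation loop the abstract implies: a parser produces an (incorrect) parse, a feedback simulator verbalizes the mistake as NL feedback, and the resulting triples can train the parser's error-correction step in place of human annotations. All names here (Parser, FeedbackSimulator, simulate_training_data) are illustrative assumptions, not the authors' actual interfaces.

```python
# Hypothetical sketch of generating simulated-feedback training data for
# interactive semantic parsing (text-to-SQL). Class and method names are
# illustrative assumptions, not the paper's actual API.

from dataclasses import dataclass

@dataclass
class Interaction:
    question: str   # user's NL question
    wrong_sql: str  # parser's erroneous prediction
    gold_sql: str   # reference parse
    feedback: str   # simulated NL feedback describing the mistake

def simulate_training_data(parser, simulator, questions, gold_parses):
    """Build (question, wrong parse, simulated feedback) triples that can
    train the parser's error-correction ability without human annotators."""
    data = []
    for question, gold_sql in zip(questions, gold_parses):
        pred_sql = parser.parse(question)
        if pred_sql == gold_sql:
            continue  # only erroneous parses need corrective feedback
        # The simulator contrasts the wrong parse with the gold parse and
        # verbalizes the difference as natural-language feedback.
        feedback = simulator.generate(question, pred_sql, gold_sql)
        data.append(Interaction(question, pred_sql, gold_sql, feedback))
    return data
```

A feedback evaluator, as proposed in the paper, would then score such simulated feedback for quality before it is used to train the error-correcting parser.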

Citation (APA)

Yan, H., Srivastava, S., Tai, Y., Wang, S. I., Yih, W. T., & Yao, Z. (2023). Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3149–3170). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.177
