Unanimous prediction for 100% precision with application to learning semantic mappings

18 citations · 125 Mendeley readers

Abstract

Can we train a system that, on any new input, either says "don't know" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset.
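To make the unanimity principle concrete, here is a minimal sketch over a deliberately tiny finite hypothesis class. It brute-forces the set of consistent hypotheses rather than using the paper's efficient two-model check over an infinite model family, and every name in it (the toy word-to-predicate lexicon, `unanimous_predict`, and so on) is illustrative, not taken from the paper's implementation:

```python
# Toy illustration of the unanimity principle on a finite hypothesis class.
# The paper's method reasons over an infinite set of consistent linear models
# by checking just two of them; here we simply enumerate a small hypothesis
# space instead. All names below are hypothetical, for illustration only.

from itertools import product

# Hypotheses: mappings from single-word utterances to logical-form predicates.
WORDS = ["river", "city"]
PREDICATES = ["river(x)", "city(x)", "state(x)"]

def all_hypotheses():
    """Enumerate every mapping from WORDS to PREDICATES."""
    for preds in product(PREDICATES, repeat=len(WORDS)):
        yield dict(zip(WORDS, preds))

def consistent(h, training_data):
    """A hypothesis is consistent if it reproduces every training pair."""
    return all(h[x] == y for x, y in training_data)

def unanimous_predict(x, training_data):
    """Predict only if every consistent hypothesis agrees; else abstain."""
    outputs = {h[x] for h in all_hypotheses() if consistent(h, training_data)}
    return outputs.pop() if len(outputs) == 1 else None  # None = "don't know"

train = [("river", "river(x)")]
print(unanimous_predict("river", train))  # -> "river(x)": all consistent models agree
print(unanimous_predict("city", train))   # -> None: consistent models disagree, so abstain
```

The precision guarantee follows from well-specification: if the true model lies in the hypothesis class, it is among the consistent models, so any unanimous answer necessarily matches the true model's output.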

Cite (APA)

Khani, F., Rinard, M., & Liang, P. (2016). Unanimous prediction for 100% precision with application to learning semantic mappings. In 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers (Vol. 2, pp. 952–962). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p16-1090
