Learning semantic correspondences with less supervision

250 citations · 243 Mendeley readers

Abstract

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty: Robocup sportscasting, weather forecasts (a new domain), and NFL recaps.
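To make the generative story concrete, the sketch below illustrates the kind of model the abstract describes: latent choices first select a record from the world state, then a field of that record, then words expressing that field's value. This is a minimal toy sketch, not the paper's model; the record types, field names, and lexicon are invented for illustration, and the paper's actual parameterization and learning procedure are not reproduced here.

```python
import random

# Toy world state: each record has a type and field values, loosely in the
# spirit of a weather-forecast domain. These schemas are illustrative
# assumptions, not the paper's actual domain representation.
WORLD_STATE = [
    {"type": "temperature", "fields": {"min": "41", "max": "56"}},
    {"type": "skyCover", "fields": {"mode": "overcast"}},
]

# Toy lexicon mapping field values to candidate words. In the paper these
# correspondences are latent and must be learned; here they are hard-coded
# so the generative story is easy to follow.
LEXICON = {
    "41": ["41", "low-40s"],
    "56": ["56", "mid-50s"],
    "overcast": ["overcast", "cloudy"],
}

def generate_text(world_state, num_utterances=2):
    """Sample text via three latent choices per utterance:
    (1) pick a record from the world state,
    (2) pick a field of that record,
    (3) emit a word for the field's value.
    Returns the words and the grounded meaning of each utterance."""
    words, alignment = [], []
    for _ in range(num_utterances):
        record = random.choice(world_state)                 # utterance-level choice
        field, value = random.choice(list(record["fields"].items()))
        word = random.choice(LEXICON.get(value, [value]))   # word-level choice
        words.append(word)
        alignment.append((record["type"], field))           # grounding for this utterance
    return words, alignment

if __name__ == "__main__":
    text, meaning = generate_text(WORLD_STATE)
    print("text:", " ".join(text))
    print("meaning:", meaning)
```

Inverting this process, i.e. inferring which record and field each span of observed text refers to, is the correspondence problem the paper addresses, with the latent choices estimated from text-world pairs alone rather than annotated alignments.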

Citation (APA)

Liang, P., Jordan, M. I., & Klein, D. (2009). Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP 2009) (pp. 91–99). Association for Computational Linguistics. https://doi.org/10.3115/1687878.1687893
