Learning latent semantic annotations for grounding natural language to structured data


Abstract

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms. In this paper, we attempt to learn explicit latent semantic annotations from paired structured tables and texts, establishing correspondences between various types of values and texts. We model the joint probability of data fields, texts, phrasal spans, and latent annotations with an adapted semi-hidden Markov model, and impose a soft statistical constraint to further improve performance. As a by-product, we leverage the induced annotations to extract templates for language generation. Experimental results suggest the feasibility of the setting in this study, as well as the effectiveness of our proposed framework.
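To make the modeling idea concrete, the following is a minimal, hypothetical sketch of a semi-Markov forward pass of the kind the abstract describes: a sentence is segmented into phrasal spans, each span is emitted by a latent field annotation (or a background "none" label), and the probabilities of all segmentations are summed. The field names, transition table, and emission scores below are toy assumptions for illustration, not the paper's actual parameters or implementation.

```python
import math

# Toy latent annotations -- illustrative assumptions, not the paper's schema.
FIELDS = ["team", "score", "none"]

# P(next field | previous field); None marks the sentence start.
TRANS = {
    None:    {"team": 0.5, "score": 0.3, "none": 0.2},
    "team":  {"team": 0.1, "score": 0.6, "none": 0.3},
    "score": {"team": 0.4, "score": 0.1, "none": 0.5},
    "none":  {"team": 0.4, "score": 0.4, "none": 0.2},
}

def span_emission(field, span):
    """Toy span-level emission P(span | field). A real model would score
    numerical values and lexical terms with field-specific distributions."""
    if field == "score":
        return 0.8 if all(tok.isdigit() for tok in span) else 0.01
    if field == "team":
        return 0.6 if " ".join(span).istitle() else 0.05
    return 0.1  # "none" absorbs background words

def forward(tokens, max_span_len=3):
    """Semi-Markov forward algorithm: alpha[i][f] is the total probability
    of covering tokens[:i] with the last span annotated as field f.
    Summing alpha[n] marginalizes over all segmentations and labelings."""
    n = len(tokens)
    alpha = [dict() for _ in range(n + 1)]
    alpha[0][None] = 1.0
    for i in range(n):
        for prev_field, prob in alpha[i].items():
            for length in range(1, min(max_span_len, n - i) + 1):
                span = tokens[i:i + length]
                for field in FIELDS:
                    p = prob * TRANS[prev_field][field] * span_emission(field, span)
                    alpha[i + length][field] = alpha[i + length].get(field, 0.0) + p
    return sum(alpha[n].values())

# Marginal likelihood of a toy sentence under the toy model.
print(forward(["Lakers", "won", "102"]))
```

In an EM-style training loop, the same dynamic program (plus its backward counterpart) would yield posterior probabilities over span–field alignments, which is where induced annotations and generation templates could be read off.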

Citation (APA)

Qin, G., Yao, J. G., Wang, X., Wang, J., & Lin, C. Y. (2018). Learning latent semantic annotations for grounding natural language to structured data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 3761–3771). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1411
