Deterministic statistical mapping of sentences to underspecified semantics


Abstract

We present a method for training a statistical model for mapping natural language sentences to semantic expressions. The semantics are expressions of an underspecified logical form that has properties making it particularly suitable for statistical mapping from text. An encoding of the semantic expressions into dependency trees with automatically generated labels allows application of existing methods for statistical dependency parsing to the mapping task (without the need for separate traditional dependency labels or parts of speech). The encoding also results in a natural per-word semantic-mapping accuracy measure. We report on the results of training and testing statistical models for mapping sentences of the Penn Treebank into the semantic expressions, for which per-word semantic mapping accuracy ranges between 79% and 86% depending on the experimental conditions. The particular choice of algorithms used also means that our trained mapping is deterministic (in the sense of deterministic parsing), paving the way for large-scale text-to-semantic mapping.
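To make the per-word framing concrete, below is a minimal sketch (not from the paper) of the idea that once the semantic expression is encoded as one head-plus-label decision per word, accuracy can be measured word by word. The `Edge` class, the automatically generated label strings such as "L7", and the function `per_word_accuracy` are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    head: int    # index of the head word (0 = artificial root)
    label: str   # automatically generated semantic label (hypothetical names)

def per_word_accuracy(gold: list[Edge], predicted: list[Edge]) -> float:
    """Fraction of words whose (head, label) decision matches the gold encoding."""
    assert len(gold) == len(predicted)
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Toy example: a three-word sentence encoded as one edge per word.
gold = [Edge(2, "L7"), Edge(3, "L12"), Edge(0, "L1")]
pred = [Edge(2, "L7"), Edge(3, "L9"),  Edge(0, "L1")]
print(per_word_accuracy(gold, pred))  # 0.666... (two of three words correct)
```

Because every word contributes exactly one scored decision, this measure parallels attachment accuracy in dependency parsing, which is presumably what makes existing dependency-parsing machinery directly applicable to the mapping task.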

Citation (APA)

Alshawi, H., Chang, P. C., & Ringgaard, M. (2011). Deterministic statistical mapping of sentences to underspecified semantics. In Proceedings of the 9th International Conference on Computational Semantics, IWCS 2011 (pp. 15–24). Association for Computational Linguistics, ACL Anthology. https://doi.org/10.1007/978-94-007-7284-7_2
