A joint model for semantic sequences: Frames, entities, sentiments

11 citations · 90 Mendeley readers

Abstract

Understanding stories – sequences of events – is a crucial yet challenging natural language understanding task. These events typically carry multiple aspects of semantics, including actions, entities and emotions. Not only does each individual aspect contribute to the meaning of the story, but so does the interaction among these aspects. Building on this intuition, we propose to jointly model important aspects of semantic knowledge – frames, entities and sentiments – via a semantic language model. We achieve this by first representing each aspect’s semantic units at an appropriate level of abstraction and then using the resulting vector representations to learn a joint representation via a neural language model. We show that the joint semantic language model is of high quality and can generate better semantic sequences than models that operate on the word level. We further demonstrate that our joint model improves performance on the story cloze test and shallow discourse parsing tasks, and that each semantic aspect contributes to the model.
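The modeling idea in the abstract – represent each event by its frame, entity and sentiment units, combine their vectors into a joint event representation, and feed the sequence to a neural language model – can be sketched as follows. This is an illustrative toy version, not the authors' implementation: the vocabularies, dimensions, random weights and the plain-RNN architecture are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: each event carries three semantic units (a frame, an
# entity role, and a sentiment label). Their embeddings are concatenated into
# one event vector, and a vanilla RNN language model scores the next frame.
# All vocabularies, dimensions and weights below are illustrative assumptions.

rng = np.random.default_rng(0)

FRAMES = ["Arriving", "Motion", "Statement", "Commerce_buy"]
ENTITIES = ["protagonist", "other", "none"]
SENTIMENTS = ["positive", "neutral", "negative"]

D_FRAME, D_ENT, D_SENT, D_HID = 8, 4, 4, 16

# Randomly initialized embedding tables (one row per semantic unit).
E_frame = rng.normal(size=(len(FRAMES), D_FRAME))
E_ent = rng.normal(size=(len(ENTITIES), D_ENT))
E_sent = rng.normal(size=(len(SENTIMENTS), D_SENT))

D_IN = D_FRAME + D_ENT + D_SENT
W_in = rng.normal(size=(D_HID, D_IN)) * 0.1        # input-to-hidden weights
W_hh = rng.normal(size=(D_HID, D_HID)) * 0.1       # hidden-to-hidden weights
W_out = rng.normal(size=(len(FRAMES), D_HID)) * 0.1  # hidden-to-frame logits

def event_vector(frame, entity, sentiment):
    """Joint representation of one event: concatenated aspect embeddings."""
    return np.concatenate([
        E_frame[FRAMES.index(frame)],
        E_ent[ENTITIES.index(entity)],
        E_sent[SENTIMENTS.index(sentiment)],
    ])

def next_frame_distribution(events):
    """Run the RNN over the event sequence; softmax over the next frame."""
    h = np.zeros(D_HID)
    for frame, entity, sentiment in events:
        h = np.tanh(W_in @ event_vector(frame, entity, sentiment) + W_hh @ h)
    logits = W_out @ h
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

story = [("Motion", "protagonist", "neutral"),
         ("Arriving", "protagonist", "positive")]
probs = next_frame_distribution(story)
print({f: round(float(p), 3) for f, p in zip(FRAMES, probs)})
```

A model like this could be queried for tasks such as the story cloze test by scoring which candidate ending's semantic units the language model finds most probable given the story so far.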

Citation (APA)

Peng, H., Chaturvedi, S., & Roth, D. (2017). A joint model for semantic sequences: Frames, entities, sentiments. In CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings (pp. 173–183). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k17-1019
