Extracting latent attributes from video scenes using text as background knowledge


Abstract

We explore the novel task of identifying latent attributes in video scenes, such as the mental states of actors, using only large text collections as background knowledge and minimal information about the videos, such as activity and actor types. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms. We develop and test several largely unsupervised information extraction models that identify the mental states of human participants in video scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models as well as other baseline methods.

Citation (APA)

Tran, A., Surdeanu, M., & Cohen, P. (2014). Extracting latent attributes from video scenes using text as background knowledge. In Proceedings of the 3rd Joint Conference on Lexical and Computational Semantics, *SEM 2014 (pp. 121–131). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/s14-1016
