Extending long short-term memory for multi-view structured learning

Abstract

Long Short-Term Memory (LSTM) networks have been successfully applied to a number of sequence learning problems, but they lack the design flexibility to model interactions between multiple views, limiting their ability to exploit multi-view relationships. In this paper, we propose a Multi-View LSTM (MV-LSTM), which explicitly models view-specific and cross-view interactions over time or over structured outputs. We evaluate the MV-LSTM model on four publicly available datasets spanning two very different structured learning problems: multimodal behaviour recognition and image captioning. The experimental results show competitive performance on all four datasets when compared with state-of-the-art models.
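
The central idea in the abstract, memory that is partitioned per view with optional cross-view connections feeding each partition, can be sketched in a few lines. The PyTorch example below is a hypothetical illustration, not the authors' implementation: the class name TwoViewLSTMCell, the simplified fully-connected-style topology (each view also reads the other view's previous hidden state), and all parameter choices are assumptions made for the sketch.

    # Hypothetical sketch of a two-view LSTM step in the spirit of MV-LSTM;
    # not the paper's reference formulation.
    import torch
    import torch.nn as nn

    class TwoViewLSTMCell(nn.Module):
        """One time step of a two-view LSTM with a separate memory partition per view.

        With cross_view=True, each view's gates also see the other view's previous
        hidden state (a rough analogue of a fully-connected cross-view topology);
        with cross_view=False, only view-specific recurrence is used.
        """

        def __init__(self, input_sizes, hidden_size, cross_view=True):
            super().__init__()
            self.cross_view = cross_view
            extra = hidden_size if cross_view else 0
            # One standard LSTM cell per view; cross-view context is appended to its input.
            self.cells = nn.ModuleList(
                [nn.LSTMCell(in_size + extra, hidden_size) for in_size in input_sizes]
            )

        def forward(self, inputs, states):
            # inputs: list of per-view tensors, each of shape (batch, input_size_v)
            # states: list of per-view (h, c) tuples, each of shape (batch, hidden_size)
            new_states = []
            for v, (x, (h, c)) in enumerate(zip(inputs, states)):
                if self.cross_view:
                    other_h = states[1 - v][0]           # previous hidden state of the other view
                    x = torch.cat([x, other_h], dim=-1)  # cross-view interaction via the input
                new_states.append(self.cells[v](x, (h, c)))
            return new_states

    # Usage: two views (e.g. audio and visual features) with different dimensionalities.
    batch, hidden = 4, 32
    cell = TwoViewLSTMCell(input_sizes=[20, 50], hidden_size=hidden)
    inputs = [torch.randn(batch, 20), torch.randn(batch, 50)]
    states = [(torch.zeros(batch, hidden), torch.zeros(batch, hidden)) for _ in range(2)]
    states = cell(inputs, states)

In this sketch, dropping the cross-view concatenation recovers a purely view-specific model, which mirrors the kind of topology choice the abstract describes at a high level.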

Citation (APA)

Rajagopalan, S. S., Morency, L. P., Baltrušaitis, T., & Goecke, R. (2016). Extending long short-term memory for multi-view structured learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9911 LNCS, pp. 338–353). Springer Verlag. https://doi.org/10.1007/978-3-319-46478-7_21
