Relation prediction in multilingual data based on multimodal relational topic models


Abstract

There are increasing demands for improved analysis of multimodal data that consist of multiple representations, such as multilingual documents and text-annotated images. One promising approach for analyzing such multimodal data is latent topic models. In this paper, we propose conditionally independent generalized relational topic models (CI-gRTM) for predicting unknown relations across multiple different representations of multimodal data. We developed CI-gRTM as a multimodal extension of discriminative relational topic models called generalized relational topic models (gRTM). We demonstrated through experiments with multilingual documents that CI-gRTM can more effectively predict both multilingual representations and relations between two different language representations, compared with several state-of-the-art baseline models that can predict only either multilingual representations or unimodal relations.
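To illustrate the general idea behind discriminative relational topic models such as gRTM, the sketch below scores a candidate relation between two documents (for example, an English and a Japanese document) from their latent topic proportions, using a full pairwise topic-weight matrix followed by a sigmoid. This is a minimal, hypothetical illustration only, not the authors' CI-gRTM formulation; the topic proportions theta_en and theta_ja, the weight matrix W, and the bias b are placeholder values rather than quantities from the paper.

```python
# Minimal sketch of gRTM-style link prediction between two documents'
# topic proportions. Illustration only; not the authors' CI-gRTM model.
import numpy as np

def link_probability(theta_i, theta_j, W, b=0.0):
    """Probability of a relation between documents i and j.

    gRTM-style score: a full K x K weight matrix W is applied to the
    pair of K-dimensional topic proportions, then squashed by a sigmoid.
    """
    score = theta_i @ W @ theta_j + b
    return 1.0 / (1.0 + np.exp(-score))

K = 4  # number of latent topics (hypothetical)
rng = np.random.default_rng(0)

# Topic proportions for an English and a Japanese document (placeholder
# values; in a trained model these would come from posterior inference).
theta_en = rng.dirichlet(np.ones(K))
theta_ja = rng.dirichlet(np.ones(K))

W = rng.normal(size=(K, K))  # learned pairwise topic weights (placeholder)
print(f"Predicted relation probability: {link_probability(theta_en, theta_ja, W):.3f}")
```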

Citation (APA)

Sakata, Y., & Eguchi, K. (2017). Relation prediction in multilingual data based on multimodal relational topic models. IEICE Transactions on Information and Systems, E100D(4), 741–749. https://doi.org/10.1587/transinf.2016DAP0021
