Character Grounding and Re-identification in Story of Videos and Text Descriptions

Abstract

We address character grounding and re-identification in multiple story-based videos, such as movies, and their associated text descriptions. To solve these related tasks in a mutually rewarding way, we propose a model named Character in Story Identification Network (CiSIN). Our method builds two semantically informative representations via joint training of multiple objectives for character grounding, video/text re-identification, and gender prediction: a Visual Track Embedding from videos and a Textual Character Embedding from text context. These two representations retain rich multimodal semantic information that enables even simple MLPs to achieve state-of-the-art performance on the target tasks. Specifically, our CiSIN model achieves the best performance on the Fill-in the Characters task of the LSMDC 2019 challenges [35]. Moreover, it outperforms previous state-of-the-art models on the M-VAD Names dataset [30], a benchmark for multimodal character grounding and re-identification.
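To make the described setup concrete, below is a minimal illustrative sketch (not the authors' code) of the general pattern the abstract outlines: a visual track encoder and a textual character encoder projected into a shared space, with small MLP heads for grounding, re-identification, and gender prediction. All module names, feature dimensions, and head designs here are assumptions made for this example.

```python
# Illustrative sketch only: joint visual/textual character embeddings with MLP heads.
# Dimensions and module names are hypothetical, chosen for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    """Two-layer MLP used for every prediction head."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, out_dim)
        )

    def forward(self, x):
        return self.net(x)


class CharacterEmbeddingSketch(nn.Module):
    def __init__(self, visual_dim=2048, text_dim=768, embed_dim=512):
        super().__init__()
        # Project pre-extracted person-track features and contextual text features
        # for character mentions into a shared embedding space.
        self.visual_proj = MLP(visual_dim, embed_dim, embed_dim)  # visual track embedding
        self.text_proj = MLP(text_dim, embed_dim, embed_dim)      # textual character embedding
        # Simple heads operating on the learned embeddings.
        self.grounding_head = MLP(2 * embed_dim, embed_dim, 1)    # track <-> mention match score
        self.gender_head = MLP(embed_dim, embed_dim, 2)           # binary gender logits

    def forward(self, track_feats, mention_feats):
        v = F.normalize(self.visual_proj(track_feats), dim=-1)    # (T, D)
        t = F.normalize(self.text_proj(mention_feats), dim=-1)    # (M, D)
        # Character grounding: score every (track, mention) pair.
        pairs = torch.cat(
            [v.unsqueeze(1).expand(-1, t.size(0), -1),
             t.unsqueeze(0).expand(v.size(0), -1, -1)], dim=-1)
        grounding_scores = self.grounding_head(pairs).squeeze(-1)  # (T, M)
        # Re-identification: cosine similarity between track embeddings.
        reid_sim = v @ v.t()                                       # (T, T)
        gender_logits = self.gender_head(v)                        # (T, 2)
        return grounding_scores, reid_sim, gender_logits


if __name__ == "__main__":
    model = CharacterEmbeddingSketch()
    tracks = torch.randn(5, 2048)    # e.g., 5 person tracks in a clip
    mentions = torch.randn(3, 768)   # e.g., 3 character mentions in the description
    g, r, gen = model(tracks, mentions)
    print(g.shape, r.shape, gen.shape)  # (5, 3), (5, 5), (5, 2)
```

The point of the sketch is the abstract's claim that, once the two embeddings are trained jointly on the related objectives, the task-specific predictors themselves can remain simple MLPs.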

Cite (APA)

Yu, Y., Kim, J., Yun, H., Chung, J., & Kim, G. (2020). Character Grounding and Re-identification in Story of Videos and Text Descriptions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12350 LNCS, pp. 543–559). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58558-7_32
