DCA: Diversified Co-attention Towards Informative Live Video Commenting


Abstract

We focus on the task of Automatic Live Video Commenting (ALVC), which aims to generate real-time video comments with both video frames and other viewers’ comments as inputs. A major challenge in this task is how to properly leverage the rich and diverse information carried by video and text. In this paper, we aim to collect diversified information from video and text for informative comment generation. To achieve this, we propose a Diversified Co-Attention (DCA) model for this task. Our model builds bidirectional interactions between video frames and surrounding comments from multiple perspectives via metric learning, to collect a diversified and informative context for comment generation. We also propose an effective parameter orthogonalization technique to avoid excessive overlap of information learned from different perspectives. Results show that our approach outperforms existing methods in the ALVC task, achieving new state-of-the-art results.
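The abstract describes bidirectional co-attention between video frames and surrounding comments under multiple learned metrics ("perspectives"), plus an orthogonalization term that discourages the perspectives from capturing overlapping information. The following is a minimal numpy sketch of that idea, not the paper's actual implementation: the bilinear metric form, the Frobenius-norm cross-product penalty, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(V, T, W):
    # One "perspective": bilinear similarity under a learned metric W.
    # S[i, j] scores video frame i against comment token j.
    S = V @ W @ T.T
    a_v2t = softmax(S, axis=1)    # each frame attends over comment tokens
    a_t2v = softmax(S.T, axis=1)  # each token attends over video frames
    # Return a text-aware video context and a video-aware text context.
    return a_v2t @ T, a_t2v @ V

def orthogonality_penalty(Ws):
    # Penalize overlap between perspectives: squared Frobenius norm of
    # pairwise cross products (zero when the metrics are mutually orthogonal).
    loss = 0.0
    for i in range(len(Ws)):
        for j in range(i + 1, len(Ws)):
            loss += np.linalg.norm(Ws[i].T @ Ws[j], ord="fro") ** 2
    return loss

# Toy shapes: 5 frames, 7 comment tokens, feature dim 8, 3 perspectives.
d, n, m, K = 8, 5, 7, 3
V = rng.normal(size=(n, d))                        # video frame features
T = rng.normal(size=(m, d))                        # surrounding-comment features
Ws = [rng.normal(size=(d, d)) for _ in range(K)]   # one metric per perspective

contexts = [co_attention(V, T, W) for W in Ws]     # diversified contexts
penalty = orthogonality_penalty(Ws)                # added to the training loss
```

In training, the K context pairs would be fused and fed to the comment decoder, with `penalty` weighted into the loss so the perspectives stay diverse.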

Citation (APA)

Zhang, Z., Yin, Z., Ren, S., Li, X., & Li, S. (2020). DCA: Diversified Co-attention Towards Informative Live Video Commenting. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12431 LNAI, pp. 3–15). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60457-8_1
