Learning from audience intelligence: Dynamic labeled LDA model for time-sync commented video tagging

Abstract

With the boom of online video uploading, video tagging has become an important means of video indexing. However, existing text-based video tagging methods ignore either the genre labels or the temporal differences of videos, which leaves their results defective. Fortunately, a new type of video, the time-sync commented video, contains large amounts of user-contributed comment information that helps video tagging. In this paper, we propose a supervised dynamic Latent Dirichlet Allocation model that utilizes the time-varying topics of time-sync comments to extract both genre labels and keywords as tags. We also conduct experiments on large-scale real-world datasets, and the effectiveness of our model is demonstrated against baseline models in both genre label classification and keyword extraction.
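The paper's supervised dynamic labeled LDA is not reproduced here, but the general idea of mining tags from time-sync comments can be illustrated with a minimal sketch: bin comments into fixed time windows along the video timeline and fit a plain (unlabeled) LDA per window to surface candidate keyword tags. The window size, the toy comment data, and the use of scikit-learn's LatentDirichletAllocation are assumptions for illustration only and do not reflect the authors' model or datasets.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy time-sync ("bullet") comments as (timestamp in seconds, text) pairs;
# the data and the 60-second window size are illustrative assumptions.
comments = [
    (12.0, "great fight scene"),
    (15.5, "the sword fight choreography is amazing"),
    (18.3, "best fight in the whole show"),
    (305.2, "this song is so catchy"),
    (310.8, "love the background music here"),
    (312.4, "the music gives me chills"),
]

WINDOW = 60.0  # seconds per time window

# Group comments by the time window they fall into.
windows = defaultdict(list)
for ts, text in comments:
    windows[int(ts // WINDOW)].append(text)

# Fit a small LDA per window and print its top words as candidate keyword tags.
for win_id, texts in sorted(windows.items()):
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=1, random_state=0)
    lda.fit(counts)
    vocab = vec.get_feature_names_out()
    top_idx = lda.components_[0].argsort()[::-1][:3]
    print(f"window {win_id}: candidate tags = {[str(vocab[i]) for i in top_idx]}")
```

In contrast to this sketch, the paper's model is supervised (genre labels constrain the topics) and dynamic (topics evolve over the video timeline), which is what allows it to output both genre labels and time-aware keywords as tags.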

Citation (APA)

Zeng, Z., Xue, C., Gao, N., Wang, L., & Liu, Z. (2018). Learning from audience intelligence: Dynamic labeled LDA model for time-sync commented video tagging. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11303 LNCS, pp. 546–559). Springer Verlag. https://doi.org/10.1007/978-3-030-04182-3_48
