Annotating objects and relations in user-generated videos

Citations: 119 · Mendeley readers: 40

Abstract

Understanding objects and the relations between them is indispensable to fine-grained video content analysis, a topic widely studied in recent multimedia and computer vision research. However, existing works are limited to evaluation on either small datasets or indirect metrics, such as performance on still images. The underlying reason is that constructing a large-scale video dataset with dense annotations is difficult and costly. In this paper, we address several key issues in annotating objects and relations in user-generated videos and propose an annotation pipeline that can be executed at a modest cost. As a result, we present a new dataset, named VidOR, consisting of 10k videos (84 hours) together with dense annotations that localize 80 categories of objects and 50 categories of predicates in each video. We have made the training and validation sets public and extendable for more tasks to facilitate future research on video object and relation recognition.
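The dense annotations described above pair per-frame object trajectories with temporally grounded relation triplets. The Python sketch below is a minimal, hypothetical illustration of how one such per-video record might be represented and loaded; the field names (tid, subject/objects, relation_instances, subject_tid, predicate, begin_fid, end_fid) and the JSON layout are assumptions for illustration, not the dataset's official schema.

    # Hypothetical sketch of loading a densely annotated VidOR-style record.
    # Field names and file layout are assumptions for illustration only.
    import json
    from dataclasses import dataclass

    @dataclass
    class RelationInstance:
        subject_tid: int   # trajectory id of the subject entity
        object_tid: int    # trajectory id of the object entity
        predicate: str     # one of the 50 predicate categories
        begin_fid: int     # first frame index where the relation holds
        end_fid: int       # last frame index where the relation holds

    def load_annotation(path):
        """Load one per-video annotation file (assumed to be JSON)."""
        with open(path) as f:
            anno = json.load(f)
        # Entities: each has a trajectory id and one of the 80 object categories.
        objects = {o["tid"]: o["category"] for o in anno["subject/objects"]}
        # Relations: temporally grounded triplets between two trajectories.
        relations = [RelationInstance(**r) for r in anno["relation_instances"]]
        return objects, relations

Under these assumptions, a single video yields a map from trajectory id to object category plus a list of relation instances, each anchored to a frame interval, which is what "dense annotation" of both objects and predicates amounts to in practice.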

Citation (APA)

Shang, X., Di, D., Xiao, J., Cao, Y., Yang, X., & Chua, T. S. (2019). Annotating objects and relations in user-generated videos. In ICMR 2019 - Proceedings of the 2019 ACM International Conference on Multimedia Retrieval (pp. 279–287). Association for Computing Machinery, Inc. https://doi.org/10.1145/3323873.3325056
