A comparative study of video annotation tools for scene understanding: Yet (not) another annotation tool

Abstract

Computers are powerful tools capable of solving a great variety of increasingly complex problems, yet training them to interpret even the simplest video scenes can prove more challenging than one might imagine. Scene understanding remains one of the major problems in computer vision and is currently being addressed with promising deep learning approaches for recognizing objects and their semantics. To this end, large artificial neural networks are fed with vast numbers of human-created annotations, produced with more or less sophisticated tools that speed up the otherwise time-consuming task of manual annotation. Purposefully refraining from designing yet another of these annotation tools, in this work we instead evaluate what makes existing ones effective, i.e. we aim at determining the effectiveness and efficiency of state-of-the-art object annotation tools when they are employed for annotating different kinds of video content. Our findings from a user study evaluating three comparable tools on three videos from distinct domains indicate a significant difference in annotation effort between videos, but no significant difference between the tools themselves. Furthermore, we determine a significant correlation between annotation time and accuracy.
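The abstract does not state which statistical test underlies the reported time-accuracy correlation; as a minimal sketch only (assuming a Pearson correlation over per-annotation measurements, with purely hypothetical data), such an analysis could look like this in Python:

# Minimal sketch, hypothetical data: testing whether time spent annotating
# correlates with the accuracy of the resulting annotation.
from scipy.stats import pearsonr

# Illustrative per-annotation measurements (NOT values from the paper):
# seconds spent on each annotation and an IoU-style accuracy score.
annotation_time = [12.4, 8.1, 15.0, 9.7, 20.3, 11.2]
accuracy = [0.81, 0.62, 0.88, 0.70, 0.93, 0.75]

r, p_value = pearsonr(annotation_time, accuracy)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")  # "significant" if p < 0.05

The actual study may well have used a different correlation measure (e.g. Spearman's rank correlation for non-normal data); this sketch only illustrates the kind of computation the reported finding implies.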

Citation (APA)

Kletz, S., Leibetseder, A., & Schoeffmann, K. (2019). A comparative study of video annotation tools for scene understanding: Yet (not) another annotation tool. In Proceedings of the 10th ACM Multimedia Systems Conference, MMSys 2019 (pp. 133–144). Association for Computing Machinery, Inc. https://doi.org/10.1145/3304109.3306223
