Deep learning-based concept detection in vitrivr


Abstract

This paper presents the most recent additions to the vitrivr retrieval stack, which will be put to the test in the context of the 2019 Video Browser Showdown (VBS). The vitrivr stack has been extended by approaches for detecting, localizing, or describing concepts and actions in video scenes using various convolutional neural networks. Leveraging those additions, we have added support for searching the video collection based on semantic sketches. Furthermore, vitrivr offers new types of labels for text-based retrieval. In the same vein, we have also improved upon vitrivr’s pre-existing capabilities for extracting text from video through scene text recognition. Moreover, the user interface has received a major overhaul so as to make it more accessible to novice users, especially for query formulation and result exploration.
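The abstract describes concept detection on video content with convolutional neural networks, whose labels then feed text-based retrieval. The following is only a minimal illustrative sketch of that general idea, not vitrivr's actual implementation (the vitrivr stack, built around Cineast, is written in Java); it assumes PyTorch and torchvision are available, and the file name in the usage comment is hypothetical.

```python
# Illustrative sketch only: labels a single video keyframe with a pretrained
# convolutional network. This is NOT vitrivr's code; it merely shows the
# general pattern of CNN-based concept detection on extracted keyframes.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for a pretrained classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)
model.eval()

def detect_concepts(keyframe_path, top_k=5):
    """Return the top-k class indices and scores for one extracted keyframe."""
    image = Image.open(keyframe_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)
    with torch.no_grad():
        scores = torch.softmax(model(batch), dim=1)  # class probabilities
    values, indices = scores.topk(top_k, dim=1)
    return list(zip(indices[0].tolist(), values[0].tolist()))

# Hypothetical usage: "keyframe.jpg" stands in for a shot keyframe extracted
# from a video segment; the resulting concept labels could then be indexed
# and queried through text-based retrieval.
# print(detect_concepts("keyframe.jpg"))
```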

Citation (APA)

Rossetto, L., Amiri Parian, M., Gasser, R., Giangreco, I., Heller, S., & Schuldt, H. (2019). Deep learning-based concept detection in vitrivr. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11296 LNCS, pp. 616–621). Springer Verlag. https://doi.org/10.1007/978-3-030-05716-9_55
