Visual Summarization of Scholarly Videos Using Word Embeddings and Keyphrase Extraction


Abstract

Effective learning with audiovisual content depends on many factors. Besides the quality of a learning resource's content, it is essential to discover the most relevant and suitable video in order to support the learning process effectively. Video summarization techniques facilitate this goal by providing a quick overview of the content. This is especially useful for longer recordings such as conference presentations or lectures. In this paper, we present a domain-specific approach that generates a visual summary of video content using solely textual information. For this purpose, we exploit video annotations that are automatically generated by speech recognition and video OCR (optical character recognition). The textual information is represented by semantic word embeddings and extracted keyphrases. We demonstrate the feasibility of the proposed approach through its incorporation into the TIB AV-Portal (http://av.tib.eu/), a platform for scientific videos. The accuracy and usefulness of the generated video content visualizations are evaluated in a user study.
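The core idea of combining word embeddings with keyphrase extraction can be illustrated with a minimal sketch: candidate phrases drawn from a video's transcript are scored by the similarity of their embedding to the embedding of the whole document. The toy word vectors, the averaging scheme, and the candidate phrases below are all illustrative assumptions; the paper's actual embedding model and keyphrase extractor are not specified in this abstract.

```python
import math

# Toy 3-dimensional word vectors standing in for real semantic
# embeddings (illustrative only; not the paper's actual model).
VECTORS = {
    "video":   [0.9, 0.1, 0.0],
    "lecture": [0.8, 0.3, 0.1],
    "summary": [0.7, 0.4, 0.2],
    "speech":  [0.6, 0.5, 0.1],
    "ocr":     [0.5, 0.6, 0.3],
    "banana":  [0.0, 0.1, 0.9],
}

def embed(words):
    """Average the vectors of known words (zero vector if none match)."""
    vecs = [VECTORS[w] for w in words if w in VECTORS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_keyphrases(transcript_words, candidates):
    """Score each candidate phrase by the cosine similarity between
    its embedding and the whole-transcript embedding, then sort."""
    doc_vec = embed(transcript_words)
    scored = [(phrase, cosine(embed(phrase.split()), doc_vec))
              for phrase in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical transcript (e.g. from speech recognition / video OCR)
# and candidate phrases from a keyphrase extractor.
transcript = "video lecture speech ocr video summary lecture".split()
candidates = ["video lecture", "speech ocr", "banana"]
ranking = rank_keyphrases(transcript, candidates)
```

On this toy input, on-topic phrases such as "video lecture" rank above the unrelated "banana", mirroring how an embedding-based ranking surfaces the phrases most representative of the whole transcript; the top-ranked phrases could then be rendered as the visual summary (e.g. sized by score).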

Citation (APA)

Zhou, H., Otto, C., & Ewerth, R. (2019). Visual summarization of scholarly videos using word embeddings and keyphrase extraction. In Lecture Notes in Computer Science (Vol. 11799, pp. 327–335). Springer. https://doi.org/10.1007/978-3-030-30760-8_28
