Generating summary videos based on visual and sound information from movies

Abstract

Vast quantities of video data are now widely available and easily accessible. Because users encounter so many videos, video summarization technology is needed to help them find videos that match their preferences. This study focuses on movies and proposes a method for extracting important scenes based on visual and sound information, and verifies the degree of harmony of the extracted scenes. The extracted segments can then be used to generate summary videos.
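
The sketch below illustrates one way such a scene-extraction step could look: fixed-length segments are scored by combining a visual measure with a sound measure, and the highest-scoring segments are kept as candidates for the summary. This is not the authors' method; the particular features (mean frame-to-frame difference, RMS audio energy), the fixed segment length, and the weighted combination are illustrative assumptions.

import numpy as np

def segment_scores(frames, audio, fps, sample_rate, seg_seconds=2.0, w_visual=0.5):
    """Return one combined importance score per fixed-length segment.

    frames: array of shape (n_frames, H, W) with grayscale pixel values.
    audio:  1-D array of audio samples aligned with the video.
    """
    frames = np.asarray(frames, dtype=np.float32)
    audio = np.asarray(audio, dtype=np.float32)

    frames_per_seg = int(seg_seconds * fps)
    samples_per_seg = int(seg_seconds * sample_rate)
    n_segs = min(len(frames) // frames_per_seg, len(audio) // samples_per_seg)
    if n_segs == 0:
        return np.array([])

    scores = []
    for i in range(n_segs):
        f = frames[i * frames_per_seg:(i + 1) * frames_per_seg]
        a = audio[i * samples_per_seg:(i + 1) * samples_per_seg]

        # Visual score: mean absolute difference between consecutive frames,
        # a rough proxy for motion and scene activity.
        visual = np.abs(np.diff(f, axis=0)).mean() if len(f) > 1 else 0.0

        # Sound score: root-mean-square energy of the segment's audio.
        sound = np.sqrt(np.mean(a ** 2)) if len(a) else 0.0

        scores.append((visual, sound))

    visual_s, sound_s = np.array(scores).T

    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    # Weighted combination of the normalized visual and sound scores.
    return w_visual * norm(visual_s) + (1.0 - w_visual) * norm(sound_s)

def top_segments(scores, k=5):
    """Indices of the k highest-scoring segments, in chronological order."""
    return sorted(np.argsort(scores)[-k:])

A summary video would then be assembled by concatenating the selected segments in chronological order; the weight w_visual controls the balance between the visual and sound channels.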

Citation (APA)

Imaji, Y., & Fujisawa, M. (2015). Generating summary videos based on visual and sound information from movies. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9172, pp. 190–203). Springer Verlag. https://doi.org/10.1007/978-3-319-20612-7_19
