Multi-sensored vision for autonomous production of personalized video summaries

Abstract

Democratic and personalized production of multimedia content is a challenge for content providers. In this paper, members of the FP7 APIDIS consortium explain how this challenge can be addressed by building on computer vision tools to automate the collection and distribution of audiovisual content. In a typical application scenario, a network of cameras covers the scene of interest, and distributed analysis and interpretation of the scene are exploited to decide what to show, or not to show, about the event, so as to edit a video from a valuable subset of the streams provided by the individual cameras. The generation of personalized summaries through automatic organization of stories is also considered. Finally, the proposed technology provides practical solutions to a wide range of applications, such as personalized access to local sport events through a web portal, cost-effective and fully automated production of content for small audiences, or automatic logging of annotations. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
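As a rough illustration of the production principle sketched in the abstract (not the actual APIDIS implementation), the snippet below assumes each camera stream has already been analyzed into per-frame "interest" scores, and greedily selects, for every time instant, the camera whose view appears most informative, with a small penalty for switching viewpoints so the edited video stays smooth. All names and parameters (`interest`, `switch_penalty`) are hypothetical.

```python
# Illustrative sketch only: pick, for each time step, the camera whose
# analyzed interest score is highest, with a switching penalty so the
# edited summary does not cut between viewpoints too often.
from typing import List


def select_camera_sequence(interest: List[List[float]],
                           switch_penalty: float = 0.2) -> List[int]:
    """interest[t][c] = estimated value of showing camera c at time t.

    Returns, for each time step, the index of the camera to include in
    the edited video (a simple greedy stand-in for a real
    viewpoint-selection optimization)."""
    selected: List[int] = []
    current = None
    for scores in interest:
        best_cam, best_score = 0, float("-inf")
        for cam, score in enumerate(scores):
            # Penalize changing viewpoint to favor temporally smooth editing.
            if current is not None and cam != current:
                score -= switch_penalty
            if score > best_score:
                best_cam, best_score = cam, score
        selected.append(best_cam)
        current = best_cam
    return selected


if __name__ == "__main__":
    # Three time steps, two cameras: camera 1 becomes slightly better at t=1,
    # but the switching penalty keeps camera 0 until the gain is clear.
    scores = [[0.9, 0.3], [0.5, 0.6], [0.2, 0.9]]
    print(select_camera_sequence(scores))  # -> [0, 0, 1]
```

In a personalized setting, the per-camera scores could additionally be weighted by user preferences (for example, a favorite player or team) before selection, which is one way the summary could be tailored to individual viewers.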

Citation (APA)

Chen, F., Delannay, D., & De Vleeschouwer, C. (2012). Multi-sensored vision for autonomous production of personalized video summaries. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (Vol. 60 LNICST, pp. 113–122). https://doi.org/10.1007/978-3-642-35145-7_15
