From artifact to content source: using multimodality in video to support personalized recomposition


Abstract

Video content is being produced in ever-increasing quantities, making it practically impossible for any user to see every piece of video that could be useful to them. We therefore need to look at video content differently. A video is composed of a set of features: the moving image track, the audio track, and derived features such as a transcription of the spoken words. These different features have the potential to be recomposed into new video offerings. A key step in achieving such recomposition, however, is the appropriate decomposition of those features into useful assets. A video artifact can therefore be considered a multimodal source that supports personalized and contextually aware recomposition. This work aims to propose and validate an approach for converting a video from a single artifact into a diverse, queryable content source.
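The decomposition the abstract describes can be sketched as a simple data model. This is a hypothetical illustration of the idea, not the paper's implementation: a video artifact is split into modality-specific assets (video segments, audio, transcript passages) that can then be queried and recombined independently. All class and field names here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One decomposed feature of a video (hypothetical model)."""
    modality: str   # e.g. "video", "audio", "transcript"
    start: float    # segment start time, in seconds
    end: float      # segment end time, in seconds
    content: str    # payload, or a reference to the raw data

@dataclass
class VideoSource:
    """A video treated as a queryable content source, not a single artifact."""
    title: str
    assets: list = field(default_factory=list)

    def query(self, modality=None, contains=None):
        """Return assets matching an optional modality and/or text filter."""
        results = self.assets
        if modality is not None:
            results = [a for a in results if a.modality == modality]
        if contains is not None:
            results = [a for a in results if contains.lower() in a.content.lower()]
        return results

# Decompose one artifact into multimodal assets, then select by query.
source = VideoSource("lecture-01")
source.assets += [
    Asset("transcript", 0.0, 30.0, "Welcome to the course overview."),
    Asset("transcript", 30.0, 90.0, "Today we cover personalization."),
    Asset("audio", 0.0, 90.0, "audio/lecture-01.wav"),
]
hits = source.query(modality="transcript", contains="personalization")
```

A personalized recomposition step would then assemble the returned segments (here, the transcript asset starting at 30 s) into a new offering tailored to the query.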

Citation (APA)

Salim, F. A. (2015). From artifact to content source: using multimodality in video to support personalized recomposition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9146, pp. 391–396). Springer Verlag. https://doi.org/10.1007/978-3-319-20267-9_36
