Abstract
In this work we propose to exploit context sensor data for analyzing user-generated videos. First, we perform a low-level indexing of the recorded media with the instantaneous compass orientations of the recording device. Subsequently, we exploit this low-level indexing to obtain a higher-level indexing: discovering camera panning movements, classifying them, and identifying the Region of Interest (ROI) of the recorded event. Thus, we extract information about the content not by content analysis but by analyzing sensor data. Furthermore, we develop an automatic remixing system that exploits the obtained high-level indexing to produce a video remix. We show that the proposed sensor-based analysis can correctly detect and classify camera panning and identify the ROI; in addition, we provide examples of their application to automatic video remixing. © 2012 Springer-Verlag.
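The idea of detecting and classifying camera pans from compass data alone can be illustrated with a minimal sketch. The input format, threshold value, and the use of a circular mean as a stand-in for the ROI direction are assumptions for illustration, not the paper's actual method:

```python
import math

def angle_diff(a, b):
    """Smallest signed difference a - b in degrees, in (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

def detect_pans(samples, min_speed=15.0):
    """Find panning segments in a compass trace.

    samples: list of (time_s, heading_deg) pairs (hypothetical format).
    min_speed: angular-speed threshold in deg/s above which motion counts
               as a pan (illustrative value, not taken from the paper).
    Returns a list of (start_idx, end_idx, direction) tuples, where
    direction is 'left' or 'right' (a simple classification of the pan).
    """
    pans, start, direction = [], None, None
    for i in range(1, len(samples)):
        (t0, h0), (t1, h1) = samples[i - 1], samples[i]
        speed = angle_diff(h1, h0) / (t1 - t0)  # deg/s, wrap-around safe
        if abs(speed) >= min_speed:
            if start is None:
                start, direction = i - 1, 'right' if speed > 0 else 'left'
        elif start is not None:
            pans.append((start, i - 1, direction))
            start = None
    if start is not None:
        pans.append((start, len(samples) - 1, direction))
    return pans

def region_of_interest(samples):
    """Circular mean of the recorded headings: a crude stand-in for the
    dominant pointing direction of the camera, in degrees [0, 360)."""
    x = sum(math.cos(math.radians(h)) for _, h in samples)
    y = sum(math.sin(math.radians(h)) for _, h in samples)
    return math.degrees(math.atan2(y, x)) % 360.0
```

For example, a trace that holds at 90°, sweeps to 180° over three seconds, and holds again yields one rightward pan and an ROI direction between the two rest headings. Note the wrap-around-safe angle difference: naive subtraction would misread a pan crossing north (359° to 1°) as a near-full rotation.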
CITATION STYLE
Cricri, F., Curcio, I. D. D., Mate, S., Dabov, K., & Gabbouj, M. (2012). Sensor-based analysis of user generated video for multi-camera video remixing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7131 LNCS, pp. 255–265). https://doi.org/10.1007/978-3-642-27355-1_25