Automating video stream processing to infer situations of interest has been an ongoing challenge, a problem now exacerbated by the sheer volume of surveillance and monitoring video being generated. Currently, manual or context-specific customized techniques are used for this purpose. In contrast, non-procedural query specification and processing (e.g., the Structured Query Language, or SQL) is well established, effective, scalable, and widely used, and stream processing has extended this approach to sensor data. The focus of this work is to extend and apply these well-established non-procedural query processing techniques to inferring situations from video streams. This entails extracting appropriate information from video frames and choosing a suitable representation for expressing situations as queries. In this paper, we elaborate on what to extract, how to extract it, and the data model proposed for representing the extracted data for situation analysis using queries. We focus on moving-object extraction, an object's location in the frame, relevant features of an object, and identification of objects across frames, along with algorithms and experimental results. Our long-term goal is to establish a framework for adapting stream and event processing techniques to real-time analysis of video streams.
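The abstract does not specify the extraction algorithm or data model; as a minimal sketch of the overall idea, the following uses simple frame differencing (a hypothetical stand-in for the paper's moving-object extraction) to turn detections on synthetic frames into relational tuples, which can then be queried non-procedurally with SQL:

```python
import sqlite3

def detect_moving_object(background, frame, threshold=30):
    """Return the bounding box (min_row, min_col, max_row, max_col) of
    pixels differing from the background by more than `threshold`,
    or None if nothing changed. Names here are illustrative only."""
    changed = [
        (r, c)
        for r, row in enumerate(frame)
        for c, val in enumerate(row)
        if abs(val - background[r][c]) > threshold
    ]
    if not changed:
        return None
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (min(rows), min(cols), max(rows), max(cols))

# Synthetic 4x4 grayscale frames: a bright 2x2 blob moves one column
# to the right in each frame.
background = [[0] * 4 for _ in range(4)]
frames = []
for start in (0, 1, 2):
    f = [row[:] for row in background]
    for r in (1, 2):
        for c in (start, start + 1):
            f[r][c] = 200
    frames.append(f)

# Store the extracted representation as relational tuples so that
# "situations" can be expressed as ordinary SQL queries.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE objects "
    "(frame_id INTEGER, r1 INTEGER, c1 INTEGER, r2 INTEGER, c2 INTEGER)"
)
for fid, frame in enumerate(frames):
    bbox = detect_moving_object(background, frame)
    if bbox is not None:
        conn.execute("INSERT INTO objects VALUES (?, ?, ?, ?, ?)", (fid, *bbox))

# A non-procedural situation query: frames in which the object's right
# edge reaches column 2 (e.g., crossing a virtual tripwire).
hits = conn.execute(
    "SELECT frame_id FROM objects WHERE c2 >= 2 ORDER BY frame_id"
).fetchall()
print(hits)  # → [(1,), (2,)]
```

The point of the sketch is the separation of concerns the paper argues for: per-frame extraction produces a queryable representation, after which situation inference is expressed declaratively rather than as custom procedural code.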
Annappa, M., Chakravarthy, S., & Athitsos, V. (2016). Pre-processing of video streams for extracting queryable representation of its contents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10073 LNCS, pp. 301–311). Springer Verlag. https://doi.org/10.1007/978-3-319-50832-0_29