This paper proposes a new linguistic-perceptual event model tailored to spatio-temporal event detection and conceptual-visual personalized retrieval in sports video sequences. The major contributions of the proposed model are its hierarchical structure, the independence between its linguistic and perceptual parts, and its ability to capture the temporal information of sports events. Owing to these contributions, model events can easily be upgraded from simple to complex levels, either through self-study from inner knowledge or by being taught from plug-in additional knowledge. Thus, the proposed model not only works well in poorly structured environments but can also adapt itself to new domains with little or no external re-programming, re-configuring, or re-adjusting. Thorough experimental results demonstrate that events are modeled and detected with high accuracy and automation, and that users' expectations of personalized retrieval are highly satisfied. © 2009 Springer Berlin Heidelberg.
CITATION STYLE
Dao, M. S., Nath, S. I., & Babaguchi, N. (2009). A new linguistic-perceptual event model for spatio-temporal event detection and personalized retrieval of sports video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5716 LNCS, pp. 594–603). https://doi.org/10.1007/978-3-642-04146-4_64