Dynamic concept composition for zero-example event detection

Citations: 38 · Mendeley readers: 38

Abstract

In this paper, we focus on automatically detecting events in unconstrained videos without any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model under the assumption that events (e.g., birthday party) can be described by multiple mid-level semantic concepts (e.g., "blowing candle", "birthday cake"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. We then evaluate the semantic correlation of each concept with respect to the event of interest and select the relevant concept classifiers, which are applied to all test videos to obtain multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each test video by exploring a set of online videos with free-form text descriptions of their content. To validate the effectiveness of the proposed approach, we conducted extensive experiments on the TRECVID MEDTest 2014, MEDTest 2013, and CCV datasets. The experimental results confirm the superiority of the proposed approach.
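The pipeline the abstract describes (pre-trained concept classifiers, semantic selection of relevant concepts, weighted fusion of their scores) can be illustrated with a minimal sketch. This is not the authors' implementation: cosine similarity over toy concept embeddings stands in for the semantic-correlation step, the similarity-based fixed weights correspond to the baseline the paper improves upon (the paper's contribution is learning per-video weights instead), and all names, dimensions, and data below are hypothetical.

```python
# Minimal sketch of zero-example event detection via concept composition.
# Assumptions: concept embeddings are toy random vectors standing in for
# word/text embeddings; classifier scores are random stand-ins for the
# outputs of pre-trained concept classifiers on test videos.

import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_relevant_concepts(event_emb, concept_embs, k=3):
    """Rank concepts by semantic correlation with the event; keep top-k."""
    sims = np.array([cosine(event_emb, c) for c in concept_embs])
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

def fuse_scores(concept_scores, weights):
    """Combine per-concept prediction scores into one event score per video.
    With fixed weights this is the baseline the paper contrasts against."""
    return concept_scores @ weights

# --- toy data ---------------------------------------------------------
rng = np.random.default_rng(0)
n_concepts, dim, n_videos = 10, 16, 5
concept_embs = rng.normal(size=(n_concepts, dim))          # e.g. word vectors
event_emb = concept_embs[2] + 0.1 * rng.normal(size=dim)   # "birthday party"

idx, sims = select_relevant_concepts(event_emb, concept_embs, k=3)
weights = sims / sims.sum()                                # fixed, similarity-based

# Scores of the selected concept classifiers on each test video.
concept_scores = rng.uniform(size=(n_videos, len(idx)))
event_scores = fuse_scores(concept_scores, weights)
print("per-video event scores:", np.round(event_scores, 3))
```

The proposed method replaces the fixed `weights` above with weights optimized per test video, using online videos with text descriptions as a bridge between the visual scores and the event semantics.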

Cite

APA: Chang, X., Yang, Y., Long, G., Zhang, C., & Hauptmann, A. G. (2016). Dynamic concept composition for zero-example event detection. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 3464–3470). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10474
