Learnable pooling methods for video classification


Abstract

We introduce modifications to state-of-the-art approaches to aggregating local video descriptors, using attention mechanisms and function approximations. Rather than using ensembles of existing architectures, we provide insight into creating new architectures. We demonstrate our solutions in "The 2nd YouTube-8M Video Understanding Challenge" using frame-level video and audio descriptors. We obtain testing accuracy similar to the state of the art while meeting budget constraints, and touch upon strategies to improve the state of the art. Model implementations are available at https://github.com/pomonam/LearnablePoolingMethods.
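To make the idea of learnable, attention-based pooling of frame-level descriptors concrete, the sketch below shows a minimal single-query soft-attention pooling in plain NumPy: per-frame scores are softmax-normalized and used to form a weighted average of the frame features. The function name, the single learnable weight vector, and the toy dimensions are illustrative assumptions only; they are not taken from the authors' repository, whose architectures (multiple attention mechanisms, function approximations) are considerably more elaborate.

```python
import numpy as np

def attention_pool(frame_features, w):
    """Aggregate per-frame descriptors into one video-level descriptor.

    frame_features: (num_frames, feature_dim) array of frame-level features.
    w: (feature_dim,) learnable attention vector (a single-query head;
       a hypothetical stand-in for the paper's richer pooling modules).
    """
    scores = frame_features @ w                      # (num_frames,) attention logits
    scores = scores - scores.max()                   # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over frames
    return weights @ frame_features                  # weighted average, (feature_dim,)

# Toy usage: 300 frames of 1024-d descriptors (YouTube-8M frame-level RGB
# features are 1024-d; the audio features are 128-d and would be pooled
# analogously).
rng = np.random.default_rng(0)
frames = rng.standard_normal((300, 1024))
w = rng.standard_normal(1024)
video_descriptor = attention_pool(frames, w)
print(video_descriptor.shape)  # (1024,)
```

In a trained model, `w` (or a larger attention module in its place) would be learned jointly with the downstream classifier so that informative frames receive higher pooling weights.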

Citation (APA)

Kmiec, S., Bae, J., & An, R. (2019). Learnable pooling methods for video classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11132 LNCS, pp. 229–238). Springer Verlag. https://doi.org/10.1007/978-3-030-11018-5_21
