Action anticipation with RBF kernelized feature mapping RNN

Abstract

We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation, called the feature mapping RNN. Our architecture builds on three effective principles of machine learning: parameter sharing, Radial Basis Function (RBF) kernels, and adversarial training. Using only some of the earliest frames of a video, the feature mapping RNN generates future features with a fraction of the parameters needed by a traditional RNN. By feeding these future features into a simple multilayer perceptron equipped with an RBF kernel layer, we accurately predict the action in the video. In our experiments, we improve on the prior state of the art for action anticipation by 18% on the JHMDB-21 dataset, 6% on UCF101-24, and 13% on UT-Interaction.
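To make the two ideas named in the abstract concrete, below is a minimal PyTorch sketch of (a) an RNN that autoregressively maps observed frame features to predicted future features and (b) an MLP whose first layer is an RBF kernel layer. The single-layer GRU, the Gaussian RBF with learnable centres, the mean pooling, and all dimensions are illustrative assumptions, not the paper's exact architecture; the adversarial training component is omitted here.

```python
# Sketch only: a feature-generating RNN plus an RBF-kernelised classifier.
import torch
import torch.nn as nn


class RBFLayer(nn.Module):
    """Maps inputs to RBF activations exp(-gamma * ||x - c_k||^2)
    against a set of learnable centres c_k (assumed Gaussian form)."""

    def __init__(self, in_dim, num_centres, gamma=1.0):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(num_centres, in_dim))
        self.gamma = gamma

    def forward(self, x):
        # x: (batch, in_dim) -> squared distance to every centre
        dists = torch.cdist(x, self.centres) ** 2   # (batch, num_centres)
        return torch.exp(-self.gamma * dists)


class FeatureMappingRNN(nn.Module):
    """Given features of the earliest observed frames, autoregressively
    generate features for future frames, then classify the action with
    an MLP whose first layer is an RBF kernel layer."""

    def __init__(self, feat_dim=512, hidden=256, centres=64, num_classes=21):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.to_feat = nn.Linear(hidden, feat_dim)  # hidden state -> feature
        self.classifier = nn.Sequential(
            RBFLayer(feat_dim, centres),
            nn.Linear(centres, num_classes),
        )

    def forward(self, observed, future_steps=10):
        # observed: (batch, t_obs, feat_dim) features of the earliest frames
        _, h = self.rnn(observed)
        feat = self.to_feat(h[-1])                  # first predicted feature
        preds = [feat]
        for _ in range(future_steps - 1):
            _, h = self.rnn(feat.unsqueeze(1), h)   # feed prediction back in
            feat = self.to_feat(h[-1])
            preds.append(feat)
        future = torch.stack(preds, dim=1)          # (batch, T, feat_dim)
        # average-pool the predicted future features, then classify
        return self.classifier(future.mean(dim=1))


# Usage: anticipate an action from the first 5 frames' features.
model = FeatureMappingRNN()
obs = torch.randn(2, 5, 512)
logits = model(obs)                                 # (2, 21)
```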

Citation (APA)

Shi, Y., Fernando, B., & Hartley, R. (2018). Action anticipation with RBF kernelized feature mapping RNN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11214 LNCS, pp. 305–322). Springer Verlag. https://doi.org/10.1007/978-3-030-01249-6_19
