SMC: Single-Stage Multi-location Convolutional Network for Temporal Action Detection

Abstract

Temporal action detection in untrimmed videos is an important and challenging visual task. State-of-the-art works typically adopt a multi-stage pipeline, i.e., class-agnostic segment proposal followed by multi-label action classification. This pipeline is computationally slow and hard to optimize, as each stage needs to be trained separately. Moreover, a desirable method should go beyond segment-level localization and make dense predictions with precise boundaries. In this paper we introduce a novel detection model, the Single-stage Multi-location Convolutional Network (SMC), which completely eliminates proposal generation and spatio-temporal feature resampling, and predicts frame-level action locations with class probabilities in a unified end-to-end network. Specifically, we associate a set of multi-scale default locations with each feature map cell in multiple layers, then predict the offsets to these default locations as well as the action categories. In practice, SMC is faster than existing methods (753 FPS on a Titan X Maxwell GPU) and achieves state-of-the-art performance on THUMOS'14 and MEXaction2.
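The core idea, multi-scale default locations per feature-map cell with predicted offsets, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the (center, width) parameterization, and the SSD-style offset decoding are assumptions based on the abstract's description.

```python
import math

def default_locations(num_cells, scales):
    """Place multi-scale default (center, width) temporal anchors in each
    feature-map cell, normalized to [0, 1] over the video's length.
    (Hypothetical helper; the paper defines the exact anchor layout.)"""
    anchors = []
    for i in range(num_cells):
        center = (i + 0.5) / num_cells      # cell center in normalized time
        for s in scales:
            anchors.append((center, s))     # one anchor per scale
    return anchors

def decode(anchor, offsets):
    """Apply predicted (dc, dw) offsets to a default location, SSD-style:
    center' = center + dc * width;  width' = width * exp(dw).
    (Assumed parameterization, common in single-stage detectors.)"""
    c, w = anchor
    dc, dw = offsets
    return (c + dc * w, w * math.exp(dw))

# A coarse feature map of 4 cells, each with two default scales:
anchors = default_locations(4, [0.25, 0.5])
# Zero offsets return the default location unchanged:
segment = decode(anchors[0], (0.0, 0.0))
```

In the full network, a convolutional head over each feature map would emit the offset pair plus per-class scores for every default location, so detection is a single forward pass with no proposal stage.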

Citation (APA)

Liu, Z., Wang, Z., Zhao, Y., & Tian, Y. (2019). SMC: Single-Stage Multi-location Convolutional Network for Temporal Action Detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11362 LNCS, pp. 179–195). Springer Verlag. https://doi.org/10.1007/978-3-030-20890-5_12
