Abstract
State-of-the-art architectures for untrimmed video Temporal Action Localization (TAL) have only considered RGB and Flow modalities, leaving the information-rich audio modality unexploited. Audio fusion has been explored for the related but arguably easier problem of trimmed (clip-level) action recognition. However, TAL poses a unique set of challenges. In this paper, we propose simple but effective fusion-based approaches for TAL. To the best of our knowledge, our work is the first to jointly consider audio and video modalities for supervised TAL. We experimentally show that our schemes consistently improve performance for state-of-the-art video-only TAL approaches. Specifically, they help achieve a new state-of-the-art performance on large-scale benchmark datasets: ActivityNet-1.3 (54.34 mAP@0.5) and THUMOS14 (57.18 mAP@0.5). Our experiments include ablations involving multiple fusion schemes, modality combinations, and TAL architectures. Our code, models, and associated data are available at https://github.com/skelemoa/tal-hmo.
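As an illustration only (the abstract does not detail the proposed fusion schemes), below is a minimal sketch of one common audio-video fusion baseline: concatenating per-snippet video and audio features and projecting them before passing the sequence to a video-only TAL head. The ConcatFusion module name and the feature dimensions (e.g., 2048-d I3D-style video features, 128-d VGGish-style audio features) are assumptions for illustration, not the authors' architecture.

# Illustrative sketch, not the paper's method: simple concatenation-based
# fusion of temporally aligned audio and video snippet features.
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    def __init__(self, video_dim=2048, audio_dim=128, out_dim=2048):
        super().__init__()
        # Project the concatenated features back to the dimension
        # expected by an off-the-shelf video-only TAL head.
        self.proj = nn.Linear(video_dim + audio_dim, out_dim)

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, T, video_dim); audio_feats: (batch, T, audio_dim)
        fused = torch.cat([video_feats, audio_feats], dim=-1)
        return self.proj(fused)  # (batch, T, out_dim)

# Usage example with 100 temporal snippets (dimensions are hypothetical):
video = torch.randn(1, 100, 2048)
audio = torch.randn(1, 100, 128)
fused = ConcatFusion()(video, audio)  # drop-in sequence for a TAL model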
Citation
Bagchi, A., Mahmood, J., Fernandes, D., & Sarvadevabhatla, R. K. (2022). Hear Me out: Fusional Approaches for Audio Augmented Temporal Action Localization. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 144–154). Science and Technology Publications, Lda. https://doi.org/10.5220/0010832700003124