End-to-End Joint Semantic Segmentation of Actors and Actions in Video

7 citations · 166 Mendeley readers


Abstract

Traditional video understanding tasks include human action recognition and actor/object semantic segmentation. However, the combined task of semantically segmenting different actor classes together with their action classes remains challenging yet necessary for many applications. In this work, we propose a new end-to-end architecture for tackling this task in videos. Our model effectively leverages multiple input modalities, contextual information, and multitask learning in a single unified framework that directly outputs semantic segmentations. We train and benchmark our model on the Actor-Action Dataset (A2D) for joint actor-action semantic segmentation, and demonstrate state-of-the-art performance for both segmentation and detection. We also perform experiments verifying that our approach improves zero-shot recognition performance, indicating the generalizability of our jointly learned feature space.
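To make the joint formulation concrete, the sketch below shows one plausible way to share a backbone between a per-pixel actor head and a per-pixel action head and train both with a combined multitask loss. This is not the architecture from the paper; the backbone, layer sizes, class counts, and the loss weight w are all illustrative assumptions.

```python
# Minimal sketch of joint actor-action semantic segmentation with
# multitask learning. NOT the authors' architecture: the backbone,
# class counts, and loss weighting here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointActorActionNet(nn.Module):
    def __init__(self, num_actors=8, num_actions=10, feat_dim=64):
        super().__init__()
        # Shared convolutional backbone over video frames (placeholder).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Two per-pixel classification heads over the same shared features.
        self.actor_head = nn.Conv2d(feat_dim, num_actors, 1)
        self.action_head = nn.Conv2d(feat_dim, num_actions, 1)

    def forward(self, frames):
        feats = self.backbone(frames)
        return self.actor_head(feats), self.action_head(feats)

def multitask_loss(actor_logits, action_logits, actor_gt, action_gt, w=1.0):
    # Joint objective: sum of per-pixel cross-entropies for both tasks.
    return (F.cross_entropy(actor_logits, actor_gt)
            + w * F.cross_entropy(action_logits, action_gt))

# Toy usage with random data (batch of 2 RGB frames, 64x64 pixels).
model = JointActorActionNet()
frames = torch.randn(2, 3, 64, 64)
actor_gt = torch.randint(0, 8, (2, 64, 64))    # per-pixel actor labels
action_gt = torch.randint(0, 10, (2, 64, 64))  # per-pixel action labels
actor_logits, action_logits = model(frames)
loss = multitask_loss(actor_logits, action_logits, actor_gt, action_gt)
loss.backward()
```

The multitask element is that both heads backpropagate through the same shared features, which is one way a jointly learned feature space can transfer to unseen actor-action pairs, as in the zero-shot setting mentioned above.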

Citation (APA)

Ji, J., Buch, S., Soto, A., & Niebles, J. C. (2018). End-to-End Joint Semantic Segmentation of Actors and Actions in Video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11208 LNCS, pp. 734–749). Springer Verlag. https://doi.org/10.1007/978-3-030-01225-0_43
