There are many realistic applications of activity recognition where the set of potential activity descriptions is combinatorially large. This makes end-to-end supervised training of a recognition system impractical, since no feasible training set can cover the entire label set. In this paper, we present an approach to fine-grained recognition that models activities as compositions of dynamic action signatures. This compositional approach allows us to reframe fine-grained recognition as zero-shot activity recognition, where a detector is composed “on the fly” from simple first-principles state machines supported by deep-learned components. We evaluate our method on the Olympic Sports and UCF101 datasets, where our model establishes a new state of the art under multiple experimental paradigms. We also extend this method to form a unique framework for zero-shot joint segmentation and classification of activities in video, and we demonstrate the first results in zero-shot decoding of complex action sequences on a widely used surgical dataset. Lastly, we show that we can use off-the-shelf object detectors to recognize activities in completely de novo settings with no additional training.
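To make the compositional idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: an activity detector assembled "on the fly" as a simple state machine over per-frame signature predicates (e.g., attributes supplied by off-the-shelf object detectors). The predicate names and the example activity are hypothetical placeholders.

```python
# Illustrative sketch (assumption, not the paper's code): a zero-shot activity
# detector composed from a state machine over per-frame signature predicates.
from dataclasses import dataclass
from typing import Callable, Dict, List

# A frame is summarized by boolean "signature" predicates, e.g. derived from
# off-the-shelf detector outputs. Predicate names here are hypothetical.
Frame = Dict[str, bool]


@dataclass
class StateMachineDetector:
    """Recognizes an activity as an ordered sequence of signature predicates."""
    name: str
    stages: List[Callable[[Frame], bool]]  # one predicate per stage

    def detect(self, frames: List[Frame]) -> bool:
        stage = 0
        for frame in frames:
            # Advance when the current stage's predicate fires in this frame.
            if stage < len(self.stages) and self.stages[stage](frame):
                stage += 1
        # The activity is recognized if all stages were observed in order.
        return stage == len(self.stages)


# Compose a hypothetical "pick up object" detector on the fly from two predicates.
pick_up = StateMachineDetector(
    name="pick_up_object",
    stages=[
        lambda f: f.get("hand_near_object", False),
        lambda f: f.get("object_lifted", False),
    ],
)

frames = [
    {"hand_near_object": False, "object_lifted": False},
    {"hand_near_object": True, "object_lifted": False},
    {"hand_near_object": False, "object_lifted": True},
]
print(pick_up.detect(frames))  # True: both stages observed in order
```

Because the detector is specified declaratively by its stages, a new activity label only requires writing a new predicate sequence, which is what makes the zero-shot framing possible under this view.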
Kim, T. S., Jones, J., Peven, M., Xiao, Z., Bai, J., Zhang, Y., … Hager, G. D. (2021). DASZL: Dynamic Action Signatures for Zero-shot Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 3A, pp. 1817–1826). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i3.16276