Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve temporal segments from untrimmed videos that are likely to contain actions. A comprehensive evaluation indicates that our approach outperforms previous work on a large-scale action benchmark, runs at 134 FPS, making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e., to retrieve good-quality temporal proposals of actions unseen in training.
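To make the high-level description concrete, the following is a minimal sketch (not the authors' released implementation) of how a DAPs-style proposal network might couple precomputed visual features with memory cells (an LSTM) to emit multiple scored segment candidates per sliding window; the feature dimensionality, hidden size, and number of proposals are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DAPsSketch(nn.Module):
    """Illustrative DAPs-style proposal network: an LSTM encodes the visual
    features of one sliding window and predicts K candidate segments together
    with confidence scores. All sizes below are assumptions for illustration."""

    def __init__(self, feat_dim=500, hidden_dim=256, num_proposals=64):
        super().__init__()
        self.num_proposals = num_proposals
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.loc_head = nn.Linear(hidden_dim, num_proposals * 2)   # (start, end) per proposal
        self.conf_head = nn.Linear(hidden_dim, num_proposals)      # actionness confidence

    def forward(self, window_feats):
        # window_feats: (batch, timesteps, feat_dim) features of one video window
        _, (h_n, _) = self.lstm(window_feats)
        h = h_n[-1]                                   # last hidden state summarizes the window
        locations = self.loc_head(h).view(-1, self.num_proposals, 2)
        scores = torch.sigmoid(self.conf_head(h))     # (batch, K) proposal confidences
        return locations, scores

# Example: score one window of 32 timesteps of 500-d features.
model = DAPsSketch()
locs, confs = model(torch.randn(1, 32, 500))
```

Scoring fixed-length windows with a single recurrent pass, rather than exhaustively evaluating every candidate segment, is what allows this style of model to process long untrimmed videos quickly.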