Egocentric Activity Recognition and Localization on a 3D Map

Abstract

Given a video captured from a first-person perspective together with the context of the 3D environment in which it was recorded, can we recognize what the person is doing and identify where the action occurs in 3D space? We address this challenging problem of jointly recognizing and localizing the actions of a mobile user on a known 3D map from egocentric videos. To this end, we propose a novel deep probabilistic model. Our model takes as inputs a Hierarchical Volumetric Representation (HVR) of the 3D environment and an egocentric video, infers the 3D action location as a latent variable, and recognizes the action based on the video and contextual cues surrounding its potential locations. To evaluate our model, we conduct extensive experiments on a subset of the Ego4D dataset, which captures both naturalistic human actions and photo-realistic 3D environment reconstructions. Our method demonstrates strong results on both action recognition and 3D action localization across seen and unseen environments. We believe our work points to an exciting research direction at the intersection of egocentric vision and 3D scene understanding.
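To make the latent-variable formulation concrete, below is a minimal PyTorch sketch of the core idea as described in the abstract: treat the 3D action location l as a latent variable over K candidate locations drawn from the environment representation, and marginalize it out when predicting the action a, i.e. p(a | video, env) = Σ_l p(a | video, env_l) · p(l | video, env). This is not the authors' implementation; the class name, feature dimensions, and the assumption that video and per-location features are already encoded are all illustrative.

```python
# Hypothetical sketch of latent-location marginalization; all names and
# shapes are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class LatentLocationActionModel(nn.Module):
    def __init__(self, num_actions: int, feat_dim: int = 256):
        super().__init__()
        self.loc_scorer = nn.Linear(2 * feat_dim, 1)          # scores each candidate 3D location
        self.act_head = nn.Linear(2 * feat_dim, num_actions)  # per-location action logits

    def forward(self, video_feat: torch.Tensor, loc_feats: torch.Tensor):
        # video_feat: (B, F) pooled egocentric video features (from some video backbone)
        # loc_feats:  (B, K, F) contextual features of K candidate 3D locations
        #             (e.g., cells of a volumetric environment representation)
        B, K, F = loc_feats.shape
        v = video_feat.unsqueeze(1).expand(B, K, F)            # broadcast video to each location
        pair = torch.cat([v, loc_feats], dim=-1)               # (B, K, 2F) video-location pairs
        p_loc = self.loc_scorer(pair).squeeze(-1).softmax(-1)  # (B, K): p(l | video, env)
        p_act = self.act_head(pair).softmax(-1)                # (B, K, A): p(a | video, env, l)
        p_action = (p_loc.unsqueeze(-1) * p_act).sum(dim=1)    # (B, A): marginalize over l
        return p_action, p_loc
```

Under this sketch, the action posterior can be trained with a standard cross-entropy loss on the marginal p(a | video, env), while p_loc provides the 3D action localization as a distribution over candidate locations.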

Cite

APA

Liu, M., Ma, L., Somasundaram, K., Li, Y., Grauman, K., Rehg, J. M., & Li, C. (2022). Egocentric Activity Recognition and Localization on a 3D Map. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13673 LNCS, pp. 621–638). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19778-9_36
