Egocentric visual event classification with location-based priors


Abstract

We present a method for visual classification of actions and events captured from an egocentric point of view. The method tackles the challenge of a moving camera by creating deformable graph models for classification of actions. Action models are learned from low resolution, roughly stabilized difference images acquired using a single monocular camera. In parallel, raw images from the camera are used to estimate the user's location using a visual Simultaneous Localization and Mapping (SLAM) system. Action-location priors, learned using a labeled set of locations, further aid action classification and bring events into context. We present results on a dataset collected within a cluttered environment, consisting of routine manipulations performed on objects without tags.
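The abstract only sketches how the location cue and the appearance classifier interact. One plausible reading of the fusion step is Bayesian: treat the learned action-location priors as P(action | location) and multiply them with the classifier's per-action likelihoods. The sketch below illustrates that idea only; the action and location names, the prior table, and the fuse helper are all hypothetical and are not taken from the paper.

```python
# A minimal sketch of combining an action classifier's scores with
# location-conditioned action priors, as suggested by the abstract.
# All names and numbers are hypothetical illustrations, not the
# paper's actual model or values.

import numpy as np

ACTIONS = ["pour", "stir", "open_door"]
LOCATIONS = ["sink", "stove", "doorway"]

# Hypothetical prior table P(action | location), learned from a
# labeled set of locations. Rows: locations; columns: actions.
ACTION_LOCATION_PRIOR = np.array([
    [0.6, 0.3, 0.1],   # sink
    [0.3, 0.6, 0.1],   # stove
    [0.1, 0.1, 0.8],   # doorway
])

def fuse(action_likelihoods: np.ndarray, location_idx: int) -> np.ndarray:
    """Posterior over actions: P(a | obs, loc) is proportional to
    P(obs | a) * P(a | loc), renormalized to sum to 1."""
    posterior = action_likelihoods * ACTION_LOCATION_PRIOR[location_idx]
    return posterior / posterior.sum()

# Example: the appearance-based classifier alone is ambiguous between
# "pour" and "stir", but the SLAM-estimated location (stove) breaks the tie.
likelihoods = np.array([0.45, 0.45, 0.10])
print(fuse(likelihoods, LOCATIONS.index("stove")))  # favours "stir"
```

Under this reading, the SLAM-derived location acts as context that re-weights otherwise ambiguous appearance evidence, consistent with the abstract's claim that the priors "bring events into context".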

Citation (APA)

Sundaram, S., & Mayol-Cuevas, W. W. (2010). Egocentric visual event classification with location-based priors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6454 LNCS, pp. 596–605). https://doi.org/10.1007/978-3-642-17274-8_58
