Head motion signatures from egocentric videos

Abstract

The proliferation of surveillance cameras has created new privacy concerns, as people are captured daily without explicit consent and the video is kept in databases for a very long time. With the increasing popularity of wearable cameras like Google Glass, the problem is set to grow substantially. An important computer vision task is to enable a person (the “subject”) to query the video database (the “observer”) as to whether he or she has been captured on video. Following a positive answer, the subject may request a copy of the video, or ask to be “forgotten” by having the video erased from the database. Such queries should possess two properties: (i) the query should not reveal any more information about the subject, which would further breach their privacy; (ii) the query should certify that the subject is indeed the captured person before the video is sent to them or erased. This paper presents a possible solution for the case where the subject has a head-mounted camera, e.g. Google Glass. We propose to create a unique signature, based on the pattern of head motion, that can verify that the subject is indeed the person seen in a video. Unlike traditional biometric methods (e.g., face or gait recognition), the proposed signature is temporally volatile and can identify the subject only at a particular time. It is of no use at any other place or time.
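The abstract does not spell out how the signature is computed or matched, so the following is only a minimal sketch under assumptions of my own: that the signature is a per-frame head-displacement series (e.g., derived from the egocentric camera's global motion on the subject side, and from a tracked head position on the observer side), and that matching is done by sliding normalized cross-correlation. All names and the synthetic data are illustrative, not the authors' implementation.

```python
# Hedged sketch of head-motion signature matching; NOT the paper's method.
import numpy as np

def displacement_signature(positions):
    """Per-frame displacement (first difference) of a 1-D position series,
    e.g. the x-coordinate of a tracked head, or the egocentric global motion."""
    return np.diff(np.asarray(positions, dtype=float))

def match_score(clip_sig, track_sig):
    """Slide the subject's short clip signature over the observer's longer
    track signature; return the best Pearson correlation and its offset."""
    m = len(clip_sig)
    c0 = clip_sig - clip_sig.mean()
    best, best_off = -1.0, 0
    for off in range(len(track_sig) - m + 1):
        w = track_sig[off:off + m]
        # Pearson correlation between the clip and this window of the track.
        c = np.dot(c0, w - w.mean()) / ((clip_sig.std() + 1e-8) * (w.std() + 1e-8) * m)
        if c > best:
            best, best_off = c, off
    return best, best_off

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic observer-side head track: x-coordinate over 300 frames.
    head_x = np.cumsum(rng.normal(0.0, 2.0, 300))
    track = displacement_signature(head_x)
    # True subject: an egocentric clip covering frames 120..220 of that track
    # (same motion pattern plus noise). Impostor: an unrelated clip.
    subject = displacement_signature(head_x[120:220] + rng.normal(0.0, 0.5, 100))
    impostor = displacement_signature(np.cumsum(rng.normal(0.0, 2.0, 100)))
    print("subject :", match_score(subject, track))   # high score, offset ~120
    print("impostor:", match_score(impostor, track))  # markedly lower score
```

Because the score depends on motion in one specific time window, a stored clip can only match the interval in which it was recorded, which is consistent with the abstract's claim that the signature is temporally volatile.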

Citation (APA)

Poleg, Y., & Arora, C. (2015). Head motion signatures from egocentric videos. In Lecture Notes in Computer Science (Vol. 9005, pp. 315–329). Springer. https://doi.org/10.1007/978-3-319-16811-1_21
