Object discovery using CNN features in egocentric videos


Abstract

Photo/video-based lifelogging devices are spreading faster every day. This growth offers an opportunity to develop methods for extracting meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Operating on the egocentric video sequence acquired by the camera, the method uses both appearance features extracted by means of a deep convolutional neural network and an object refill methodology, which together allow objects to be discovered even when they appear only rarely in the collection of images. We validate our method on a sequence of 1,000 egocentric daily images and obtain an F-measure of 0.5, which is 0.17 higher than the state-of-the-art approach.
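The abstract names two ingredients: CNN-based appearance features and an object refill step. As a rough illustration only, and not the authors' pipeline, the sketch below extracts a descriptor per egocentric frame with a pretrained ResNet-18 (an assumed backbone) and groups frames into candidate object clusters with k-means; the frame folder, cluster count, and choice of clustering algorithm are hypothetical.

```python
# Illustrative sketch (not the method from the paper): use a pretrained CNN as
# a feature extractor for egocentric frames, then cluster the descriptors so
# that each cluster is a candidate "discovered object".
from pathlib import Path

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained CNN used purely as a feature extractor (classification head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cnn_features(image_paths):
    """Return one CNN descriptor per image as a NumPy array."""
    feats = []
    for p in image_paths:
        img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(img).squeeze(0))
    return torch.stack(feats).numpy()

# Hypothetical input folder and cluster count, for demonstration only.
frames = sorted(Path("egocentric_frames").glob("*.jpg"))
features = cnn_features(frames)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(features)
```

In the spirit of the refill idea, sparsely populated clusters could afterwards be grown by pulling in visually similar frames missed on the first pass, so that objects with few appearances are still recovered; the paper's actual procedure should be consulted for the details.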

Citation (APA)

Bolaños, M., Garolera, M., & Radeva, P. (2015). Object discovery using CNN features in egocentric videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9117, pp. 67–74). Springer Verlag. https://doi.org/10.1007/978-3-319-19390-8_8
