Visual Memorability for Robotic Interestingness via Unsupervised Online Learning

Abstract

In this paper, we explore the problem of interesting scene prediction for mobile robots. This area is currently underexplored but is crucial for many practical applications such as autonomous exploration and decision making. Inspired by industrial demands, we first propose a novel translation-invariant visual memory for recalling and identifying interesting scenes, and then design a three-stage architecture of long-term, short-term, and online learning. This enables our system to acquire human-like experience, environmental knowledge, and online adaptation, respectively. Our approach achieves much higher accuracy than state-of-the-art algorithms on challenging robotic interestingness datasets.
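The core mechanism named in the abstract, a visual memory that is read to recall scenes and written online as the robot encounters new ones, can be illustrated with a minimal, hypothetical PyTorch sketch. The class name VisualMemory, the pooled attention-based reading, and the nearest-slot update rule below are assumptions made for exposition, not the paper's released implementation.

```python
# Hypothetical sketch: an attention-readable, online-writable feature memory.
# Names and update rule are illustrative, not taken from the paper's code.
import torch
import torch.nn.functional as F


class VisualMemory(torch.nn.Module):
    """A small bank of feature slots, standing in for the paper's
    translation-invariant visual memory."""

    def __init__(self, num_slots: int = 128, feat_dim: int = 512):
        super().__init__()
        # The memory is a buffer, not a parameter: it is updated by an
        # online rule rather than by backpropagation.
        self.register_buffer("memory", 0.01 * torch.randn(num_slots, feat_dim))

    def read(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) convolutional features of the current frame.
        # Global average pooling discards spatial location, a crude way to
        # make the query translation-invariant before attending over slots.
        q = feat_map.mean(dim=(2, 3))                    # (B, C)
        attn = F.softmax(q @ self.memory.t(), dim=-1)    # (B, num_slots)
        return attn @ self.memory                        # (B, C) recalled feature

    def write(self, feat_map: torch.Tensor, rate: float = 0.1) -> None:
        # Online adaptation step: nudge the most similar slot toward the new
        # observation so recently seen scenes become unsurprising.
        f = feat_map.mean(dim=(2, 3))                    # (B, C)
        idx = (f @ self.memory.t()).argmax(dim=-1)       # (B,)
        for b, i in enumerate(idx.tolist()):
            self.memory[i] = (1.0 - rate) * self.memory[i] + rate * f[b]


def interestingness(feat_map: torch.Tensor, memory: VisualMemory) -> torch.Tensor:
    # Score a frame by how poorly the memory recalls it: scenes that cannot
    # be reconstructed are treated as novel, hence more interesting.
    recalled = memory.read(feat_map)
    observed = feat_map.mean(dim=(2, 3))
    return ((recalled - observed) ** 2).mean(dim=-1)     # (B,) per-frame score
```

In such a sketch, a frozen backbone would supply feat_map, and a deployment loop would alternate interestingness scoring with memory.write() to realize the online-adaptation stage; the long-term and short-term stages described in the abstract would shape the features feeding this memory. The paper's actual scoring and update rules may differ.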

Citation (APA)

Wang, C., Wang, W., Qiu, Y., Hu, Y., & Scherer, S. (2020). Visual Memorability for Robotic Interestingness via Unsupervised Online Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12347 LNCS, pp. 52–68). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58536-5_4
