NLOOK: A computational attention model for robot vision

Abstract

Computational models of visual attention, originally proposed as cognitive models of human attention, are now used as front ends to robotic vision systems such as automatic object recognition and landmark detection. These applications, however, have requirements that differ from those the models were originally designed for. In particular, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image, such as in-plane translations, rotations, reflections, and scalings, and it should select fixation points in scale as well as in position. In this paper a new visual attention model, called NLOOK, is proposed. The model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known, publicly available visual attention models, NVT and SAFE. Moreover, NLOOK selects more accurate fixations than the other models, and it can also select the scale of each fixation. The proposed model is therefore a good front end for robot vision systems.
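
As an illustrative aside (NLOOK's actual architecture is described in the paper itself), the sketch below shows, under broad assumptions, how a generic center-surround saliency front end can pick a fixation's position and scale from a Gaussian pyramid, i.e., the idea of selecting fixations "in scale as well as position". The helper names (`saliency_pyramid`, `best_fixation`), the blur kernel sizes, and the file name `scene.png` are all illustrative assumptions, not the authors' code.

```python
# Minimal sketch of scale-aware fixation selection (NOT the NLOOK
# algorithm): center-surround contrast at each level of a Gaussian
# pyramid, then pick the single most salient (level, row, col).
import numpy as np
import cv2

def saliency_pyramid(gray, levels=5):
    """Center-surround contrast map at each pyramid level (illustrative)."""
    maps = []
    img = gray.astype(np.float32)
    for _ in range(levels):
        center = cv2.GaussianBlur(img, (3, 3), 1.0)     # fine scale
        surround = cv2.GaussianBlur(img, (15, 15), 5.0)  # coarse scale
        maps.append(np.abs(center - surround))           # local contrast
        img = cv2.pyrDown(img)                           # next octave
    return maps

def best_fixation(maps):
    """Return (level, row, col) of the globally strongest response."""
    best = max(
        ((lvl, *np.unravel_index(np.argmax(m), m.shape), float(m.max()))
         for lvl, m in enumerate(maps)),
        key=lambda t: t[-1],
    )
    lvl, r, c, _ = best
    scale = 2 ** lvl                  # map back to input-image coordinates
    return lvl, int(r) * scale, int(c) * scale

# "scene.png" is a placeholder path for any grayscale test image.
gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
level, row, col = best_fixation(saliency_pyramid(gray))
print(f"fixation at ({row}, {col}), pyramid level {level}")
```

Because the winning response may come from any pyramid level, the selected fixation carries an intrinsic scale (here, the octave `level`) in addition to its image position; a model that is robust to 2D similarity transforms should return roughly corresponding fixations when the input is translated, rotated, reflected, or rescaled.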

Cite (APA)

Heinen, M. R., & Engel, P. M. (2009). Nlook: A computational attention model for robot vision. Journal of the Brazilian Computer Society, 15(3), 3–17. https://doi.org/10.1007/bf03194502
