Although biomimetic autonomous robotics draws on the massively parallel architecture of the brain, a key issue is how to organize behaviour in time: the distributed representation of sensory information must be processed coherently to generate relevant actions. In the visual domain, we propose a model of visual exploration of a scene by means of localized computations in neural populations, whose architecture allows a coherent behaviour, the sequential scanning of salient stimuli, to emerge. The model has been implemented on a real robotic platform exploring a moving and noisy scene containing several identical targets. © 2005 Springer-Verlag Berlin/Heidelberg.
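The sequential scanning behaviour described above can be illustrated with a generic winner-take-all mechanism combined with inhibition of return over a saliency map. This is a minimal sketch of that classic attention scheme, not the paper's distributed neural-population model; the function name, grid size, and `inhibition_radius` parameter are illustrative assumptions.

```python
import numpy as np

def scan_salient_stimuli(saliency, n_fixations=3, inhibition_radius=1):
    """Sequentially select the most salient locations, suppressing each
    visited location so attention moves on to the next target.

    A generic winner-take-all + inhibition-of-return sketch, NOT the
    distributed neural-field model described in the paper.
    """
    sal = saliency.astype(float).copy()
    fixations = []
    for _ in range(n_fixations):
        # Winner-take-all: pick the currently most salient location.
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        fixations.append((y, x))
        # Inhibition of return: suppress a neighbourhood of the winner
        # so the next iteration selects a different stimulus.
        y0, y1 = max(0, y - inhibition_radius), y + inhibition_radius + 1
        x0, x1 = max(0, x - inhibition_radius), x + inhibition_radius + 1
        sal[y0:y1, x0:x1] = -np.inf
    return fixations

# Three identical-strength targets, as in the robotic experiment's scene.
scene = np.zeros((5, 5))
scene[0, 0] = scene[2, 4] = scene[4, 2] = 1.0
print(scan_salient_stimuli(scene))  # visits all three targets in turn
```

With several identical targets, ties are broken by scan order, and inhibition of return guarantees each target is visited once rather than attention locking onto a single winner.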
Vitay, J., Rougier, N. P., & Alexandre, F. (2005). A distributed model of spatial visual attention. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3575 LNAI, pp. 54–72). https://doi.org/10.1007/11521082_4