From human eye fixation to human-like autonomous artificial vision

Abstract

Matching the capabilities of natural vision is an appealing goal for artificial vision systems, especially in robotics applications, where visual perception of the surrounding environment is a key requirement. Focusing on the visual attention problem in autonomous visual perception, we propose a model of artificial visual attention that combines a statistical formulation of visual saliency with genetic optimization. Computationally, the model relies on center-surround statistical feature calculations and a nonlinear fusion of the resulting maps. Its statistical foundation and bottom-up nature allow it to be used without prior information while resting on a comprehensive theoretical basis. The eye-fixation paradigm was adopted as the evaluation benchmark, using the MIT1003 and Toronto image datasets for experimental validation. The reported results show scores that challenge the current best algorithms in this field, with faster execution speed than competing approaches.
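The paper's exact statistical formulation and genetic-optimization stage are not reproduced here, but the core idea it names — center-surround feature maps fused nonlinearly into a single saliency map — can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' algorithm: the scale radii, the squaring nonlinearity, and the max-normalized summation are all assumptions chosen for simplicity.

```python
import numpy as np

def box_mean(img, r):
    """Local mean over a (2r+1)x(2r+1) window, computed via an integral image."""
    k = 2 * r + 1
    pad = np.pad(img.astype(float), r, mode="edge")
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    return (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / (k * k)

def saliency(img, scales=((1, 5), (2, 9), (4, 17))):
    """Center-surround saliency sketch: |center mean - surround mean| at several
    (center radius, surround radius) scales, fused nonlinearly (squared, then
    max-normalized and summed). Scales here are illustrative, not the paper's."""
    maps = []
    for rc, rs in scales:
        cs = np.abs(box_mean(img, rc) - box_mean(img, rs))
        cs = cs ** 2  # nonlinear emphasis of strong local contrasts
        m = cs.max()
        maps.append(cs / m if m > 0 else cs)
    fused = sum(maps)
    m = fused.max()
    return fused / m if m > 0 else fused
```

For example, a bright patch on a uniform dark background yields high saliency values inside and around the patch and near-zero values far from it, which is the behavior an eye-fixation benchmark such as MIT1003 rewards.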

Citation (APA)

Kachurka, V., Madani, K., Sabourin, C., & Golovko, V. (2015). From human eye fixation to human-like autonomous artificial vision. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9094, pp. 171–184). Springer Verlag. https://doi.org/10.1007/978-3-319-19258-1_15
