Bayesian fusion of auditory and visual spatial cues during fixation and saccade in humanoid robot


Abstract

In this paper, Bayesian fusion of auditory and visual spatial cues is implemented on a humanoid robot to improve localization accuracy when an audiovisual stimulus is presented. The performance of auditory and visual localization was tested under two conditions: fixation and saccade. The experiments show that saccades greatly reduce the accuracy of auditory localization on the humanoid robot, and that the Bayesian model becomes unreliable when the underlying auditory and visual estimates are unreliable, particularly during saccades. Localization at saccade onset and during changes in the direction of motion was excluded from the analysis, and only the azimuth position was considered.
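The fusion rule in this line of work is typically the standard precision-weighted combination of independent Gaussian cue estimates, in which each modality is weighted by its inverse variance. The sketch below is a minimal illustration under that assumption; the function name and the example values are hypothetical, not taken from the paper.

```python
def fuse_gaussian_cues(mu_a, var_a, mu_v, var_v):
    """Precision-weighted Bayesian fusion of two Gaussian azimuth estimates.

    Each cue contributes in proportion to its precision (inverse variance),
    so a less reliable cue (e.g., audition during a saccade) is down-weighted.
    """
    w_a = 1.0 / var_a  # precision of the auditory estimate
    w_v = 1.0 / var_v  # precision of the visual estimate
    mu_fused = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var_fused = 1.0 / (w_a + w_v)  # fused variance <= min(var_a, var_v)
    return mu_fused, var_fused

# Hypothetical example: the visual cue is more precise than the auditory cue,
# so the fused azimuth lies closer to the visual estimate.
mu, var = fuse_gaussian_cues(mu_a=12.0, var_a=25.0, mu_v=8.0, var_v=4.0)
print(f"fused azimuth = {mu:.2f} deg, variance = {var:.2f}")
```

Note that this rule only guarantees a more precise estimate when each unimodal estimate is unbiased; if a cue is itself systematically off (as auditory localization is during saccades), the fused estimate inherits that error, which is consistent with the paper's observation that the model becomes unreliable in the saccade condition.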

Citation (APA)

Wong, W. K., Neoh, T. M., Loo, C. K., & Ong, C. P. (2009). Bayesian fusion of auditory and visual spatial cues during fixation and saccade in humanoid robot. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5506 LNCS, pp. 1103–1109). https://doi.org/10.1007/978-3-642-02490-0_134
