Neuromorphic audio-visual sensor fusion on a sound-localizing robot

18 citations · 48 readers on Mendeley

Abstract

This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive sound localization algorithm based on the interaural time difference (ITD). After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and conduct an experiment to test the effectiveness of matching an audio event with its corresponding visual event based on their onset times. Despite the simplicity of this method and the large number of false visual events in the background, a correct match was made 75% of the time during the experiment. © 2012 Chan, Jin and van Schaik.
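To make the two techniques in the abstract concrete, here is a minimal sketch of (a) mapping an ITD to an azimuth with the standard far-field free-field model, and (b) binding audio and visual events by onset time. This is not the paper's method: the authors learn an adaptive ITD-to-azimuth mapping through self-motion and visual feedback, whereas the closed-form `itd_to_azimuth` below, the microphone spacing, and the `max_lag` matching window are all illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C (assumed)
MIC_SPACING = 0.15      # m between the two cochlea microphones (assumed)

def itd_to_azimuth(itd_s, mic_spacing=MIC_SPACING, c=SPEED_OF_SOUND):
    """Map an interaural time difference (seconds) to azimuth (degrees)
    using the far-field free-field model azimuth = arcsin(ITD * c / d).
    The paper's adaptive, learned mapping would replace this closed form."""
    # Clamp to the physically realizable range before taking arcsin.
    x = np.clip(itd_s * c / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(x))

def match_av_events(audio_onsets, visual_onsets, max_lag=0.1):
    """Bind each audio event to the visual event with the closest onset,
    provided the two onsets fall within max_lag seconds of each other.
    Returns a list of (audio_onset, visual_onset) pairs."""
    matches = []
    for a in audio_onsets:
        candidates = [v for v in visual_onsets if abs(v - a) <= max_lag]
        if candidates:
            matches.append((a, min(candidates, key=lambda v: abs(v - a))))
    return matches

if __name__ == "__main__":
    # A 0.3 ms ITD maps to roughly 43° with the assumed geometry.
    print(itd_to_azimuth(0.0003))
    # One true AV pair plus distractor visual events in the background.
    print(match_av_events([1.00, 2.50], [1.02, 1.80, 2.48]))
```

Onset-time matching of this kind is deliberately simple; as the abstract notes, even with many false visual events in the background it binds the correct pair about 75% of the time in the reported experiment.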

Citation (APA)

Chan, V. Y. S., Jin, C. T., & van Schaik, A. (2012). Neuromorphic audio-visual sensor fusion on a sound-localizing robot. Frontiers in Neuroscience, 6, 21. https://doi.org/10.3389/fnins.2012.00021
