Self-adaptive context aware audio localization for robots using parallel cerebellar models

Abstract

An audio sensor system is presented that uses multiple cerebellar models to determine the acoustic environment in which a robot is operating, allowing the robot to select an appropriate model to calibrate its audio-motor map for the detected environment. There are two key areas of novelty. The first is the application of cerebellar models in a new context, namely auditory sensory input. The second is applying the multiple-models approach, originally developed for motor control, to a sensory problem rather than a motor problem. The use of the adaptive filter model of the cerebellum in a variety of robotics applications has demonstrated the utility of the so-called cerebellar chip. This paper combines the notion of cerebellar calibration of a distorted audio-motor map with the use of multiple parallel models to predict the context (acoustic environment) within which the robot is operating. The system correctly predicted seven different acoustic contexts in almost 70% of the cases tested.
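The multiple-models idea described above can be illustrated with a minimal sketch. Here, a simple LMS (least-mean-squares) adaptive filter stands in for each cerebellar model, and the context is predicted by selecting the model with the lowest prediction error. The class names, feature dimensions, and the two example environments (`anechoic`, `reverberant`) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class LMSFilter:
    """Least-mean-squares adaptive filter: a simple stand-in for the
    adaptive-filter model of the cerebellum used in the paper."""
    def __init__(self, n_taps, lr=0.01):
        self.w = np.zeros(n_taps)  # filter weights, initially zero
        self.lr = lr               # learning rate

    def predict(self, x):
        return self.w @ x

    def train(self, x, target):
        # Delta-rule update: move weights to reduce prediction error.
        error = target - self.predict(x)
        self.w += self.lr * error * x
        return error

def select_context(models, x, target):
    """Multiple-models context selection: each model (one per acoustic
    environment) predicts the target; the context whose model has the
    lowest squared prediction error is chosen."""
    errors = {name: (target - m.predict(x)) ** 2 for name, m in models.items()}
    return min(errors, key=errors.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical ground-truth audio mappings for two environments.
    w_true = {"anechoic":    np.array([1.0, 0.0, 0.0, 0.0]),
              "reverberant": np.array([0.5, 0.3, 0.1, 0.05])}
    models = {name: LMSFilter(4) for name in w_true}

    # Train each model on data from its own environment.
    for _ in range(2000):
        for name, m in models.items():
            x = rng.standard_normal(4)
            m.train(x, w_true[name] @ x)

    # A sample generated in the reverberant environment should be
    # attributed to the reverberant model.
    x = rng.standard_normal(4)
    print(select_context(models, x, w_true["reverberant"] @ x))
```

The selection rule mirrors the paper's premise that the best-calibrated model for the current acoustic environment will produce the smallest sensory prediction error, though the real system operates on audio-motor map data rather than synthetic linear mappings.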

Citation (APA)

Baxendale, M. D., Pearson, M. J., Nibouche, M., Secco, E. L., & Pipe, A. G. (2017). Self-adaptive context aware audio localization for robots using parallel cerebellar models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10454 LNAI, pp. 76–88). Springer Verlag. https://doi.org/10.1007/978-3-319-64107-2_6
