A multi-robot cognitive sharing system using audio and video sensors


Abstract

In this paper, we present a multi-robot system that integrates a vision-based navigation system with a non-speech-based audio system for the purpose of sorting objects. We use two separate robotic systems, each utilizing a different sensor type: video and audio. We propose combining vision-based sensors with audio-based sensors because a single sensor, such as a video sensor, may not capture the entire environment. Additionally, robots can come to inaccurate conclusions based on their individual observations, so we increase the accuracy of the system by interpreting the input from multiple robots using cognitive sharing. Cooperation of multiple robots using multiple sensors therefore provides a better understanding of the environment. The results show the effectiveness of the multi-robot cognitive sharing system.
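The abstract does not describe how the two robots' observations are actually combined. Purely as an illustration of the general idea, the sketch below fuses a vision robot's and an audio robot's object labels into one shared decision. The names (Observation, share_and_decide) and the confidence-based fusion rule are assumptions made here, not the paper's method.

```python
# Illustrative sketch only: two robots, one with a video sensor and one with an
# audio sensor, each report a label with a confidence, and a shared decision is
# formed by fusing the two reports. The fusion rule is a stand-in assumption,
# not the cognitive-sharing mechanism described in the paper.

from dataclasses import dataclass


@dataclass
class Observation:
    """One robot's local interpretation of an object."""
    label: str         # e.g. "metal", "plastic"
    confidence: float  # value in [0, 1]


def share_and_decide(video_obs: Observation, audio_obs: Observation) -> str:
    """Combine two single-sensor observations into one shared decision.

    If the robots agree, the shared label is accepted; if they disagree,
    the more confident observation wins.
    """
    if video_obs.label == audio_obs.label:
        return video_obs.label
    return (video_obs if video_obs.confidence >= audio_obs.confidence
            else audio_obs).label


if __name__ == "__main__":
    # A video sensor alone might misread a shiny plastic object as metal,
    # while the audio robot's cue disagrees with higher confidence.
    video = Observation(label="metal", confidence=0.55)
    audio = Observation(label="plastic", confidence=0.80)
    print(share_and_decide(video, audio))  # -> "plastic"
```

In this toy form, sharing simply resolves disagreements between single-sensor conclusions, which mirrors the abstract's claim that multiple robots with different sensors yield a more accurate picture of the environment than either sensor alone.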

Citation (APA)

McGibney, D., Morioka, R., Sekiyama, K., Mukai, H., & Fukuda, T. (2014). A multi-robot cognitive sharing system using audio and video sensors. In Springer Tracts in Advanced Robotics (Vol. 104, pp. 397–408). Springer Verlag. https://doi.org/10.1007/978-3-642-55146-8_28
