Multimodal Object Recognition Module for Social Robots

Abstract

Sensor fusion techniques can increase robustness and accuracy over the data provided by isolated sensors. Fusion can be performed at a low level, creating shared data representations from multiple sensory inputs, or at a high level, checking the consistency and similarity of objects reported by different sources. The latter techniques are more prone to discarding perceived objects due to overlap or partial occlusion, but they are usually simpler and more scalable. Hence, they are more adequate when data gathering is the key requirement, safety is not compromised, computational resources may be limited, and it is important to easily incorporate new sensors (e.g., monitoring in smart environments or object recognition for social robots). This paper proposes a novel perception integrator module that uses low-complexity algorithms to implement fusion, tracking and forgetting mechanisms. Its main characteristics are simplicity, adaptability and scalability. The system has been integrated into a social robot and used to achieve multimodal object and person recognition. Experimental results show the adequacy of the solution in terms of detection and recognition rates, integrability into the constrained resources of a robot, and adaptability to different sensors, detection priorities and scenarios.
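The abstract describes high-level fusion as a consistency and similarity check over objects reported by different detectors, combined with tracking and forgetting mechanisms. As a rough illustration only, the Python sketch below shows what such a loop could look like: every class name, threshold and matching rule here is an assumption made for this example and is not taken from the paper itself.

# Illustrative sketch of high-level fusion with tracking and forgetting.
# All names, thresholds and the matching rule are assumptions for this
# example; they do not reproduce the authors' implementation.
import math
import time
from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str         # e.g. "person", "cup"
    position: tuple    # (x, y) in a shared world frame (assumed metres)
    confidence: float  # detector confidence in [0, 1]
    source: str        # sensor/detector that produced it

@dataclass
class TrackedObject:
    label: str
    position: tuple
    confidence: float
    last_seen: float = field(default_factory=time.time)

class PerceptionIntegrator:
    """Match new detections against tracked objects by label consistency
    and spatial proximity, and forget objects that are not re-observed."""

    def __init__(self, match_radius=0.5, decay_rate=0.1, drop_below=0.2):
        self.match_radius = match_radius  # max matching distance (assumed metres)
        self.decay_rate = decay_rate      # confidence lost per second unseen
        self.drop_below = drop_below      # confidence threshold for forgetting
        self.objects = []

    def integrate(self, detections):
        now = time.time()
        for det in detections:
            match = self._find_match(det)
            if match:
                # Fuse: average positions, keep the higher confidence.
                match.position = tuple(
                    (a + b) / 2 for a, b in zip(match.position, det.position))
                match.confidence = max(match.confidence, det.confidence)
                match.last_seen = now
            else:
                # No consistent tracked object: start tracking a new one.
                self.objects.append(TrackedObject(
                    det.label, det.position, det.confidence, now))
        self._forget(now)

    def _find_match(self, det):
        for obj in self.objects:
            if obj.label == det.label and \
               math.dist(obj.position, det.position) <= self.match_radius:
                return obj
        return None

    def _forget(self, now):
        # Forgetting: decay confidence with time unseen, drop stale objects.
        for obj in self.objects:
            obj.confidence -= self.decay_rate * (now - obj.last_seen)
        self.objects = [o for o in self.objects if o.confidence > self.drop_below]

Because matching is done per object at this high level, adding a new sensor only means feeding its detections into integrate() in the shared frame, which is one way to read the scalability claim in the abstract.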

Citation (APA)

Cruces, A., Tudela, A., Romero-Garcés, A., & Bandera, J. P. (2023). Multimodal Object Recognition Module for Social Robots. In Lecture Notes in Networks and Systems (Vol. 590 LNNS, pp. 489–501). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-21062-4_40
