An asynchronous RGB-D sensor fusion framework using Monte-Carlo methods for hand tracking on a mobile robot in crowded environments

Abstract

Gesture recognition for human-robot interaction is a prerequisite for many social robotic tasks. One of the main technical difficulties is hand tracking in crowded and dynamic environments; many existing methods have only been shown to work in clutter-free settings. This paper proposes a sensor-fusion-based hand-tracking algorithm for crowded environments, which is shown to significantly improve the accuracy of existing depth- and RGB-based hand detectors. The main novelties of the proposed method are: a) a Monte-Carlo RGB update process to reduce false positives; b) online skin-colour learning to cope with varying skin colour, clothing and illumination conditions; c) an asynchronous update method that integrates depth and RGB information for real-time applications. Tracking performance is evaluated in a number of controlled scenarios and crowded environments. All datasets used in this work have been made publicly available. © Springer International Publishing 2013.
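
The abstract only summarises the approach, but the asynchronous fusion idea can be illustrated with a minimal particle-filter sketch: depth detections and RGB skin-colour scores each trigger their own weight update whenever they arrive, rather than waiting for a synchronised RGB-D frame pair. The class, method names, noise parameters and likelihood shapes below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed structure, not the paper's code): a particle filter over
# 3-D hand position in which depth and RGB measurements are applied asynchronously.
import numpy as np

rng = np.random.default_rng(0)

class AsyncHandTracker:
    def __init__(self, n_particles=500, process_noise=0.02):
        # Particles are hypothesised hand positions (x, y, z) in metres.
        self.particles = rng.uniform(-1.0, 1.0, size=(n_particles, 3))
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.process_noise = process_noise

    def predict(self, dt):
        # Random-walk motion model: diffuse particles between measurements.
        self.particles += rng.normal(0.0, self.process_noise * dt, self.particles.shape)

    def update_depth(self, detection_xyz, sigma=0.05):
        # Weight particles by proximity to a hand detection from the depth stream.
        d2 = np.sum((self.particles - detection_xyz) ** 2, axis=1)
        self.weights *= np.exp(-d2 / (2.0 * sigma ** 2)) + 1e-12
        self._normalise_and_resample()

    def update_rgb(self, skin_likelihood):
        # Monte-Carlo RGB update: each particle is scored by a skin-colour
        # likelihood (e.g. from an online-learned colour model), applied
        # whenever an RGB frame arrives, independently of the depth stream.
        self.weights *= skin_likelihood(self.particles) + 1e-12
        self._normalise_and_resample()

    def _normalise_and_resample(self):
        self.weights /= self.weights.sum()
        # Resample when the effective sample size drops below half the particle count.
        if 1.0 / np.sum(self.weights ** 2) < 0.5 * len(self.weights):
            idx = rng.choice(len(self.weights), len(self.weights), p=self.weights)
            self.particles = self.particles[idx]
            self.weights.fill(1.0 / len(self.weights))

    def estimate(self):
        # Weighted mean of the particle set as the current hand-position estimate.
        return np.average(self.particles, axis=0, weights=self.weights)
```

In this sketch each sensor callback calls `predict` with its own elapsed time and then the corresponding update, so neither stream blocks the other; the actual fusion, false-positive handling and skin-colour learning in the paper are more involved.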

Citation (APA)

McKeague, S., Liu, J., & Yang, G. Z. (2013). An asynchronous RGB-D sensor fusion framework using Monte-Carlo methods for hand tracking on a mobile robot in crowded environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8239 LNAI, pp. 491–500). https://doi.org/10.1007/978-3-319-02675-6_49
