Multimodal background modeling using RGB-depth features

Abstract

This paper presents a background subtraction method that uses multimodal information, specifically depth and appearance cues, to robustly separate the foreground in dynamic indoor scenes. To this end, RGB-Depth data from a Microsoft Kinect sensor are exploited. We propose an extension of one of the most effective techniques for real-time background modeling: Kernel Density Estimation with the Fast Gauss Transform. Experimental results show that our proposed method deals well with gradual and sudden illumination changes, shadows, and dynamic backgrounds.
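
To make the estimation step concrete, the sketch below shows per-pixel Kernel Density Estimation over joint RGB-D observations, classifying a pixel as foreground when its density under recent background samples falls below a threshold. This is an illustrative approximation only: the paper accelerates the kernel sums with the Fast Gauss Transform, whereas this sketch evaluates them directly, and the bandwidths, threshold, and array shapes are assumed rather than taken from the paper.

    # Minimal sketch of per-pixel KDE background subtraction on RGB-D frames.
    # Not the authors' implementation: no Fast Gauss Transform acceleration here,
    # and the bandwidths/threshold are assumed values for illustration.
    import numpy as np

    def kde_foreground_mask(frame, samples, bandwidth, threshold=1e-4):
        # frame:     (H, W, 4) current observation, 3 color channels + 1 depth channel
        # samples:   (N, H, W, 4) recent background samples per pixel
        # bandwidth: (4,) per-channel Gaussian kernel bandwidths
        # Returns a boolean (H, W) mask, True where the pixel is classified foreground.
        diff = (samples - frame[None]) / bandwidth           # (N, H, W, 4)
        kernel = np.exp(-0.5 * np.sum(diff ** 2, axis=-1))   # Gaussian kernel per sample
        density = kernel.mean(axis=0)                        # KDE estimate per pixel
        return density < threshold                           # low density -> foreground

    # Example usage with synthetic data (assumed shapes and parameters):
    H, W, N = 120, 160, 20
    samples = np.random.rand(N, H, W, 4).astype(np.float32)
    frame = np.random.rand(H, W, 4).astype(np.float32)
    mask = kde_foreground_mask(frame, samples,
                               bandwidth=np.array([0.05, 0.05, 0.05, 0.1]))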

Cite (APA)
Trabelsi, R., Smach, F., Jabri, I., & Bouallegue, A. (2014). Multimodal background modeling using RGB-depth features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8827, pp. 884–892). Springer Verlag. https://doi.org/10.1007/978-3-319-12568-8_107
