Empirical study of audio-visual features fusion for gait recognition

Abstract

The goal of this paper is to evaluate how the fusion of audio and visual features can help in the challenging task of identifying people by their gait (i.e. the way they walk), known as gait recognition. Most previous research on gait recognition has focused on designing visual descriptors, mainly over binary silhouettes, or on building sophisticated machine learning frameworks. However, little attention has been paid to the audio patterns associated with the action of walking. Therefore, we propose and evaluate a multimodal system for gait recognition. The proposed approach is evaluated on the challenging ‘TUM GAID’ dataset, which contains audio recordings in addition to image sequences. The experimental results show that using late fusion to combine two kinds of tracklet-based visual features with audio features improves the state-of-the-art results on the standard experiments defined on the dataset.
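
To make the fusion step concrete, the sketch below illustrates score-level late fusion, in which each modality's classifier produces one score per gallery identity and the per-modality scores are combined by a weighted average before the final decision. This is a minimal illustration under assumed conventions: the function late_fusion, the uniform weights, and the toy scores are hypothetical and do not reproduce the authors' actual features or classifiers.

import numpy as np

def late_fusion(score_list, weights=None):
    # Stack per-modality score vectors into shape (n_modalities, n_identities).
    scores = np.stack(score_list)
    # Default to uniform weighting across modalities (an assumption here).
    if weights is None:
        weights = np.full(len(score_list), 1.0 / len(score_list))
    # Weighted average across modalities, then pick the highest-scoring identity.
    fused = np.average(scores, axis=0, weights=weights)
    return int(np.argmax(fused))

# Hypothetical usage: scores from two visual descriptors and one audio
# descriptor over a gallery of five subjects.
visual_a = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
visual_b = np.array([0.2, 0.5, 0.1, 0.1, 0.1])
audio    = np.array([0.1, 0.3, 0.4, 0.1, 0.1])
print(late_fusion([visual_a, visual_b, audio]))  # predicts identity 1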

Citation (APA)

Castro, F. M., Marín-Jiménez, M. J., & Guil, N. (2015). Empirical study of audio-visual features fusion for gait recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9256, pp. 727–739). Springer Verlag. https://doi.org/10.1007/978-3-319-23192-1_61
