Multiple-activity human body tracking in unconstrained environments

Abstract

We propose a method for human full-body pose tracking from measurements of wearable inertial sensors. Since the data provided by such sensors is sparse, noisy and often ambiguous, we use a compound prior model of feasible human poses to constrain the tracking problem. Our model consists of several low-dimensional, activity-specific motion models and an efficient, sampling-based activity switching mechanism. We restrict the search space for pose tracking by means of manifold learning. Together with the portability of wearable sensors, our method allows us to track human full-body motion in unconstrained environments. In fact, we are able to simultaneously classify the activity a person is performing and estimate the full-body pose. Experiments on movement sequences containing different activities show that our method can seamlessly detect activity switches and precisely reconstruct full-body pose from the data of only six wearable inertial sensors.
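
To make the idea of activity-specific low-dimensional motion models plus an activity switching mechanism more concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' implementation: linear latent models stand in for the learned manifolds, a simple Bayes-filter style weight update stands in for the paper's sampling-based switching, and all dimensions, class names and parameters (6 sensors x 3 channels, 60-D pose, 3-D latent space, the sticky transition prior) are assumed for illustration only.

import numpy as np

# Illustrative sizes only: 6 IMUs x 3 channels, 60-D full-body pose, 3-D latent space.
N_SENSOR, N_POSE, N_LATENT = 18, 60, 3

rng = np.random.default_rng(0)


class ActivityModel:
    """Linear stand-in for one activity-specific low-dimensional motion model."""

    def __init__(self, rng):
        # Hypothetical linear maps from latent coordinates to sensor readings and pose.
        self.to_sensor = rng.standard_normal((N_SENSOR, N_LATENT))
        self.to_pose = rng.standard_normal((N_POSE, N_LATENT))

    def infer_latent(self, sensor):
        # Least-squares projection of the sparse sensor reading into the latent space.
        z, *_ = np.linalg.lstsq(self.to_sensor, sensor, rcond=None)
        return z

    def log_likelihood(self, sensor, z, sigma=0.5):
        # Gaussian measurement model: how well this activity explains the reading.
        resid = sensor - self.to_sensor @ z
        return -0.5 * np.dot(resid, resid) / sigma ** 2

    def reconstruct_pose(self, z):
        # Map the latent point back to a full-body pose estimate.
        return self.to_pose @ z


def track_frame(models, weights, sensor, stay_prob=0.9):
    """One filtering step: update activity weights, pick an activity, reconstruct pose."""
    n = len(models)
    # Sticky transition prior between activities (an assumed value, not from the paper).
    trans = np.full((n, n), (1.0 - stay_prob) / (n - 1))
    np.fill_diagonal(trans, stay_prob)
    predicted = weights @ trans

    zs = [m.infer_latent(sensor) for m in models]
    log_post = np.log(predicted + 1e-12) + np.array(
        [m.log_likelihood(sensor, z) for m, z in zip(models, zs)]
    )
    log_post -= log_post.max()          # stabilise before exponentiating
    weights = np.exp(log_post)
    weights /= weights.sum()

    best = int(np.argmax(weights))
    return weights, best, models[best].reconstruct_pose(zs[best])


# Usage: classify the activity and reconstruct the pose for a synthetic sensor stream.
models = [ActivityModel(rng) for _ in range(3)]       # e.g. walking, sitting, waving
weights = np.full(3, 1.0 / 3.0)
for sensor in rng.standard_normal((5, N_SENSOR)):     # stand-in for real IMU frames
    weights, activity, pose = track_frame(models, weights, sensor)
    print("activity:", activity, "pose dims:", pose.shape)

In this toy version, each activity model scores how well it explains the current sensor frame, the weights implement a soft switch between activities, and the best-scoring model reconstructs the full-body pose; the paper's learned manifolds and sampling-based switching play the analogous roles.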

Cite

APA

Schwarz, L. A., Mateus, D., & Navab, N. (2010). Multiple-activity human body tracking in unconstrained environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6169 LNCS, pp. 192–202). https://doi.org/10.1007/978-3-642-14061-7_19
