Automating expressive locomotion generation


Abstract

This paper introduces a system for expressive locomotion generation that takes as input a set of sample locomotion clips and a motion path. Significantly, the system only requires a single sample of straight-path locomotion for each style modeled and can produce output locomotion for an arbitrary path with arbitrary motion transition points. For efficient locomotion generation, we represent each sample with a loop sequence which encapsulates its key style and utilize these sequences throughout the synthesis process. Several techniques are applied to automate the synthesis: foot-plant detection from unlabeled samples, estimation of an adaptive blending length for a natural style change, and a post-processing step for enhancing the physical realism of the output animation. Compared to previous approaches, the system requires significantly less data and manual labor, while supporting a large range of styles. © 2012 Springer-Verlag Berlin Heidelberg.
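The abstract mentions foot-plant detection from unlabeled samples as one of the automated steps. A common heuristic for this, sketched below, is to flag frames where a foot joint is both close to the ground and nearly stationary; the paper's actual detector may differ, and all names and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch of foot-plant detection by thresholding a foot joint's
# height and vertical speed. This is a generic heuristic, not the paper's
# specific algorithm; thresholds are illustrative.

def detect_foot_plants(heights, dt, height_thresh=0.05, speed_thresh=0.2):
    """Return frame indices where the foot is considered planted.

    heights: per-frame vertical position of the foot joint (metres).
    dt: time between frames (seconds).
    A frame is a plant candidate when the foot is near the ground and
    its frame-to-frame vertical velocity is small.
    """
    plants = []
    for i in range(1, len(heights)):
        speed = abs(heights[i] - heights[i - 1]) / dt
        if heights[i] < height_thresh and speed < speed_thresh:
            plants.append(i)
    return plants

# Synthetic example: foot on the ground for frames 0-9, airborne for
# frames 10-19, back on the ground for frames 20-29 (30 fps).
traj = [0.0] * 10 + [0.1] * 10 + [0.0] * 10
print(detect_foot_plants(traj, dt=1 / 30.0))
```

In a full system, detected plant intervals would then anchor the loop sequences and constrain foot positions during path following to avoid foot skate.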

Citation (APA)

Kim, Y., & Neff, M. (2012). Automating expressive locomotion generation. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7145 LNCS, 48–61. https://doi.org/10.1007/978-3-642-29050-3_5
