We present WalkNet, an interactive, neural-network-based controller for agent walking movements. WalkNet controls the agent’s walking through semantically meaningful high-level factors, providing an interface between the agent and its movements so that the characteristics of the movements can be determined directly by the agent’s internal state. The controlling factors span three dimensions: planning, affect expression, and personal movement signature. WalkNet employs Factored Conditional Restricted Boltzmann Machines (FCRBMs) to learn and generate movements. We train the model on a corpus of motion capture data containing movements from multiple human subjects, multiple affect expressions, and multiple walking trajectories. Generation runs in real time with a small memory footprint. WalkNet can be used both in interactive scenarios, where it is controlled by a human user, and in scenarios where it is driven by another AI component.
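The core mechanism in an FCRBM is multiplicative gating: the visible-to-hidden weights are not stored as a single matrix but factored into low-rank components whose per-factor gains are modulated by control labels (here, the planning, affect, and signature factors). The following is a minimal sketch of that gating, assuming illustrative sizes and a one-hot control label; all names and dimensions are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative only): pose dims, hidden units,
# control labels, and the number of factors in the low-rank decomposition.
n_vis, n_hid, n_lab, n_fac = 12, 8, 3, 6

# Factored weight components: visible-to-factor, hidden-to-factor,
# and label-to-factor matrices.
A = rng.normal(0.0, 0.1, (n_vis, n_fac))
B = rng.normal(0.0, 0.1, (n_hid, n_fac))
C = rng.normal(0.0, 0.1, (n_lab, n_fac))

def gated_weights(y):
    """Effective visible-hidden weights gated by control labels y:
    W[i, j] = sum_f A[i, f] * B[j, f] * (C.T @ y)[f]."""
    gate = C.T @ y          # per-factor gains induced by the labels
    return (A * gate) @ B.T  # broadcast the gains over factors, then contract

# A one-hot control label, e.g. selecting one affect expression.
y = np.array([1.0, 0.0, 0.0])
W = gated_weights(y)
print(W.shape)  # (12, 8): changing y reshapes the effective weights
```

Switching the label vector `y` yields a different effective weight matrix, which is how a single trained model can produce movements with different styles without retraining.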
CITATION STYLE
Alemi, O., & Pasquier, P. (2017). WalkNet: A neural-network-based interactive walking controller. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10498 LNAI, pp. 15–24). Springer Verlag. https://doi.org/10.1007/978-3-319-67401-8_2