We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning. Our method learns control policies for characters that imitate not only individual motions, but also the interactions between characters, while maintaining balance and matching the complexity of reference data. Our approach uses a novel reward formulation based on an interaction graph that measures distances between pairs of interaction landmarks. This reward encourages control policies to efficiently imitate the character's motion while preserving the spatial relationships of the interactions in the reference motion. We evaluate our method on a variety of activities, from simple interactions such as a high-five greeting to more complex interactions such as gymnastic exercises, Salsa dancing, and box carrying and throwing. This approach can be used to "clean up" existing motion capture data to produce physically plausible interactions or to retarget motion to new characters with different sizes, kinematics, or morphologies while maintaining the interactions in the original data.
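To make the interaction-graph idea concrete, below is a minimal sketch of how a reward based on pairwise distances between interaction landmarks could be computed. The function name, arguments, edge set, and exponential kernel are illustrative assumptions, not the paper's exact formulation; the authors' reward may weight edges and measure distances differently.

```python
import numpy as np

def interaction_graph_reward(sim_landmarks, ref_landmarks, pairs, scale=1.0):
    """Illustrative reward comparing pairwise landmark distances.

    sim_landmarks, ref_landmarks: (N, 3) arrays of landmark positions for the
        simulated and reference characters (landmarks of all characters pooled
        into one index space).
    pairs: list of (i, j) index pairs forming the edges of the interaction graph,
        e.g. one character's hand paired with the other character's hand.
    scale: sensitivity of the reward to mismatch in pairwise distances (assumed).
    """
    err = 0.0
    for i, j in pairs:
        d_sim = np.linalg.norm(sim_landmarks[i] - sim_landmarks[j])
        d_ref = np.linalg.norm(ref_landmarks[i] - ref_landmarks[j])
        err += (d_sim - d_ref) ** 2
    # Exponential kernel maps accumulated mismatch to a reward in (0, 1],
    # so the policy is rewarded for preserving the reference spatial relationships.
    return float(np.exp(-scale * err / max(len(pairs), 1)))
```

For a high-five, for instance, an edge between the two characters' right hands would keep the simulated hands roughly as far apart as they are in the reference motion at each frame.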
Zhang, Y., Gopinath, D., Ye, Y., Hodgins, J., Turk, G., & Won, J. (2023). Simulation and Retargeting of Complex Multi-Character Interactions. In Proceedings - SIGGRAPH 2023 Conference Papers. Association for Computing Machinery, Inc. https://doi.org/10.1145/3588432.3591491