VCPoser: Interactive Pose Generation of Virtual Characters Corresponding to Human Pose Input

Abstract

Virtual characters (VCs) play a significant role in the entertainment industry, and AI-driven VCs are being developed to enable interaction with users. People are attracted to these VCs, creating a desire to share the same world with them. One way to record memories with VCs is to capture photos or videos with them, but users are usually required to adapt their own pose to the VC's pre-rendered action. To allow more seamless collaboration with VCs in photography scenarios, we propose VCPoser, which enables a VC to adapt its pose to the pose of the user. We built a deep neural network-based system that predicts a VC's pose from the user's pose data by learning from paired pose data. Our quantitative evaluations and user studies demonstrate that the system can predict and generate VC poses and composite them next to the posing user in a photo. We also analyze how people conceive paired poses, both to better understand them and to share insights for aesthetic pose design.
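The core idea, a network that maps a user's pose to a paired VC pose, could be sketched roughly as follows. The architecture, joint count, and layer sizes here are illustrative assumptions, not the authors' actual design, and the weights would in practice be learned from paired (user pose, VC pose) data rather than randomly initialized.

```python
import numpy as np

# Illustrative sketch only: a small MLP that maps a user's pose
# (assumed here to be 17 2D keypoints, flattened) to a predicted
# VC pose of the same shape. All sizes are assumptions.

rng = np.random.default_rng(0)

N_JOINTS = 17           # assumed keypoint count (COCO-style)
IN_DIM = N_JOINTS * 2   # flattened (x, y) user keypoints
HID = 64                # assumed hidden width
OUT_DIM = N_JOINTS * 2  # predicted VC keypoints

# Random weights stand in for parameters learned from paired pose data.
W1 = rng.normal(0, 0.1, (IN_DIM, HID))
b1 = np.zeros(HID)
W2 = rng.normal(0, 0.1, (HID, OUT_DIM))
b2 = np.zeros(OUT_DIM)

def predict_vc_pose(user_pose: np.ndarray) -> np.ndarray:
    """Forward pass: user keypoints -> predicted VC keypoints."""
    h = np.maximum(user_pose @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

user_pose = rng.normal(size=IN_DIM)  # a dummy user pose vector
vc_pose = predict_vc_pose(user_pose)
print(vc_pose.shape)  # (34,)
```

The predicted keypoints could then drive the VC's skeleton so the character's pose complements the user's when the photo is composited.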

Cite (APA)
Kono, M., Morimoto, N., & Kaku, R. (2022). VCPoser: Interactive Pose Generation of Virtual Characters Corresponding to Human Pose Input. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST. Association for Computing Machinery. https://doi.org/10.1145/3562939.3565640
