Creating personalized avatars

Abstract

Digital heritage applications use virtual characters extensively to populate reconstructions of heritage sites in virtual and augmented reality. Creating these believable characters requires considerable effort: each character has to be modelled, textured, rigged and animated. In this chapter, we present a framework that captures a point cloud of a real user using multiple depth cameras and subsequently deforms a template mesh to match the captured geometry. The topology of the template mesh is preserved during the deformation process. To validate our system, we compare limb lengths and body part ratios measured on the deformed mesh with the corresponding anthropometric measurements taken from the real user. Furthermore, we use a single depth camera to capture the motion of a real performer, which we can then use to animate the mesh. This semi-automatic process requires only commodity depth cameras (Microsoft Kinect cameras) and no other specialized hardware. We also present extensions to Blender, an open-source animation authoring environment, that allow us to synthesize character animation from prerecorded motion data. We then briefly discuss the challenges involved in enhancing the appearance of the characters through physically based animation of virtual garments.
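
The chapter itself does not include source code; the sketches below are illustrative only. First, a minimal sketch of the capture step, assuming the Open3D library: point clouds from two calibrated depth cameras are fused into a single cloud of the user, with ICP used to refine the alignment. The file names, voxel size and initial extrinsic transform are placeholders, not values from the chapter.

```python
import numpy as np
import open3d as o3d

# Load per-camera point clouds (hypothetical file names).
pcd_front = o3d.io.read_point_cloud("kinect_front.ply")
pcd_back = o3d.io.read_point_cloud("kinect_back.ply")

# Rough extrinsic calibration: bring the back camera's cloud into the
# front camera's frame (identity here as a placeholder).
init_extrinsic = np.eye(4)

# Refine the alignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    pcd_back, pcd_front,
    max_correspondence_distance=0.05,  # metres; tune per setup
    init=init_extrinsic,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
pcd_back.transform(result.transformation)

# Merge and downsample to obtain one cloud for template-mesh fitting.
merged = (pcd_front + pcd_back).voxel_down_sample(voxel_size=0.005)
o3d.io.write_point_cloud("user_cloud.ply", merged)
```

Second, since the chapter mentions Blender extensions for synthesizing character animation from prerecorded motion data, here is a hedged sketch of the kind of scripting involved, using Blender's standard bpy API. The per-frame joint rotations below are a hypothetical data layout, not the chapter's capture format; they are keyframed onto the pose bones of an armature.

```python
import bpy

# Prerecorded motion data: per bone, one quaternion (w, x, y, z) per frame.
# This layout is hypothetical; the chapter's format is not specified here.
motion_data = {
    "upper_arm.L": [(1.0, 0.0, 0.0, 0.0), (0.97, 0.24, 0.0, 0.0)],
    "forearm.L":  [(1.0, 0.0, 0.0, 0.0), (0.92, 0.38, 0.0, 0.0)],
}

armature = bpy.data.objects["Armature"]  # the rigged template character

for bone_name, frames in motion_data.items():
    pbone = armature.pose.bones[bone_name]
    pbone.rotation_mode = 'QUATERNION'
    for frame_index, quat in enumerate(frames, start=1):
        pbone.rotation_quaternion = quat
        # Insert a keyframe so the pose is stored on the timeline.
        pbone.keyframe_insert(data_path="rotation_quaternion",
                              frame=frame_index)
```

Blender can also import full motion clips directly (for example, bpy.ops.import_anim.bvh for BVH files), which is a common starting point before retargeting motion onto a template character's rig.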

Citation (APA)

Mashalkar, J., & Chaudhuri, P. (2018). Creating personalized avatars. In Digital Hampi: Preserving Indian Cultural Heritage (pp. 283–298). Springer Singapore. https://doi.org/10.1007/978-981-10-5738-0_17
