Virtual hands have a wide range of applications, including education, medical simulation, training, animation, and gaming. In education and training, they can be used to teach complex procedures or simulate realistic scenarios; this extends to medical training and therapy, where they simulate real-life surgical procedures and physical rehabilitation exercises. In animation, they can generate believable pre-computed or real-time hand poses and grasping animations. In games, they can control virtual objects and perform actions such as shooting a gun or throwing a ball. In consumer-grade VR setups, virtual hand manipulation is usually approximated from controller button states, which can result in unnatural final hand poses. One solution to this problem is the use of pre-recorded hand poses or auto-grasping based on physics-driven collision detection. However, this approach has limitations: it does not account for non-convex parts of objects, and it can significantly degrade performance. In this paper, we propose a new approach that takes a snapshot of the Signed Distance Field (SDF) of the region below the user's hand at the moment of the grab action. By sampling this 3D matrix during the finger-bending phase, we obtain the distance of each finger part to the object surface. Compared with solutions based on discrete collision detection, our method achieves better visual results at a significantly lower computational cost.
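The core idea of the abstract can be illustrated with a short sketch: a cached 3D SDF grid is sampled (here with trilinear interpolation) while a finger joint bends, and bending stops once the sampled distance drops below a contact threshold. This is only an illustrative reconstruction under assumed conventions; the grid layout, the `finger_point_at` callback, the threshold value, and all function names are hypothetical and not taken from the paper.

```python
import numpy as np

def sample_sdf(sdf, origin, cell_size, point):
    """Trilinearly sample a 3D SDF grid (numpy array) at a world-space point."""
    # Convert world coordinates to fractional grid indices.
    g = (np.asarray(point, dtype=float) - origin) / cell_size
    i0 = np.clip(np.floor(g).astype(int), 0, np.array(sdf.shape) - 2)
    t = g - i0  # interpolation weights within the cell
    # Gather the 2x2x2 corner values and interpolate axis by axis.
    c = sdf[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
    c = c[0] * (1 - t[0]) + c[1] * t[0]   # along x -> shape (2, 2)
    c = c[0] * (1 - t[1]) + c[1] * t[1]   # along y -> shape (2,)
    return c[0] * (1 - t[2]) + c[1] * t[2]  # along z -> scalar distance

def bend_until_contact(sdf, origin, cell_size, finger_point_at,
                       contact_eps=0.002, max_angle=1.6, step=0.02):
    """Increase a joint angle until the tracked finger point reaches the surface.

    `finger_point_at(angle)` maps a joint angle to the world-space position of
    a finger part (a hypothetical callback standing in for hand kinematics).
    """
    angle = 0.0
    while angle < max_angle:
        p = finger_point_at(angle)
        if sample_sdf(sdf, origin, cell_size, p) <= contact_eps:
            break  # finger part has reached the object surface
        angle += step
    return angle
```

Because the SDF snapshot is a plain array lookup per finger part, each bending step costs a handful of interpolations rather than a discrete collision query against the object mesh, which is consistent with the performance argument in the abstract.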
Puchalski, M., & Woźna-Szcześniak, B. (2023). Autograsping pose of virtual hand model using the Signed Distance Field real-time sampling with fine-tuning. Computer Science Research Notes, 31(1–2), 232–240. https://doi.org/10.24132/CSRN.3301.27