Using visual feedback to improve hand movement accuracy in confined-occluded spaces in virtual reality

Abstract

Accurate and informative hand-object collision feedback is vitally important for hand manipulation in virtual reality (VR). However, to the best of our knowledge, hand movement performance in fully occluded and confined VR spaces under visual collision feedback is still under investigation. In this paper, we first studied the effects of several popular forms of visual hand-object collision feedback on hand movement performance. To test these effects, we conducted a within-subject user study (n=18) using a target-reaching task in a confined box. Results indicated that users had the best task performance with see-through visualization, and the most accurate movement with a hybrid of proximity-based color gradation and deformation. Through further analysis, we concluded that integrating see-through visualization with a proximity-based visual cue could offer the best compromise between speed and accuracy for hand movement in enclosed VR spaces. On this basis, we designed a visual collision feedback technique based on projector decals, which combines the advantages of see-through visualization and color gradation. Finally, we present demos of potential uses of the proposed visual cue.
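As a rough illustration of the proximity-based color gradation the abstract describes, the sketch below maps hand-to-object distance to a color that shifts from green (far) to red (contact). This is a hypothetical minimal example, not code from the paper; the function name, distance threshold, and color mapping are all assumptions.

```python
def proximity_color(distance, max_dist=0.10):
    """Blend from green (far) to red (touching) by proximity.

    distance: hand-to-surface distance in meters (clamped to [0, max_dist]).
    max_dist: distance at which the cue is fully green (assumed value).
    Returns an (r, g, b) tuple with components in [0, 1].
    """
    # Normalize: t = 0 at contact, t = 1 at or beyond max_dist.
    t = max(0.0, min(distance, max_dist)) / max_dist
    # Linear red-to-green gradient; a renderer would apply this tint
    # to the virtual hand or the occluding surface.
    return (1.0 - t, t, 0.0)
```

In a real VR pipeline this color would be recomputed each frame from the tracked hand position and applied as a tint or decal on the occluding geometry.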

Citation (APA)

Wang, Y., Hu, Z., Yao, S., & Liu, H. (2023). Using visual feedback to improve hand movement accuracy in confined-occluded spaces in virtual reality. Visual Computer, 39(4), 1485–1501. https://doi.org/10.1007/s00371-022-02424-2
