Mixed Reality (MR) is the next evolution of human-computer interaction: MR combines the physical and digital environments so that they coexist. Interaction remains a major research area in Augmented Reality (AR) but has received far less attention in MR, because current MR display techniques are still not robust and intuitive enough to let users interact naturally with 3D content. New user-interaction techniques have been widely studied; the more advanced of these allow the system to accept more than one input modality. Multimodal interaction aims to deliver intuitive manipulation of multiple objects with gestures. This paper discusses a multimodal interaction technique using gesture and speech, and proposes an experimental setup to implement multimodal input in the MR interface. Real hand gestures are combined with speech input in MR to perform spatial object manipulations. The paper explains the implementation stage, which involves interaction using gesture and speech input to enhance the user experience in the MR workspace. After acquiring gesture input and speech commands, spatial manipulation for selection and scaling is invoked through multimodal interaction, and the paper ends with a discussion.
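The abstract describes fusing a hand-gesture target with a speech command to select and scale objects. A minimal sketch of that fusion logic might look as follows; all names, thresholds, and commands here are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class GestureInput:
    # Hypothetical gesture frame: which object the hand currently
    # indicates, and how confidently (e.g. pinch strength from 0 to 1).
    target_id: int
    pinch_strength: float

@dataclass
class SceneObject:
    obj_id: int
    scale: float = 1.0
    selected: bool = False

def fuse(gesture: GestureInput, speech: str, scene: dict) -> None:
    """Apply a recognized speech command to the gesture-indicated object.

    Assumed fusion rule: the gesture resolves *which* object, the speech
    command resolves *what* to do (select / scale up / scale down).
    """
    obj = scene.get(gesture.target_id)
    if obj is None or gesture.pinch_strength < 0.5:
        return  # no confident gesture target: ignore the speech command
    cmd = speech.strip().lower()
    if cmd == "select":
        obj.selected = True
    elif cmd == "bigger":
        obj.scale *= 1.25
    elif cmd == "smaller":
        obj.scale *= 0.8

# Usage: point at object 1 with a firm pinch, then speak two commands.
scene = {1: SceneObject(obj_id=1)}
fuse(GestureInput(target_id=1, pinch_strength=0.9), "select", scene)
fuse(GestureInput(target_id=1, pinch_strength=0.9), "bigger", scene)
```

In a real MR pipeline the gesture frame would come from a hand tracker and the command string from a speech recognizer; the sketch only shows the fusion step where the two modalities are combined.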
Aladin, M. Y. F., Ismail, A. W., Ismail, N. A., & Rahim, M. S. M. (2020). Object selection and scaling using multimodal interaction in mixed reality. In IOP Conference Series: Materials Science and Engineering (Vol. 979). IOP Publishing Ltd. https://doi.org/10.1088/1757-899X/979/1/012004