Using Multi-modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality

Abstract

We propose a multi-modal approach to manipulating smart home devices in a smart home environment simulated in virtual reality. The approach seeks to determine the user’s intent, expressed as a target smart home device and the desired action for that device to perform. It draws on two main modalities, the user’s spoken utterance and spatial information (such as gestures, positions, and hand interactions), supplemented by additional context such as the device’s current state. Because the information carried by the utterance and the spatial information can be disjoint or complementary, we process the two sources in parallel using multiple machine learning models and ensemble their outputs to produce the final prediction. Beyond the proposed approach, we also describe our prototype and report our initial findings.
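The abstract does not specify how the per-modality predictions are combined, so the following is a minimal sketch of the parallel-then-ensemble idea it describes, assuming a soft-voting (weighted-average) ensemble over per-modality probability distributions; the function name, weights, and example numbers are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of the parallel, ensembled intent pipeline described
# in the abstract. Names, weights, and scores are illustrative assumptions.
import numpy as np

def predict_intent(utterance_probs: np.ndarray,
                   spatial_probs: np.ndarray,
                   device_state_probs: np.ndarray,
                   weights=(0.5, 0.3, 0.2)):
    """Combine per-modality probability distributions over candidate
    (device, action) intents via a weighted average (soft voting)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()  # normalize so the combined scores remain a distribution
    combined = (w[0] * utterance_probs
                + w[1] * spatial_probs
                + w[2] * device_state_probs)
    return int(np.argmax(combined)), combined

# Example: three modality models each score four candidate (device, action)
# intents, e.g. (lamp, on), (lamp, off), (tv, on), (tv, off).
utterance = np.array([0.70, 0.10, 0.10, 0.10])  # speech/NLU model output
spatial   = np.array([0.20, 0.50, 0.20, 0.10])  # gesture/position model output
state     = np.array([0.25, 0.25, 0.25, 0.25])  # prior from device states

intent_idx, scores = predict_intent(utterance, spatial, state)
print(intent_idx, scores)
```

Under these assumptions, soft voting lets a confident modality dominate while the others still break ties, which fits the abstract's observation that the utterance and spatial information can be disjoint or complementary.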

Citation (APA)
Yao, P., Hou, Y., He, Y., Cheng, D., Hu, H., & Zyda, M. (2022). Using Multi-modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13317 LNCS, pp. 94–112). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-05939-1_7
