Abstract
Cybersickness is one of the key problems undermining user experience in virtual reality. While many studies seek ways to alleviate cybersickness, only a few have approached it from a multimodal perspective. In this paper, we propose a multimodal, attention-based cybersickness prediction model. Our model was trained on a total of 24,300 seconds of data from 27 participants and yielded an F1-score of 0.82. Our results highlight the potential to model cybersickness from multimodal sensory information with a high level of performance and suggest that the model should be extended with additional, diverse samples.
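The abstract does not detail the model architecture, but attention-based multimodal fusion typically scores each modality's feature vector and combines them as a weighted sum. The sketch below is a minimal, hypothetical illustration in NumPy: the modality names (eye, head, physio), feature dimension, and the fixed random score vector are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(modalities, rng=None):
    """Fuse per-modality feature vectors via attention-style weights.

    In a trained model the score vector would be learned; here it is a
    fixed random projection purely for illustration.
    """
    rng = rng or np.random.default_rng(0)
    feats = np.stack(modalities)             # (n_modalities, d)
    w = rng.standard_normal(feats.shape[1])  # stand-in for learned scorer
    scores = feats @ w                       # one scalar score per modality
    alpha = softmax(scores)                  # attention weights, sum to 1
    fused = alpha @ feats                    # weighted sum of modality features
    return fused, alpha

# three hypothetical sensory streams (e.g., eye tracking, head motion, physiology)
eye, head, physio = np.ones(8), np.zeros(8), 0.5 * np.ones(8)
fused, alpha = attention_fuse([eye, head, physio])
```

The fused vector would then feed a downstream classifier that outputs a cybersickness prediction per time window.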
Citation
Jeong, D., & Han, K. (2022). Leveraging multimodal sensory information in cybersickness prediction. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST. Association for Computing Machinery. https://doi.org/10.1145/3562939.3565667