Context-Adaptive Visual Cues for Safe Navigation in Augmented Reality Using Machine Learning

Abstract

Augmented reality (AR) using head-mounted displays (HMDs) is a powerful tool for user navigation. Existing approaches usually display navigational cues that are constantly visible (always-on). This limits real-world application, as visual cues can mask safety-critical objects. To address this challenge, we develop a context-adaptive system for safe navigation in AR using machine learning. Specifically, our system utilizes a neural network, trained to predict when to display visual cues during AR-based navigation. For this, we conducted two user studies. In User Study 1, we recorded training data from an AR HMD. In User Study 2, we compared our context-adaptive system to an always-on system. We find that our context-adaptive system enables task completion speeds on a par with the always-on system, promotes user autonomy, and facilitates safety through reduced visual noise. Overall, participants expressed their preference for our context-adaptive system in an industrial workplace setting.
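To make the abstract's core idea concrete, below is a minimal illustrative sketch in Python (PyTorch) of the kind of binary classifier the paper describes: a neural network that, given context features from the AR HMD, predicts whether a navigational cue should currently be displayed. The feature set, architecture, and decision threshold are assumptions for demonstration only, not the authors' published model.

```python
# Illustrative sketch only: a small network that predicts, per sensor frame,
# whether to display a navigational cue. Features and sizes are hypothetical.
import torch
import torch.nn as nn

class CueDisplayClassifier(nn.Module):
    """Binary classifier: given context features from the AR HMD
    (e.g., head pose, gaze direction, distance to the navigation target),
    output a logit for "show cue" at this moment."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, 1),  # single logit: display cue or not
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical usage: one feature vector per HMD sensor frame.
model = CueDisplayClassifier(n_features=8)
frame_features = torch.randn(1, 8)  # placeholder sensor readings
show_cue = torch.sigmoid(model(frame_features)) > 0.5
print("display cue:", bool(show_cue))
```

In a context-adaptive system of this kind, suppressing the cue when the classifier's output is low is what reduces visual noise relative to an always-on display.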

Citation (APA)
Seeliger, A., Weibel, R. P., & Feuerriegel, S. (2024). Context-Adaptive Visual Cues for Safe Navigation in Augmented Reality Using Machine Learning. International Journal of Human-Computer Interaction, 40(3), 761–781. https://doi.org/10.1080/10447318.2022.2122114
