Automatic information positioning scheme in AR-assisted maintenance based on visual saliency


Abstract

This paper presents a novel scheme for automatically positioning pertinent information in Augmented Reality (AR)-assisted maintenance, based on a biologically inspired visual saliency model. In AR-assisted maintenance, a human operator performs routine service, repair, assembly and disassembly tasks with the aid of virtually displayed information. Appropriate positioning of this virtual information is crucial: it must remain visible without obstructing the maintenance operation. In contrast to conventional positioning approaches based on discretization and clustering of the scene, this paper proposes a novel application of a graph-based visual saliency model to position virtual information automatically. In particular, this research correlates the types of information with the activation levels on the resulting visual saliency map for different scenarios. Real-life examples are used to evaluate the feasibility of using visual saliency for information positioning in AR applications.
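The underlying idea — place virtual annotations where the scene is least salient, so they stay visible without covering the part being serviced — can be sketched as below. This is an illustrative toy, not the paper's method: `simple_saliency` is a crude global-contrast stand-in for the graph-based saliency model the authors use, and the placement rule simply selects the label-sized window with the lowest summed saliency.

```python
import numpy as np

def simple_saliency(image):
    """Crude contrast-based saliency: absolute deviation from the global mean.
    (Stand-in only -- the paper uses a graph-based visual saliency model.)"""
    return np.abs(image - image.mean())

def best_label_position(saliency, label_h, label_w):
    """Return (row, col) of the label-sized window with the lowest summed
    saliency, i.e. the least visually important region of the scene."""
    H, W = saliency.shape
    # Integral image lets us evaluate every window sum in O(1).
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(saliency, axis=0), axis=1)
    best, best_pos = np.inf, (0, 0)
    for r in range(H - label_h + 1):
        for c in range(W - label_w + 1):
            s = (ii[r + label_h, c + label_w] - ii[r, c + label_w]
                 - ii[r + label_h, c] + ii[r, c])
            if s < best:
                best, best_pos = s, (r, c)
    return best_pos

# Synthetic scene: a bright (salient) part in the top-left, flat elsewhere.
scene = np.full((60, 80), 0.5)
scene[5:25, 5:25] = 1.0          # the maintenance part the operator must see
sal = simple_saliency(scene)
row, col = best_label_position(sal, 15, 20)
print(row, col)  # a window in the uniform background, clear of the part
```

A real AR system would recompute the saliency map per frame and add constraints (screen margins, leader-line length, temporal stability) on top of the raw minimum-saliency choice.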

Citation (APA)

Chang, M. M. L., Ong, S. K., & Nee, A. Y. C. (2016). Automatic information positioning scheme in AR-assisted maintenance based on visual saliency. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9768, pp. 453–462). Springer Verlag. https://doi.org/10.1007/978-3-319-40621-3_33
