Video-annotated augmented reality assembly tutorials

Abstract

We present a system for generating and visualizing interactive 3D Augmented Reality tutorials based on 2D video input; the resulting tutorials allow viewpoint control at runtime. Inspired by assembly planning, we analyze the input video using a 3D CAD model of the object to determine an assembly graph that encodes blocking relationships between parts. Using an assembly graph enables us to detect assembly steps that are otherwise difficult to extract from the video, and generally improves object detection and tracking by providing prior knowledge about movable parts. To avoid information loss, we combine the 3D animation with relevant parts of the 2D video so that we can show detailed manipulations and tool usage that cannot be easily extracted from the video. To further support user orientation, we visually align the 3D animation with the real-world object by using texture information from the input video. We developed a presentation system that uses commonly available hardware to make our results accessible for home use and demonstrate the effectiveness of our approach by comparing it to traditional video tutorials.
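To make the assembly-graph idea concrete, the following minimal Python sketch shows one way blocking relationships between parts could be represented and turned into a valid assembly order by topological sorting. This is an illustrative assumption, not the authors' implementation; the part names and the function assembly_order are hypothetical.

    # Minimal sketch of an assembly graph (assumed representation, not the paper's code).
    # A pair (blocker, blocked) means "blocker prevents blocked from being removed",
    # so during assembly the blocked part must already be in place before the blocker.

    from collections import defaultdict, deque

    def assembly_order(parts, blocks):
        """Return one valid assembly order, or None if the constraints are cyclic."""
        # Precedence edge: blocked part -> blocking part (assembly order).
        succ = defaultdict(list)
        indeg = {p: 0 for p in parts}
        for blocker, blocked in blocks:
            succ[blocked].append(blocker)
            indeg[blocker] += 1

        # Kahn's algorithm: repeatedly place parts whose prerequisites are met.
        ready = deque(p for p in parts if indeg[p] == 0)
        order = []
        while ready:
            p = ready.popleft()
            order.append(p)
            for q in succ[p]:
                indeg[q] -= 1
                if indeg[q] == 0:
                    ready.append(q)
        return order if len(order) == len(parts) else None

    # Hypothetical example: the gear blocks the base plate, the cover blocks the gear.
    print(assembly_order(["base", "gear", "cover"],
                         [("gear", "base"), ("cover", "gear")]))
    # -> ['base', 'gear', 'cover']

A structure like this also supplies the prior knowledge about movable parts mentioned above: at any step, only parts with no unmet blocking constraints can move, which narrows the search space for detection and tracking.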

Citation (APA)
Yamaguchi, M., Mori, S., Mohr, P., Tatzgern, M., Stanescu, A., Saito, H., & Kalkofen, D. (2020). Video-annotated augmented reality assembly tutorials. In UIST 2020 - Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 1010–1022). Association for Computing Machinery, Inc. https://doi.org/10.1145/3379337.3415819
