We present an online, fully unsupervised approach for automatically extracting video guides of how objects are used from wearable gaze trackers worn by multiple users. Given egocentric video and eye gaze from multiple users performing tasks, the system discovers task-relevant objects and automatically extracts guidance videos showing how these objects have been used. In the assistive mode, the paper proposes a method for selecting a suitable video guide to display to a novice user indicating how to use an object, triggered purely by the user's gaze. The approach is tested on a variety of daily tasks ranging from opening a door to preparing coffee and operating a gym machine.
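The gaze-triggered selection described above can be sketched as follows. This is a minimal illustration under assumed data structures (bounding boxes and guide-video paths); the paper's actual object discovery and guide-selection method is not specified here, and all names below are hypothetical.

```python
# Hypothetical sketch: pick a video guide based on the user's gaze fixation.
# Object regions and guide paths are illustrative assumptions, not the
# paper's actual representation.

def select_guide(gaze, objects):
    """Return the guide video of the object whose region contains the gaze.

    gaze: (x, y) fixation point in image coordinates.
    objects: list of dicts, each with a "bbox" (x0, y0, x1, y1)
             and a "guide" video path.
    """
    x, y = gaze
    for obj in objects:
        x0, y0, x1, y1 = obj["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj["guide"]
    return None  # no task-relevant object under the current gaze

# Example: two discovered objects with associated guide videos.
objects = [
    {"bbox": (100, 50, 300, 200), "guide": "coffee_machine_guide.mp4"},
    {"bbox": (400, 100, 600, 400), "guide": "door_handle_guide.mp4"},
]
print(select_guide((150, 120), objects))  # coffee_machine_guide.mp4
```

In practice the system operates online, so such a lookup would run per fixation over the objects discovered so far rather than over a fixed list.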
Damen, D., Haines, O., Leelasawassuk, T., Calway, A., & Mayol-Cuevas, W. (2015). Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage (pp. 481–492). https://doi.org/10.1007/978-3-319-16199-0_34