Using eye-gaze and visualization to augment memory: A framework for improving context recognition and recall

Abstract

In our everyday lives, important pieces of information are lost because our brains fail to convert much of short-term memory into long-term memory. In this paper, we propose a framework that uses an eye-tracking interface to store pieces of forgotten information and present them back to the user later through an integrated head-mounted display (HMD). The process consists of three main steps: context recognition, data storage, and augmented reality (AR) display. We demonstrate the system's ability to recall information with the example of a lost book page, detecting when the user reads the book again and intelligently presenting the last-read position back to the user. Two short user evaluations show that the system can recall book pages within 40 milliseconds and that the position where a user left off can be calculated with approximately 0.5 cm accuracy. © 2014 Springer International Publishing Switzerland.
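The three-step pipeline summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the class and method names, the gaze-sample format, and the idea of keying stored positions by a recognized document ID are assumptions for illustration; the actual eye tracker, document recognition, and HMD rendering are not shown.

```python
# A minimal sketch (hypothetical, not the paper's code) of the pipeline:
# context recognition -> data storage -> AR display of the last read position.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class GazeSample:
    document_id: str                   # document identified by context recognition (assumed)
    page: int                          # page currently being read
    position_cm: Tuple[float, float]   # gaze position on the page, in centimeters


class ReadingMemoryAssistant:
    """Stores the last gaze position per document and recalls it on re-reading."""

    def __init__(self) -> None:
        # Step 2: data storage -- last known reading position per document.
        self._last_position: Dict[str, GazeSample] = {}

    def on_gaze(self, sample: GazeSample) -> None:
        # Step 1: context recognition has already identified the document;
        # here we simply record the most recent gaze position for it.
        self._last_position[sample.document_id] = sample

    def recall(self, document_id: str) -> Optional[GazeSample]:
        # Step 3: AR display -- an HMD overlay would mark this position
        # when the same document is recognized again.
        return self._last_position.get(document_id)


if __name__ == "__main__":
    assistant = ReadingMemoryAssistant()
    assistant.on_gaze(GazeSample("book-123", page=57, position_cm=(4.2, 18.7)))
    print(assistant.recall("book-123"))  # last read position, ready for the AR overlay
```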

CITATION STYLE

APA

Orlosky, J., Toyama, T., Sonntag, D., & Kiyokawa, K. (2014). Using eye-gaze and visualization to augment memory: A framework for improving context recognition and recall. In Lecture Notes in Computer Science (Vol. 8530, pp. 282–291). Springer. https://doi.org/10.1007/978-3-319-07788-8_27
