Gaze-directed volume rendering


Abstract

We direct our gaze at an object by rotating our eyes or head until the object's projection falls on the fovea, a small region of enhanced spatial acuity near the center of the retina. In this paper, we explore methods for incorporating gaze direction into rendering algorithms. This approach permits generation of images exhibiting continuously varying resolution, and allows these images to be displayed on conventional television monitors. Specifically, we describe a ray tracer for volume data in which the number of rays cast per unit area on the image plane and the number of samples drawn per unit length along each ray are functions of local retinal acuity. We also describe an implementation using 2D and 3D mip maps, an eye tracker, and the Pixel-Planes 5 massively parallel raster display system. Pending completion of Pixel-Planes 5 in the spring of 1990, we have written a simulator on a Stellar graphics supercomputer. Preliminary results indicate that while users are aware of the variable-resolution structure of the image, the high-resolution sweet spot follows their gaze well and promises to be useful in practice.
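The abstract's core idea, scaling sampling density by local retinal acuity, can be sketched as follows. This is an illustrative model only: the hyperbolic acuity falloff, its parameters, and the clamping to a minimum rate are assumptions for demonstration, not the authors' actual implementation.

```python
import math

def acuity(ecc_deg, e2=2.5):
    """Hypothetical relative acuity model: acuity is 1.0 at the fovea
    (eccentricity 0) and halves every e2 degrees of eccentricity.
    The hyperbolic form and the e2 value are assumptions."""
    return 1.0 / (1.0 + ecc_deg / e2)

def sampling_rate(ecc_deg, max_rate=4.0, min_rate=0.25):
    """Sampling density (rays per unit image area, or samples per unit
    length along a ray) scaled by local acuity, clamped to a floor so
    the periphery is still sampled coarsely rather than not at all."""
    return max(min_rate, max_rate * acuity(ecc_deg))

def eccentricity_deg(px, py, gaze_x, gaze_y, pixels_per_degree=30.0):
    """Approximate retinal eccentricity of an image-plane point given
    the tracked gaze point, using a small-angle approximation; the
    pixels-per-degree conversion is an assumed display geometry."""
    return math.hypot(px - gaze_x, py - gaze_y) / pixels_per_degree
```

For example, a pixel at the gaze point samples at the full rate, while a pixel 300 pixels (here, 10 degrees) away samples at `max(0.25, 4.0 / 5.0) = 0.8` times the unit rate, giving the continuously varying resolution the abstract describes.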

Citation (APA)
Levoy, M., & Whitaker, R. (1990). Gaze-directed volume rendering. In Proceedings of the 1990 Symposium on Interactive 3D Graphics, I3D 1990 (pp. 217–223). Association for Computing Machinery. https://doi.org/10.1145/91385.91449
