Fast Approximate Light Field Volume Rendering: Using Volume Data to Improve Light Field Synthesis via Convolutional Neural Networks

Abstract

Volume visualization pipelines stand to benefit from light field display technology, which offers enhanced perceptual qualities. However, such displays require far more pixels to be rendered at interactive rates. Because volume rendering relies on ray-tracing techniques, this increase in resolution is challenging for modest hardware. In this work, we demonstrate an approach that synthesizes the majority of the viewpoints in the light field from a small set of rendered viewpoints using a convolutional neural network. We show that synthesis performance can be further improved by giving the network access to the volume data itself. To do this efficiently, we propose a range of approaches and evaluate them on two datasets collected for this task. All of these approaches improve synthesis performance while avoiding expensive 3D convolutional operations. With this approach, we improve light field volume rendering times by a factor of 8 for our test case.
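To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the approach described above: a purely 2D convolutional network that synthesizes the remaining light field viewpoints from a few rendered views, with an additional input channel derived from the volume data so that no 3D convolutions are needed. All names, layer sizes, view counts, and the choice of a maximum-intensity projection as the 2D volume proxy are illustrative assumptions.

# Hedged sketch of CNN-based light field view synthesis with a volume-derived input.
import torch
import torch.nn as nn


class LightFieldSynthesisNet(nn.Module):
    def __init__(self, num_input_views=4, num_output_views=60, use_volume_proxy=True):
        super().__init__()
        in_channels = num_input_views + (1 if use_volume_proxy else 0)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # The decoder predicts all remaining viewpoints at once, one per output channel.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, num_output_views, kernel_size=3, padding=1),
        )

    def forward(self, rendered_views, volume_proxy=None):
        # rendered_views: (B, num_input_views, H, W) grayscale renderings of sparse views.
        # volume_proxy:   (B, 1, H, W) cheap 2D summary of the volume (assumed: a MIP),
        #                 stacked as an extra channel so only 2D convolutions are used.
        x = rendered_views
        if volume_proxy is not None:
            x = torch.cat([rendered_views, volume_proxy], dim=1)
        return self.decoder(self.encoder(x))


# Hypothetical usage: an 8x8 light field where 4 views are rendered and the other 60
# are synthesized; the volume proxy is a max projection along the depth axis.
views = torch.rand(1, 4, 256, 256)
volume = torch.rand(1, 256, 256, 128)           # (B, H, W, depth), illustrative layout
proxy = volume.max(dim=-1).values.unsqueeze(1)  # 2D maximum-intensity projection
net = LightFieldSynthesisNet()
synthesized = net(views, proxy)                 # (1, 60, 256, 256)

In this sketch, only the four input views need to be ray-traced; the remaining views are produced by a single forward pass of the 2D network, which is the source of the reported rendering-time improvement.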

Citation (APA)

Bruton, S., Ganter, D., & Manzke, M. (2020). Fast Approximate Light Field Volume Rendering: Using Volume Data to Improve Light Field Synthesis via Convolutional Neural Networks. In Communications in Computer and Information Science (Vol. 1182 CCIS, pp. 338–361). Springer. https://doi.org/10.1007/978-3-030-41590-7_14
