Abstract
Dedicated ray tracing graphics cards have revolutionized the production of stunning visual effects in real-time rendering, yet the demand for high frame rates and high resolutions remains a challenge. Pixel warping is a crucial technique for increasing frame rate and resolution by exploiting spatio-temporal coherence. Existing super-resolution and frame prediction methods rely heavily on motion vectors from the rendering engine's pipeline to track object movements. This work builds on state-of-the-art heuristic approaches by exploring a novel adaptive recurrent frame prediction framework that integrates learnable motion vectors. Our framework supports the prediction of transparency, particles, and texture animations, with improved motion vectors that capture shading, reflections, and occlusions in addition to geometric motion. We also introduce a feature streaming neural network, dubbed FSNet, that enables adaptive prediction of one or multiple sequential frames. Extensive experiments against state-of-the-art methods demonstrate that FSNet operates at lower latency with significant visual improvements and can upscale frame rates by at least a factor of two. The approach offers a flexible pipeline for improving rendering frame rates across a range of graphics applications and devices.
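To make the pixel-warping idea concrete, the following is a minimal, hedged sketch (not the paper's method): a backward warp in which each output pixel samples the previous frame at a location displaced by a per-pixel motion vector. The function name `warp_frame` and the nearest-neighbor sampling are illustrative assumptions; production warps typically use bilinear sampling and handle disocclusions.

```python
import numpy as np

def warp_frame(frame, motion_vectors):
    """Backward-warp `frame` using per-pixel motion vectors.

    frame:          (H, W) or (H, W, C) array, the previous frame.
    motion_vectors: (H, W, 2) array; channel 0 is the x displacement,
                    channel 1 the y displacement, pointing from the
                    current frame back toward the previous one.
    Illustrative only: nearest-neighbor sampling, clamped borders.
    """
    h, w = frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Each output pixel (x, y) reads the previous frame at (x, y) - mv(x, y).
    src_x = np.clip(np.round(xs - motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

With zero motion vectors the warp is the identity; a uniform horizontal vector field shifts the image, which is the coherence that frame prediction exploits.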
Wu, Z., Zuo, C., Huo, Y., Yuan, Y., Peng, Y., Pu, G., … Bao, H. (2023). Adaptive Recurrent Frame Prediction with Learnable Motion Vectors. In Proceedings - SIGGRAPH Asia 2023 Conference Papers, SA 2023. Association for Computing Machinery, Inc. https://doi.org/10.1145/3610548.3618211