Popular content built on the Metaverse concept allows users to freely place objects in a world space without constraints. To render the varied, high-resolution objects placed by users in real time, several algorithms exist, such as view frustum culling, visibility culling, and occlusion culling. These algorithms selectively remove objects outside the camera's view and eliminate objects that are too small to render. However, they require additional operations to select the objects to cull, which can slow down rendering in a world scene with a massive number of objects. This paper introduces an object-culling technique that uses vertex chunks to render a massive number of objects in real time. The method compresses the bounding boxes of objects into data units called vertex chunks to reduce the input data of the rendering passes, and uses GPU parallel processing to quickly restore the data and select the objects to cull. Whereas existing methods performed all object validity checks on the CPU, this method redistributes that bottleneck from the CPU to the GPU, allowing a massive number of objects to be rendered and efficiently reducing the computation time of previous methods. The experimental results showed a performance improvement of about 15%, with a larger effect as more objects were placed.
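The abstract does not spell out the vertex-chunk encoding or the exact GPU pass, so the following CUDA sketch only illustrates the general idea it describes: each object's bounding box is packed into a compact per-object record, and a GPU kernel tests every box against the six view-frustum planes in parallel, writing a visibility flag that later rendering passes can consume. The struct layout, kernel name, and plane convention are illustrative assumptions, not the paper's actual implementation.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Hypothetical compact record standing in for a "vertex chunk":
// one axis-aligned bounding box per object, stored as min/max corners.
struct BoundingBox {
    float3 mn;  // minimum corner
    float3 mx;  // maximum corner
};

// One frustum plane; ax + by + cz + d >= 0 means "inside or in front".
struct Plane { float a, b, c, d; };

// GPU-parallel validity check: each thread culls one object against the
// six view-frustum planes and writes 1 (visible) or 0 (culled).
__global__ void cullBoxes(const BoundingBox* boxes,
                          const Plane* frustum,   // 6 planes
                          uint8_t* visible,
                          int numObjects)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numObjects) return;

    BoundingBox b = boxes[i];
    uint8_t inside = 1;
    for (int p = 0; p < 6; ++p) {
        Plane pl = frustum[p];
        // Pick the box corner farthest along the plane normal (the
        // "positive vertex"); if even that corner lies behind the
        // plane, the whole box is outside the frustum.
        float x = pl.a >= 0.0f ? b.mx.x : b.mn.x;
        float y = pl.b >= 0.0f ? b.mx.y : b.mn.y;
        float z = pl.c >= 0.0f ? b.mx.z : b.mn.z;
        if (pl.a * x + pl.b * y + pl.c * z + pl.d < 0.0f) {
            inside = 0;
            break;
        }
    }
    visible[i] = inside;
}
```

Because each thread handles one object independently, the per-object validity check parallelizes trivially on the GPU, which is the bottleneck shift from the CPU that the abstract describes.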
Citation:
Lee, E. S., & Shin, B. S. (2023). Vertex Chunk-Based Object Culling Method for Real-Time Rendering in Metaverse. Electronics, 12(12), 2601. https://doi.org/10.3390/electronics12122601