Virtual Reality (VR) content includes seamless 360°×180° panoramic video stitched from multiple overlapping video streams. Many commercial VR devices use a two-camera rig to capture VR content; such devices exhibit increased radial distortion along the stitching seams. Moreover, a fixed number of cameras in the rig makes the VR system non-scalable. Since the VR experience depends directly on the quality of the VR content, it is desirable to create a VR framework that scales with the number of cameras attached to the rig and offers better geometric and photometric quality. In this paper, we propose an end-to-end VR system for stitching full spherical content. The system is composed of a camera-rig calibration module and a stitching module. The calibration module performs geometric alignment of the camera rig. The stitching module transforms texture from the camera or video stream into the VR stream using lookup tables (LUTs) and blend masks (BMs). Our main contribution is improved stitching quality. First, we propose a feature-preprocessing method that filters out inconsistent, error-prone features. Second, we propose a geometric alignment method that outperforms state-of-the-art VR stitching solutions. We tested our system on diverse image sets and obtained state-of-the-art geometric alignment. Moreover, we achieved real-time stitching of camera and video streams at up to 120 fps at 4K resolution. After stitching, we encode the VR content for IP multicasting.
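To make the LUT/BM stitching step concrete, below is a minimal CPU sketch in Python with OpenCV. It assumes that calibration has already produced, for each camera, a pair of remap tables (`map_x`, `map_y`) giving the source pixel for every panorama pixel, plus a normalized floating-point blend mask over the panorama. The function name `stitch_frame` and all parameter names are illustrative, not from the paper, and the authors' real-time pipeline (120 fps at 4K) would presumably run on the GPU; this sketch only shows the data flow.

```python
import cv2
import numpy as np

def stitch_frame(frames, luts, blend_masks, pano_size):
    """Warp each camera frame into the panorama with a precomputed LUT,
    then combine the warped images using per-camera blend masks.

    frames      -- list of HxWx3 uint8 camera images
    luts        -- list of (map_x, map_y) float32 arrays, one pair per camera,
                   mapping every panorama pixel to a source-image coordinate
    blend_masks -- list of float32 weight maps of panorama size; weights of
                   overlapping cameras are assumed to sum to 1 at each pixel
    pano_size   -- (height, width) of the output panorama
    """
    pano = np.zeros((*pano_size, 3), dtype=np.float32)
    for frame, (map_x, map_y), mask in zip(frames, luts, blend_masks):
        # LUT step: fetch the source pixel for every output pixel.
        warped = cv2.remap(frame, map_x, map_y,
                           interpolation=cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_CONSTANT)
        # Blend-mask step: weight each camera's contribution so that
        # overlapping regions transition smoothly across the seams.
        pano += warped.astype(np.float32) * mask[..., None]
    return np.clip(pano, 0, 255).astype(np.uint8)
```

In such a design, the expensive geometry (calibration, alignment, mask feathering) is computed once offline, so the per-frame cost reduces to one remap and one weighted sum per camera, which is what makes LUT-based stitching amenable to real-time operation.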
CITATION STYLE
Saeed, S., Kakli, M. U., Cho, Y., Seo, J., & Park, U. (2020). A high-quality VR calibration and real-time stitching framework using preprocessed features. IEEE Access, 8, 190300–190311. https://doi.org/10.1109/ACCESS.2020.3031413