Smoothing Splines

  • Knott G

Abstract

There has been a trend toward increasing scale in many Augmented Reality (AR) and Mixed Reality (MR) applications, both in the size of the environment captured using SLAM and in the Field of View (FOV) of the displayed digital imagery. However, Optical See-Through (OST) methods have a limited FOV and involve complex design and fabrication. Video See-Through (VST) Head-Mounted Displays (HMDs), on the other hand, offer a much larger FOV and are easier and cheaper to manufacture. Moreover, it is relatively easy to make virtual objects occlude world objects in a video stream. The drawback is that a large lag is imposed on all content (i.e., world imagery and digital content) due to video capturing, processing, and rendering. In this paper, we present a system that implements a stereo VST HMD whose world imagery has high quality (2560×1440 @ 90 fps) and low latency (≤30 ms). The system uses an FPGA that splits the world-imagery stream into two datapaths: one for high-resolution display and one for low-resolution processing. The SLAM algorithm running on the connected computer therefore operates on a down-sampled video stream to overlay digital objects. Before being displayed, the processed video from the computer is synchronized and fused with the high-resolution world imagery.
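
As a rough illustration of the split-datapath idea described above (full-resolution imagery passed straight to display, a down-sampled copy fed to the tracking step, and the rendered content fused back in before display), here is a minimal Python sketch. The functions estimate_pose, render_overlay, and compose are hypothetical placeholders, not APIs or code from the paper, and the resolutions are examples only.

    # Minimal sketch of the dual-datapath idea: the full-resolution frame
    # goes to the display path untouched, while a down-sampled copy feeds
    # a placeholder SLAM/overlay step. All names here are illustrative.
    import numpy as np
    import cv2

    FULL_RES = (1440, 2560)   # rows, cols of the display-path frame
    PROC_RES = (360, 640)     # rows, cols of the processing-path frame

    def estimate_pose(small_frame: np.ndarray) -> np.ndarray:
        """Placeholder for SLAM running on the low-resolution stream."""
        return np.eye(4)  # identity pose, for illustration only

    def render_overlay(pose: np.ndarray, shape) -> np.ndarray:
        """Placeholder renderer: returns an RGBA overlay at full resolution."""
        overlay = np.zeros((*shape, 4), dtype=np.uint8)
        cv2.rectangle(overlay, (1000, 500), (1500, 900), (0, 255, 0, 128), -1)
        return overlay

    def compose(frame: np.ndarray, overlay: np.ndarray) -> np.ndarray:
        """Alpha-blend the rendered overlay onto the full-resolution frame."""
        alpha = overlay[..., 3:4].astype(np.float32) / 255.0
        blended = (frame.astype(np.float32) * (1.0 - alpha)
                   + overlay[..., :3].astype(np.float32) * alpha)
        return blended.astype(np.uint8)

    # Simulated camera frame (a real system would read from the HMD cameras).
    frame = np.random.randint(0, 256, (*FULL_RES, 3), dtype=np.uint8)

    # Processing path: down-sample, estimate pose, render digital content.
    small = cv2.resize(frame, (PROC_RES[1], PROC_RES[0]),
                       interpolation=cv2.INTER_AREA)
    pose = estimate_pose(small)
    overlay = render_overlay(pose, FULL_RES)

    # Display path: fuse the overlay with the untouched full-resolution imagery.
    out = compose(frame, overlay)

In the paper's system this split and the final fusion happen on an FPGA so the world imagery avoids the round trip through the computer; the sketch only mirrors the data flow, not the hardware implementation.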

Cite

APA

Knott, G. D. (2000). Smoothing Splines. In Interpolating Cubic Splines (pp. 123–132). Birkhäuser Boston. https://doi.org/10.1007/978-1-4612-1320-8_10
