Generating video textures by PPCA and Gaussian process dynamical model

Citations: 4
Readers (Mendeley): 9

This article is free to access.

Abstract

Video texture is a new type of medium that provides a continuous, infinitely varying stream of video images from a recorded video clip. It is conventionally synthesized by rearranging the order of frames based on the similarities between all pairs of frames. In this paper, we propose a new method for generating video textures using probabilistic principal component analysis (PPCA) and the Gaussian process dynamical model (GPDM). Compared to the original video texture technique, video textures synthesized with PPCA and GPDM have the following advantages: they can contain new video frames that never appeared in the input video clip; the "dead-end" problem is avoided entirely; and the resulting video textures are more robust to noise. © 2009 Springer-Verlag Berlin Heidelberg.
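The following is a minimal sketch, not the authors' implementation, of two ideas the abstract refers to: the classic video-texture step of comparing all pairs of frames, and dimensionality reduction of frames before modeling their dynamics. Ordinary PCA via SVD is used here only as a simple stand-in for PPCA, the GPDM stage is not shown, and all data, names, and parameters (frames, latent_dim, etc.) are illustrative assumptions.

# Sketch: pairwise frame similarities + low-dimensional frame representation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "video": 50 frames of 16x16 grayscale pixels, flattened to vectors.
num_frames, height, width = 50, 16, 16
frames = rng.random((num_frames, height * width))

# Step 1: L2 distances between all pairs of frames, as used by the original
# video-texture technique to find visually smooth frame transitions.
diffs = frames[:, None, :] - frames[None, :, :]
distance_matrix = np.linalg.norm(diffs, axis=2)   # shape (50, 50)

# Step 2: project frames into a low-dimensional latent space. Plain PCA via SVD
# is a placeholder here; the paper uses PPCA, which additionally models
# observation noise, and then learns the latent dynamics with a GPDM.
latent_dim = 5
centered = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[:latent_dim].T             # shape (50, 5)

print(distance_matrix.shape, latent.shape)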

Citation (APA)

Fan, W., & Bouguila, N. (2009). Generating video textures by PPCA and gaussian process dynamical model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5856 LNCS, pp. 801–808). https://doi.org/10.1007/978-3-642-10268-4_94
