Locally linear embedding based dynamic texture synthesis

Abstract

Dynamic textures are often modeled as a low-dimensional dynamic process. The process usually comprises an appearance model for dimension reduction, a Markovian dynamic model in the latent space to synthesize consecutive new latent variables, and an observation model to map the new latent variables back onto the observation space. The linear dynamic system (LDS) is effective in modeling simple dynamic scenes but has difficulty capturing the nonlinearities of video sequences, which often results in poor visual quality of the synthesized videos. In this paper, we propose a new framework for generating dynamic textures that uses a new appearance model and a new observation model to preserve the nonlinear correlation of video sequences. We use locally linear embedding (LLE) to create a manifold embedding of the input sequence, apply Markovian dynamics to maintain temporal coherence in the latent space and synthesize new manifold points, and develop a novel neighbor-embedding-based method to reconstruct the new manifold points in the image space and constitute new texture videos. Experiments show that our method is effective in capturing complex appearance variation while maintaining the temporal coherence of the newly synthesized texture videos.
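
The abstract only sketches the pipeline, so the following is a minimal illustrative sketch of that three-stage structure, not the authors' exact formulation. It assumes frames are flattened into rows of a matrix X, the latent dynamics are first-order linear with Gaussian noise, and reconstruction uses LLE-style neighbor weights; the function name, parameter values, and use of scikit-learn's LocallyLinearEmbedding are all assumptions made for this example.

```python
# Sketch of: LLE appearance model -> Markovian latent dynamics -> neighbor-
# embedding reconstruction. Illustrative assumptions only; see the paper for
# the actual method.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import NearestNeighbors

def synthesize_dynamic_texture(X, n_new=100, k=8, d=10, reg=1e-3, seed=0):
    """X: (T, D) array of flattened input frames. Returns (n_new, D) frames."""
    rng = np.random.default_rng(seed)

    # 1) Appearance model: nonlinear dimension reduction with LLE.
    lle = LocallyLinearEmbedding(n_neighbors=k, n_components=d)
    Y = lle.fit_transform(X)                      # (T, d) latent trajectory

    # 2) Markovian dynamics in latent space: y_{t+1} ~ y_t @ A + noise
    #    (assumed first-order linear model fit by least squares).
    A, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
    resid = Y[1:] - Y[:-1] @ A
    noise_std = resid.std(axis=0)

    # 3) Synthesize a new latent trajectory by iterating the dynamics.
    y = Y[-1].copy()
    new_Y = []
    for _ in range(n_new):
        y = y @ A + rng.normal(scale=noise_std)
        new_Y.append(y.copy())
    new_Y = np.asarray(new_Y)

    # 4) Observation model: neighbor-embedding reconstruction. For each new
    #    latent point, solve for LLE-style weights over its k nearest training
    #    latents and apply the same weights to the corresponding frames.
    nbrs = NearestNeighbors(n_neighbors=k).fit(Y)
    frames = []
    for y_new in new_Y:
        idx = nbrs.kneighbors(y_new[None], return_distance=False)[0]
        Z = Y[idx] - y_new                        # centered neighbors (k, d)
        G = Z @ Z.T                               # local Gram matrix (k, k)
        G += (reg * np.trace(G) + 1e-12) * np.eye(k)  # regularize
        w = np.linalg.solve(G, np.ones(k))
        w /= w.sum()                              # weights sum to one
        frames.append(w @ X[idx])                 # map weights to image space
    return np.asarray(frames)
```

As a usage note, one would call this on a stack of grayscale frames reshaped to (T, height*width) and reshape the returned rows back into images; the neighbor count k, latent dimension d, and regularizer reg are tuning choices, not values given in the abstract.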

Citation (APA)
Guo, W., You, X., Zhu, Z., Mou, Y., & Zheng, D. (2015). Locally linear embedding based dynamic texture synthesis. In Communications in Computer and Information Science (Vol. 546, pp. 287–295). Springer Verlag. https://doi.org/10.1007/978-3-662-48558-3_29
