A compact model for viewpoint dependent texture synthesis


Abstract

A texture synthesis method is presented that generates texture similar to an example image. It is based on the emulation of simple but carefully chosen image intensity statistics. The resulting texture models are compact and no longer require the example image from which they were derived. They make explicit some structural aspects of the textures, and the modeling allows different textures to be knitted together with convincing-looking transition zones. As textures are seldom flat, it is also important to model the 3D effects that appear when textures change under a changing viewpoint. The simulation of such changes is supported by the model, provided examples for the different viewpoints are given.
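The core idea of synthesis by emulating example statistics can be illustrated with a toy sketch. This is not the authors' algorithm: the statistics below (marginal mean/std and mean absolute intensity differences at a few hypothetical pixel-pair offsets) and the greedy pixel-flip optimizer are stand-ins chosen only to show the statistics-matching loop, assuming NumPy is available.

```python
import numpy as np

def intensity_stats(img, offsets):
    """Compact statistic vector: marginal mean/std plus the mean
    absolute intensity difference at each pixel-pair offset.
    (A crude stand-in for the paper's chosen statistics.)"""
    stats = [img.mean(), img.std()]
    for dy, dx in offsets:
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        stats.append(np.abs(img - shifted).mean())
    return np.array(stats)

def synthesize(example, size, offsets, iters=2000, seed=0):
    """Greedy synthesis: start from noise and accept single-pixel
    changes that move the image's statistics closer to the target.
    Note the example image itself is no longer needed once its
    statistic vector has been extracted."""
    rng = np.random.default_rng(seed)
    target = intensity_stats(example, offsets)  # the compact model
    out = rng.integers(0, 256, size=size).astype(float)
    err = np.linalg.norm(intensity_stats(out, offsets) - target)
    for _ in range(iters):
        y, x = rng.integers(0, size[0]), rng.integers(0, size[1])
        old = out[y, x]
        out[y, x] = rng.integers(0, 256)
        new_err = np.linalg.norm(intensity_stats(out, offsets) - target)
        if new_err < err:
            err = new_err           # keep the improving change
        else:
            out[y, x] = old         # revert
    return out, err
```

Because only the statistic vector (a handful of numbers) is stored, the model is compact; viewpoint dependence would then amount to keeping one such vector per example viewpoint.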

Citation (APA)

Zalesny, A., & Van Gool, L. (2001). A compact model for viewpoint dependent texture synthesis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2018, pp. 124–143). Springer Verlag. https://doi.org/10.1007/3-540-45296-6_9
