Existing video frame synthesis methods struggle to improve perceptual quality while preserving semantic representation ability. In this paper, we propose a Progressive Motion-texture Synthesis Network (PMSN) to address this problem. Instead of learning synthesis from scratch, we introduce augmented inputs that compensate for texture details and motion information. Specifically, we present a coarse-to-fine guidance scheme with a carefully designed semantic loss to improve the capability of video frame synthesis. Experiments show that the proposed PMSN delivers superior quantitative results, visual quality, and generalization ability compared with traditional solutions.
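The abstract does not specify the form of the semantic loss. A minimal sketch, assuming it measures the distance between feature maps of the synthesized frame and the ground-truth frame (e.g. activations from a pretrained classifier); the function name, the MSE form, and the toy feature maps are all assumptions for illustration:

```python
import numpy as np

def semantic_loss(feat_pred, feat_target):
    """Hypothetical semantic loss: mean squared distance between
    feature maps of the synthesized and ground-truth frames.
    The paper's exact formulation may differ."""
    return float(np.mean((feat_pred - feat_target) ** 2))

# Toy feature maps standing in for classifier activations
# (channels x height x width).
f_pred = np.ones((4, 8, 8))
f_true = np.zeros((4, 8, 8))
print(semantic_loss(f_pred, f_true))  # 1.0
```

In practice such a loss is computed on intermediate activations of a fixed network so that the synthesized frame is penalized for semantic, not just pixel-level, deviation.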
Citation:
Jin, X., Chen, Z., Liu, S., & Zhou, W. (2018). Augmented coarse-to-fine video frame synthesis with semantic loss. In Lecture Notes in Computer Science (Vol. 11256, pp. 439–452). Springer. https://doi.org/10.1007/978-3-030-03398-9_38