Learning to draw from zero proficiency has always been a challenge for novices, and the process itself can often discourage them from continuing. A method that could automatically transform line shapes into arbitrary styles would therefore be a valuable time-saver. In this paper, drawing on the machine translation approach of the Sequence to Sequence Learning model, we approximately regard the lines in an image as 'words' and long lines as 'sentences', and train on data extracted from paired images. Our model extracts line features and transfers them to the lines that form the input image in order to generate the output image, a process that can be understood as an analogue of machine translation between two languages. Our model achieves promising results, showing that it performs well on line style transfer based on Sequence to Sequence Learning. The method can serve as a complement to GAN-based models and broadens the application and research of image style transfer.
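The abstract does not give implementation details, so the following is only a minimal sketch of the encoder-decoder idea it describes, under the assumption that each line is represented as a sequence of 2-D points ("words") and that a recurrent seq2seq model maps a source-style stroke to the paired target-style stroke; all names, dimensions, and the GRU/MSE choices here are hypothetical illustrations, not the authors' code.

# Minimal sketch (assumed setup): a GRU encoder summarizes a source-style stroke,
# and a GRU decoder emits the corresponding target-style stroke point by point.
import torch
import torch.nn as nn

class LineSeq2Seq(nn.Module):
    def __init__(self, point_dim=2, hidden_dim=128):
        super().__init__()
        self.encoder = nn.GRU(point_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(point_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, point_dim)

    def forward(self, src, tgt_in):
        # src:    (batch, src_len, 2) points of the source-style stroke
        # tgt_in: (batch, tgt_len, 2) target-style points, shifted right (teacher forcing)
        _, h = self.encoder(src)           # h: (1, batch, hidden_dim), stroke summary
        dec_out, _ = self.decoder(tgt_in, h)
        return self.out(dec_out)           # predicted target-style points

# Toy usage with hypothetical stroke lengths extracted from paired images.
model = LineSeq2Seq()
src = torch.randn(4, 30, 2)       # 4 source-style strokes, 30 points each
tgt = torch.randn(4, 30, 2)       # the paired target-style strokes
pred = model(src, tgt[:, :-1])    # predict points 1..29 from teacher-forced input
loss = nn.functional.mse_loss(pred, tgt[:, 1:])
loss.backward()

In this reading, "translation between two languages" corresponds to mapping one stroke's point sequence to another; a GAN could additionally refine the rendered output, which is consistent with the paper's framing of the method as a supplement to GAN models.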
CITATION STYLE
He, W., & Chen, L. (2020). A Research of Neural Style Transfer on Line Structure Based on Sequence to Sequence Learning. IEEE Access, 8, 112309–112322. https://doi.org/10.1109/ACCESS.2020.3002572