A Short Introduction

Neural style transfer is an optimization technique that takes two images, a content image and a style reference image (such as an artwork by a famous painter), and blends them together so that the output image looks like the content image but “painted” in the style of the style image. This is well explained in “Image Style Transfer Using Convolutional Neural Networks” by Gatys et al. I implemented style transfer on my own in PyTorch as part of the PyTorch Scholarship Program on Udacity, and the link to the notebook is attached at the bottom of the post.
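The blending is driven by a weighted loss: one term keeps the output close to the content image's features, the other matches the style image's feature statistics (Gram matrices). Below is a minimal sketch of that loss in PyTorch; the feature tensors and the weights `alpha` and `beta` are illustrative stand-ins, not values from my notebook.

```python
import torch

def gram_matrix(features):
    # features: (batch, channels, height, width);
    # the Gram matrix captures correlations between feature channels
    b, c, h, w = features.size()
    flat = features.view(b * c, h * w)
    return flat @ flat.t() / (c * h * w)

# hypothetical feature maps standing in for VGG activations
content_feat = torch.randn(1, 64, 32, 32)
style_feat = torch.randn(1, 64, 32, 32)
target_feat = content_feat.clone().requires_grad_(True)

# content loss: stay close to the content image's features
content_loss = torch.mean((target_feat - content_feat) ** 2)
# style loss: match the style image's Gram matrix
style_loss = torch.mean((gram_matrix(target_feat) - gram_matrix(style_feat)) ** 2)

# weighted blend of the two objectives (weights are illustrative)
alpha, beta = 1.0, 1e6
total_loss = alpha * content_loss + beta * style_loss
total_loss.backward()  # gradients flow back to the target image
```

In the full algorithm this loss is minimized over the pixels of the target image itself (typically with Adam or L-BFGS), while the network weights stay frozen.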


In this notebook demonstrating style transfer, I have used the features found in the 19-layer VGG network (VGG19), which comprises a series of convolutional and pooling layers followed by a few fully connected layers. In the image below, the convolutional layers are named by stack and by their order within the stack: conv_1_1 is the first convolutional layer in the first stack, and conv_2_1 is the first convolutional layer in the second stack. The deepest convolutional layer in the network is conv_5_4. I have used a pre-trained VGG19 to extract content and style features from an input image.

Link to the notebook : https://github.com/pswaldia/NeuralStyleTransfer

The following are examples of the kind of art that can be produced using the deep learning technique described above.