We propose a controllable style transfer framework based on Implicit Neural Representation (INR) that controls the stylized output pixel-wise via test-time training. Unlike traditional image optimization methods, which often suffer from unstable convergence, and learning-based methods, which require intensive training and have limited generalization ability, we present a model optimization framework that optimizes the neural network at test time with explicit loss functions for style transfer. Once test-time trained, thanks to the flexibility of the INR-based model, our framework can precisely control the stylized image in a pixel-wise manner and freely adjust the image resolution without further optimization or training. We demonstrate several applications in Section 6.
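To make the test-time pipeline concrete, below is a minimal PyTorch sketch of this idea, not the authors' released implementation: a small coordinate-based MLP (here `CoordMLP`, with a standard sinusoidal positional encoding) is optimized for a single content/style pair against VGG-19 feature and Gram-matrix losses. The architecture, layer choice, and loss weights are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): a coordinate MLP with sinusoidal
# positional encoding, test-time optimized against single-layer VGG-19
# content and Gram-matrix style losses for ONE content/style pair.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class CoordMLP(nn.Module):
    """Maps (x, y) coordinates in [-1, 1]^2 to RGB values."""
    def __init__(self, hidden=256, n_freqs=10):
        super().__init__()
        self.n_freqs = n_freqs
        self.net = nn.Sequential(
            nn.Linear(4 * n_freqs, hidden), nn.ReLU(),  # sin/cos x 2 coords
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, coords):                # coords: (N, 2)
        freqs = 2.0 ** torch.arange(self.n_freqs, device=coords.device) * torch.pi
        enc = coords[..., None] * freqs       # (N, 2, n_freqs)
        enc = torch.cat([enc.sin(), enc.cos()], dim=-1).flatten(1)
        return self.net(enc)

def make_grid(h, w, device):
    """Dense coordinate grid over the image plane, flattened to (h*w, 2)."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=device),
        torch.linspace(-1, 1, w, device=device), indexing="ij")
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)

def gram(feat):                               # feat: (1, C, H, W)
    f = feat.flatten(2).squeeze(0)            # (C, H*W)
    return f @ f.t() / f.shape[1]

def stylize(content, style, h, w, steps=500, style_weight=1e4, device="cuda"):
    """Test-time optimize an INR for one content (1,3,h,w) / style pair.
    Inputs are assumed preprocessed; ImageNet normalization omitted for brevity."""
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:22]
    vgg = vgg.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)
    inr = CoordMLP().to(device)
    opt = torch.optim.Adam(inr.parameters(), lr=1e-4)
    coords = make_grid(h, w, device)
    with torch.no_grad():
        c_feat, s_gram = vgg(content), gram(vgg(style))
    for _ in range(steps):
        out = inr(coords).t().reshape(1, 3, h, w)
        feat = vgg(out)
        loss = F.mse_loss(feat, c_feat) + style_weight * F.mse_loss(gram(feat), s_gram)
        opt.zero_grad(); loss.backward(); opt.step()
    return inr   # a continuous image: query at any resolution afterwards
```

Because the trained `inr` maps continuous coordinates to RGB, it can afterwards be queried on a coordinate grid of any size, which is what enables resolution changes without further optimization.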
Figure: (a) Image optimization-based methods optimize the output image itself at test time with explicit content and style losses; they often generate artifacts and show limited performance due to the lack of an image prior. (b) Learning-based methods require intensive offline training on large-scale data and use pre-trained, fixed networks at test time. (c) Our model learns the image prior with coordinate-based neural networks at test time and yields a controllable, continuous representation.
Figure: Each area within the image is stylized with a different degree α (from the left: 0.25, 1.0, 0.0, 0.5). Our approach precisely transfers the desired degree of style to each target area, whereas CNN-based methods show unclear separation at the boundaries where the degree of style changes, since their generators perform kernel-wise operations on the given image features.
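Here is a sketch of how such per-area control could look at inference time, assuming the INR was test-time trained with an extra style-degree input α; `CondCoordMLP` is a hypothetical variant of `CoordMLP` above, not the authors' exact conditioning scheme. Each pixel is queried with its own α, so the style degree changes exactly at band boundaries rather than being smeared by convolution kernels.

```python
# Hypothetical conditional INR and per-pixel style-degree rendering.
# Assumes `make_grid` from the sketch above; `CondCoordMLP` is illustrative.
import torch
import torch.nn as nn

class CondCoordMLP(nn.Module):
    """CoordMLP variant that additionally takes a style degree alpha per point."""
    def __init__(self, hidden=256, n_freqs=10):
        super().__init__()
        self.n_freqs = n_freqs
        self.net = nn.Sequential(
            nn.Linear(4 * n_freqs + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, coords, alpha):         # coords: (N, 2), alpha: (N, 1)
        freqs = 2.0 ** torch.arange(self.n_freqs, device=coords.device) * torch.pi
        enc = coords[..., None] * freqs
        enc = torch.cat([enc.sin(), enc.cos()], dim=-1).flatten(1)
        return self.net(torch.cat([enc, alpha], dim=-1))

def render_with_alpha_map(inr, h, w, alpha_map, device="cuda"):
    """Query the trained INR once per pixel, each pixel with its own alpha."""
    coords = make_grid(h, w, device)          # (h*w, 2)
    alphas = alpha_map.to(device).reshape(-1, 1)
    with torch.no_grad():
        rgb = inr(coords, alphas)             # (h*w, 3)
    return rgb.t().reshape(1, 3, h, w)

# Four vertical bands with degrees 0.25, 1.0, 0.0, 0.5, as in the figure.
h, w = 256, 512
alpha_map = torch.empty(h, w)
for i, a in enumerate([0.25, 1.0, 0.0, 0.5]):
    alpha_map[:, i * w // 4:(i + 1) * w // 4] = a
```

Since nothing ties the trained model to a fixed grid, the same `inr` can also be rendered at a larger resolution simply by passing bigger `h`, `w` and a correspondingly resized `alpha_map`.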
We propose an INR-based style transfer framework with test-time optimization.
S. Kim, Y. Min, Y. Jung, Y. Uh, S. Kim. Controllable Style Transfer via Test-time Training of Implicit Neural Representation. arXiv.