We present a method to expand the generated domain of a pretrained generator while
respecting its existing structure and knowledge. To this end, we identify dormant regions
of the model's latent space and repurpose only them to model new concepts.
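As a toy illustration of what "dormant" could mean, consider a linear stand-in generator (our own construction, not the paper's model): latent directions whose perturbation barely changes the output are natural candidates for repurposing.

```python
import numpy as np

# Toy stand-in for a generator: a linear map from an 8-D latent space to
# 16-D "images". By construction (our assumption, for illustration),
# directions 5..7 have almost no effect on the output -- they are "dormant".
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
W[:, 5:] *= 1e-3

def generate(z):
    return W @ z

# Probe each latent direction: perturb a base code along it and measure
# how far the output moves. Low-impact directions are dormant candidates.
base = rng.normal(size=8)
impact = np.array([
    np.linalg.norm(generate(base + np.eye(8)[i]) - generate(base))
    for i in range(8)
])

dormant = np.where(impact < 0.1 * impact.max())[0]
print(dormant.tolist())  # -> [5, 6, 7]
```

In a real generator the probe would of course be nonlinear and measured perceptually, but the flag-low-impact-directions logic is the same.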
We present the first personalized face generator, trained for a specific individual from
~100 of their images.
The model now holds a personalized prior, faithfully representing their unique appearance.
We then leverage the personalized prior to solve a number of ill-posed image enhancement and editing tasks.
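How a generative prior resolves an ill-posed task can be sketched in a toy setting (a linear "generator" of our own, not the personalized model itself): inpainting becomes a search for the latent code whose output matches the observed pixels, with the generator's range filling in the missing region.

```python
import numpy as np

# Toy setup (our assumption): a linear "generator" G maps a 4-D latent
# code to 12 "pixels", and pixels 4..7 of the target image are missing.
rng = np.random.default_rng(1)
G = rng.normal(size=(12, 4))
z_true = rng.normal(size=4)
image = G @ z_true
mask = np.ones(12)
mask[4:8] = 0.0  # unobserved pixels

# Gradient descent on the masked reconstruction loss ||mask*(G z - image)||^2.
# Only observed pixels constrain z; the prior (G's range) fills the rest.
Gm = mask[:, None] * G
lr = 1.0 / np.linalg.norm(Gm, 2) ** 2   # step size from the spectral norm
z = np.zeros(4)
for _ in range(5000):
    z -= lr * (Gm.T @ (Gm @ z - mask * image))

restored = G @ z  # masked pixels recovered via the prior
```

Because the toy system is consistent and well-posed once restricted to the generator's range, the masked pixels are recovered almost exactly; the paper's setting replaces G with the personalized generator and the loss with perceptual terms.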
Two models are considered aligned if they share the same architecture and one of them (the
child) is obtained from the other (the parent) via fine-tuning to another domain, a common
practice in transfer learning. In this paper, we perform an in-depth study of the properties
and applications of aligned models.
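A minimal numerical sketch of alignment, under our own toy assumption that fine-tuning shifts the weights only slightly: the same latent code then maps to closely corresponding outputs in parent and child.

```python
import numpy as np

# Toy linear "generators" (our construction): the child is the parent
# plus a small fine-tuning perturbation, mimicking transfer to a nearby domain.
rng = np.random.default_rng(2)
parent = rng.normal(size=(6, 3))
child = parent + 0.05 * rng.normal(size=(6, 3))

z = rng.normal(size=3)                 # one shared latent code
out_p, out_c = parent @ z, child @ z

# Cosine similarity between the two outputs: near 1 when the child stays
# close to the parent -- the correspondence the alignment definition targets.
cos = out_p @ out_c / (np.linalg.norm(out_p) * np.linalg.norm(out_c))
```

In real aligned StyleGAN pairs the correspondence is semantic rather than literal, but the shared latent-to-output mapping is what makes cross-model tricks (e.g., feeding one model's codes to the other) possible.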
We propose a novel method for solving regression tasks using few-shot or weak supervision.
At the core of our method is the observation that the distance of a latent code from a
semantic hyperplane is roughly linearly correlated with the magnitude of the corresponding
semantic property in the image generated from that code.
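This observation can be sketched with synthetic data (the hyperplane, labels, and linear relationship below are our own construction): compute signed distances to the hyperplane and fit a one-dimensional linear regressor from a handful of labeled samples.

```python
import numpy as np

# A semantic hyperplane in a toy 16-D latent space: unit normal n, offset b.
rng = np.random.default_rng(3)
n = rng.normal(size=16)
n /= np.linalg.norm(n)
b = 0.2

def signed_distance(w):
    # Signed distance of latent code w from the hyperplane {w : w.n = b}.
    return w @ n - b

# Synthetic "few-shot" supervision (our construction): the attribute value
# is linear in the distance, plus a little label noise.
codes = rng.normal(size=(10, 16))
d = codes @ n - b
labels = 2.0 * d + 5.0 + 0.01 * rng.normal(size=10)

# A 1-D linear fit turns hyperplane distances into a regressor.
a, c = np.polyfit(d, labels, 1)
pred = a * signed_distance(rng.normal(size=16)) + c
```

With only ten labeled samples the fit recovers the underlying slope and intercept almost exactly, which is the point: the hyperplane geometry does most of the work, and supervision only calibrates a line.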
We identify the existence of distortion-editability and distortion-perception tradeoffs
within the StyleGAN latent space when inverting real images. Accordingly, we suggest two
principles for designing encoders that facilitate editing on real images by balancing these
tradeoffs.
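A toy version of the tradeoff, using ridge-regularized inversion of a linear "generator" (our stand-in, not the suggested encoder design): pulling latent codes toward a well-behaved region (here, the origin) trades reconstruction accuracy for codes that sit closer to the prior.

```python
import numpy as np

# Toy setup (our assumption): invert a target x through a linear
# "generator" G while regularizing the latent code toward the origin.
rng = np.random.default_rng(4)
G = rng.normal(size=(10, 6))
x = rng.normal(size=10)

def invert(lam):
    # Closed-form ridge solution of ||G z - x||^2 + lam * ||z||^2.
    z = np.linalg.solve(G.T @ G + lam * np.eye(6), G.T @ x)
    distortion = np.linalg.norm(G @ z - x)   # reconstruction error
    deviation = np.linalg.norm(z)            # crude proxy for leaving the prior region
    return distortion, deviation

d0, e0 = invert(0.01)   # weak regularization: low distortion, far-out codes
d1, e1 = invert(10.0)   # strong regularization: higher distortion, near-prior codes
```

Sweeping the regularization weight traces the tradeoff curve directly: distortion rises monotonically while the code norm shrinks, mirroring the distortion-editability tension the encoder principles aim to balance.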