I am a Ph.D. student in Computer Science at Tel-Aviv University, advised by Prof. Daniel Cohen-Or.
My main research interests are computer vision, computer graphics and representation
learning.
Before that, I was fortunate to collaborate with Kfir Aberman while interning at Google
Research.
Previously, I received an M.Sc. (summa cum laude) in Computer Science from Tel-Aviv University
and a B.Sc. (cum laude) in Applied Mathematics from Bar-Ilan University.
We present a method to expand the generated domain of a pretrained generator while
respecting its existing structure and knowledge. To this end, we identify dormant regions
of the model's latent space and repurpose only them to model new concepts.
We present the first personalized face generator, trained for a specific individual from
~100 of their images.
The model now holds a personalized prior, faithfully representing their unique appearance.
We then leverage the personalized prior to solve a number of ill-posed image enhancement and
editing tasks.
Two models are considered aligned if they share the same architecture and one of them (the
child) is obtained from the other (the parent) via fine-tuning to another domain, a common
practice in transfer learning.
In this paper, we perform an in-depth study of the properties and applications of aligned
generative models.
We propose a novel method for solving regression tasks using few-shot or weak supervision.
At the core of our method is the observation that the distance of a latent code from a
semantic hyperplane is roughly linearly correlated with the magnitude of that semantic
property in the image corresponding to the latent code.
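A minimal sketch of this observation, with hypothetical names (the hyperplane normal `w` would in practice come from a linear classifier, e.g. an SVM, trained on binary attribute labels in the latent space; here it is random for illustration):

```python
import numpy as np

def signed_distance(latent, w, b):
    """Signed distance of `latent` from the hyperplane w.x + b = 0."""
    return (latent @ w + b) / np.linalg.norm(w)

rng = np.random.default_rng(0)
w = rng.normal(size=512)           # hyperplane normal (hypothetical; e.g. from a linear SVM)
b = 0.0
codes = rng.normal(size=(5, 512))  # latent codes of five images (placeholder data)

# Under the (approximate) linearity assumption, attribute magnitude ~ a * distance + c,
# so the distances can be calibrated to real-valued labels from only a few examples.
distances = signed_distance(codes, w, b)
print(distances)
```

Because the mapping from distance to attribute magnitude is assumed linear, calibrating it requires fitting only two scalars, which is what makes few-shot or weakly supervised regression feasible.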
We identify the existence of distortion-editability and distortion-perception tradeoffs
within the StyleGAN latent space for inverted images. Accordingly, we suggest two principles
for designing encoders that facilitate editing of real images by balancing these tradeoffs.