Yotam Nitzan

I am a Ph.D. student in Computer Science at Tel-Aviv University, advised by Prof. Daniel Cohen-Or. My main research interests are computer vision, computer graphics, and representation learning.

I'm currently interning at Adobe Research, working with Eli Shechtman, Michaël Gharbi, Taesung Park, Richard Zhang, and Jun-Yan Zhu.

Before that, I was fortunate to collaborate with Kfir Aberman while interning at Google Research.

Previously, I received an M.Sc. (summa cum laude) in Computer Science from Tel-Aviv University and a B.Sc. (cum laude) in Applied Mathematics from Bar-Ilan University.

Email  /  Github  /  Google Scholar  /  Twitter


MyStyle: A Personalized Generative Prior
Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, Daniel Cohen-Or
SIGGRAPH Asia, 2022 (Journal Track)
project page / arXiv / video / code

We present the first personalized face generator, trained for a specific individual from roughly 100 of their images. The resulting model holds a personalized prior that faithfully represents their unique appearance. We then leverage this personalized prior to solve a number of ill-posed image enhancement and editing tasks.

StyleAlign: Analysis and Applications of Aligned StyleGAN Models
Zongze Wu, Yotam Nitzan, Eli Shechtman, Dani Lischinski
ICLR, 2022 (Oral)
arXiv / video / code

Two models are considered aligned if they share the same architecture and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. In this paper, we perform an in-depth study of the properties and applications of aligned generative models.

LARGE: Latent-Based Regression through GAN Semantics
Yotam Nitzan*, Rinon Gal*, Ofir Brenner, Daniel Cohen-Or
CVPR, 2022
arXiv / code

We propose a novel method for solving regression tasks using few-shot or weak supervision. At the core of our method is the observation that the distance of a latent code from a semantic hyperplane is roughly linearly correlated with the magnitude of that semantic property in the image corresponding to the latent code.
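The core observation above can be sketched in a few lines. This is a minimal, self-contained illustration with synthetic data, not the paper's implementation: the hyperplane normal and the simulated attribute values are assumptions made for the example, standing in for a semantic direction found in a real GAN latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512                                   # latent dimensionality (StyleGAN-like)
n = np.zeros(d)                           # unit normal of a semantic hyperplane
n[0] = 1.0                                # (illustrative choice; in practice this
b = 0.0                                   # direction comes from the latent space)

# Few-shot supervision: latent codes with known attribute values. Here we
# simulate attributes that grow roughly linearly with signed distance.
w = rng.normal(size=(200, d))
signed_dist = w @ n + b
attribute = 2.0 * signed_dist + 5.0 + rng.normal(scale=0.1, size=200)

# Fit attribute ~ a * distance + c by least squares.
A = np.stack([signed_dist, np.ones_like(signed_dist)], axis=1)
(a, c), *_ = np.linalg.lstsq(A, attribute, rcond=None)

# Predict the attribute for a new latent code from its distance alone.
w_new = rng.normal(size=d)
pred = a * (w_new @ n + b) + c
```

Because the relationship is (roughly) linear, only a handful of labeled samples are needed to calibrate the regressor, which is what enables the few-shot setting.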

Designing an Encoder for StyleGAN Image Manipulation
Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or
arXiv / code

We identify the existence of distortion-editability and distortion-perception tradeoffs within the StyleGAN latent space on inverted images. Accordingly, we suggest two principles for designing encoders that are suitable for facilitating editing on real images by balancing these tradeoffs.

Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation
Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, Daniel Cohen-Or
CVPR, 2021
project page / arXiv / code

A generic image-to-image translation framework, based on an encoder that maps images directly into the latent space of a pretrained StyleGAN2 generator.

Face Identity Disentanglement via Latent Space Mapping
Yotam Nitzan, Amit Bermano, Yangyan Li, Daniel Cohen-Or
SIGGRAPH Asia, 2020
project page / arXiv / code

We propose to disentangle identity from other facial attributes by mapping images directly into the latent space of a pretrained StyleGAN generator.

Website template courtesy of Jon Barron.