Yotam Nitzan

I am a Research Scientist at Adobe in San Francisco.

Previously, I completed my Ph.D. in Computer Science at Tel-Aviv University, where I was advised by Prof. Daniel Cohen-Or. I interned at Google Research and twice at Adobe Research. Before that, I received an M.Sc. in Computer Science from Tel-Aviv University and a B.Sc. in Applied Mathematics from Bar-Ilan University.

My research focuses on visual generative models, aiming to make them more interactive, intuitive, and controllable.

Email  /  Github  /  Google Scholar  /  Twitter

Selected Publications

Lazy Diffusion Transformer for Interactive Image Editing
Yotam Nitzan, Zongze Wu, Richard Zhang, Eli Shechtman, Daniel Cohen-Or, Taesung Park, Michaël Gharbi
ECCV, 2024
project page / arXiv

LazyDiffusion is an efficient image editing architecture that updates only user-specified regions. Using a context encoder and a diffusion-based transformer decoder, it balances global context awareness with localized generation, achieving state-of-the-art quality and up to 10× speed improvements.

Domain Expansion of Image Generators
Yotam Nitzan, Michaël Gharbi, Richard Zhang, Taesung Park, Jun-Yan Zhu, Daniel Cohen-Or, Eli Shechtman
CVPR, 2023
project page / arXiv / code

We present a method to expand the generated domain of a pretrained generator while respecting its existing structure and knowledge. To this end, we identify dormant regions of the model's latent space and repurpose only them to model new concepts.

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN
Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or
Eurographics, 2022 (STARs)
arXiv

We survey the family of StyleGAN models and how they have been employed for downstream applications since their inception.

MyStyle: A Personalized Generative Prior
Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, Daniel Cohen-Or
SIGGRAPH Asia, 2022 (Journal Track)
project page / arXiv / video / code

We present the first personalized face generator, trained for a specific individual from ~100 of their images. The resulting model holds a personalized prior, faithfully representing that individual's unique appearance. We then leverage this prior to solve a number of ill-posed image enhancement and editing tasks.

StyleAlign: Analysis and Applications of Aligned StyleGAN Models
Zongze Wu, Yotam Nitzan, Eli Shechtman, Dani Lischinski
ICLR, 2022 (Oral)
arXiv / video / code

Two models are considered aligned if they share the same architecture and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. In this paper, we perform an in-depth study of the properties and applications of aligned generative models.

LARGE: Latent-Based Regression through GAN Semantics
Yotam Nitzan*, Rinon Gal*, Ofir Brenner, Daniel Cohen-Or
CVPR, 2022
arXiv / code

We propose a novel method for solving regression tasks using few-shot or weak supervision. At the core of our method is the observation that the distance of a latent code from a semantic hyperplane is roughly linearly correlated with the magnitude of that semantic property in the corresponding image.

Designing an Encoder for StyleGAN Image Manipulation
Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or
SIGGRAPH, 2021
arXiv / code

We identify the existence of distortion-editability and distortion-perception tradeoffs within the StyleGAN latent space on inverted images. Accordingly, we suggest two principles for designing encoders that are suitable for facilitating editing on real images by balancing these tradeoffs.

Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation
Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, Daniel Cohen-Or
CVPR, 2021
project page / arXiv / code

A generic image-to-image framework, based on an encoder that directly maps into the latent space of a pretrained generator, StyleGAN2.

Face Identity Disentanglement via Latent Space Mapping
Yotam Nitzan, Amit Bermano, Yangyan Li, Daniel Cohen-Or
SIGGRAPH Asia, 2020
project page / arXiv / code

We propose to disentangle identity from other facial attributes by mapping directly into the latent space of a pretrained generator, StyleGAN.


Website template courtesy of Jon Barron.