The article introduces E2GAN, an approach for efficiently training GANs for image-to-image translation. Training data is distilled from large-scale text-to-image diffusion models such as Stable Diffusion. E2GAN constructs a base GAN model with generalized features that can be adapted to different concepts through fine-tuning, eliminating the need to train from scratch. It identifies the crucial layers within the base model and fine-tunes them with Low-Rank Adaptation (LoRA), reducing overall training time. The resulting models can perform real-time, high-quality image editing on mobile devices with significantly reduced training cost and storage per concept.
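To illustrate the LoRA idea mentioned above, here is a minimal PyTorch sketch of a low-rank adapter wrapped around a frozen linear layer; the class and parameter names are illustrative assumptions, not taken from the E2GAN implementation:

```python
# Minimal LoRA (Low-Rank Adaptation) sketch: the pretrained weight is
# frozen, and only a small low-rank update B @ A is trained per concept.
# Names (LoRALinear, rank, alpha) are illustrative, not from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the low-rank factors train.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank update applied to x.
        # lora_b starts at zero, so training begins from the base model.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(64, 64), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
full = sum(p.numel() for p in layer.base.parameters())
print(trainable, full)  # far fewer trainable parameters than the full layer
```

Because only the two small factors are stored per concept, each fine-tuned concept costs a fraction of the storage of a full model copy, which is the source of the training-cost and storage savings the article describes.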
Publication date: 12 Jan 2024
Project Page: https://yifanfanfanfan.github.io/e2gan/
Paper: https://arxiv.org/pdf/2401.06127