The article discusses a new approach to generative modeling based on training a neural network to be idempotent: applying the operator repeatedly does not change the result beyond the first application, i.e., f(f(z)) = f(z). The proposed model is trained to map a source distribution (e.g., noise) to a target data distribution. This strategy yields a model that generates an output in a single step, maintains a consistent latent space, and also allows sequential applications for refinement. The model also adeptly projects corrupted or modified data back onto the target manifold. This work is a step toward a global projector that can map any input into a target data distribution.
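The defining property here is idempotence: applying the map a second time leaves the output unchanged. As a minimal illustration of the property itself (not the paper's neural network, just a toy linear analogue), a projection matrix in NumPy behaves exactly this way:

```python
import numpy as np

# A projection onto the x-axis in R^2 is idempotent: P @ P == P.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

z = np.array([3.0, 4.0])   # arbitrary "source" point off the target subspace
once = P @ z               # first application lands on the target subspace
twice = P @ once           # second application changes nothing

assert np.allclose(once, twice)   # idempotence: f(f(z)) == f(z)
```

In the paper's setting, the learned network plays the role of P: points already on the data manifold are (approximately) fixed points, while off-manifold inputs such as noise or corrupted images get projected onto it.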
Publication date: 2 Nov 2023
Project Page: https://assafshocher.github.io/IGN/
Paper: https://arxiv.org/pdf/2311.01462