The article ‘Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models’ addresses the problem of synthesizing multi-view optical illusions: images whose appearance changes under a transformation, such as a flip or a rotation. The authors propose a zero-shot method for creating these illusions with off-the-shelf text-to-image diffusion models. At each step of the reverse diffusion process, the method estimates the noise from the different views of the noisy image, combines these noise estimates, and takes a denoising step. Qualitative and quantitative results demonstrate the effectiveness and flexibility of their method.
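
To make the denoising loop concrete, here is a minimal PyTorch sketch of the idea, not the authors' released code: a generic text-conditioned noise predictor `noise_pred`, a list of view transforms paired with their inverses, and a plain DDPM-style update are all illustrative assumptions filled in for the example.

```python
import torch

def multi_view_denoise(noise_pred, views, prompts, alphas_cumprod, shape, generator=None):
    """Sketch of multi-view illusion sampling (assumed interfaces, not the paper's API).

    noise_pred(x_t, t, prompt) -> predicted noise with the same shape as x_t; a stand-in
        for a text-conditioned diffusion model.
    views: list of (transform, inverse_transform) pairs acting on image tensors, e.g.
        (lambda x: torch.rot90(x, 1, (-2, -1)), lambda x: torch.rot90(x, -1, (-2, -1))).
    prompts: one text prompt per view.
    alphas_cumprod: 1-D tensor of cumulative alpha-bar values for the noise schedule.
    """
    device = alphas_cumprod.device
    T = alphas_cumprod.shape[0]
    x = torch.randn(shape, device=device, generator=generator)  # start from pure noise

    for t in reversed(range(T)):
        # Estimate the noise in every view of the current noisy image.
        eps_estimates = []
        for (view, inv_view), prompt in zip(views, prompts):
            eps_v = noise_pred(view(x), t, prompt)   # predict noise in the transformed view
            eps_estimates.append(inv_view(eps_v))    # map the estimate back to the base view
        eps = torch.stack(eps_estimates).mean(dim=0)  # combine the estimates by averaging

        # Plain DDPM-style update (illustrative; any standard sampler could be used here).
        a_bar = alphas_cumprod[t]
        a_bar_prev = alphas_cumprod[t - 1] if t > 0 else alphas_cumprod.new_tensor(1.0)
        alpha_t = a_bar / a_bar_prev
        x = (x - (1 - alpha_t) / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(alpha_t)
        if t > 0:
            x = x + torch.sqrt(1 - alpha_t) * torch.randn_like(x)

    return x
```

As the paper discusses, only transformations that preserve the noise statistics, such as rotations, flips, and pixel permutations, are suitable as views, since each transformed image must still look like a valid noisy sample to the diffusion model.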


Publication date: 29 Nov 2023
Project Page: https://dangeng.github.io/visual_anagrams/
Paper: https://arxiv.org/pdf/2311.17919