The paper presents Adv-Diffusion, a framework for crafting adversarial attacks on face recognition models. These attacks add subtle perturbations to a source face image so that the target model misidentifies it. Existing methods for generating adversarial face images suffer from low transferability across models and high detectability. Adv-Diffusion addresses both issues by generating perturbations in the latent space rather than the raw pixel space, leveraging the latent diffusion model's strong inpainting capabilities. This design improves attack transferability while keeping the perturbations visually inconspicuous. The method is evaluated on the public FFHQ and CelebA-HQ datasets, where it outperforms state-of-the-art baselines.
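The paper's actual pipeline relies on latent diffusion inpainting with identity-aware guidance; as a rough illustration of the core idea only, the PyTorch sketch below optimizes a bounded perturbation on a *latent* code rather than on raw pixels. The `decoder` and `fr_model` callables, the loss, and all hyperparameters are assumptions standing in for any pretrained latent decoder and target face-recognition embedder, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def latent_space_attack(z_src, target_emb, decoder, fr_model,
                        steps=50, lr=0.01, eps=0.05):
    """Hypothetical sketch: perturb the latent code (not pixels) so the
    decoded face matches a target identity embedding.

    z_src      -- latent code of the source image (tensor)
    target_emb -- identity embedding to impersonate (tensor)
    decoder    -- assumed callable mapping latent -> image (e.g. an LDM/VAE decoder)
    fr_model   -- assumed callable mapping image -> identity embedding
    """
    delta = torch.zeros_like(z_src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        img = decoder(z_src + delta)          # decode the perturbed latent
        emb = fr_model(img)                   # embed with the target FR model
        # Impersonation-style objective: maximize cosine similarity
        loss = -F.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # keep the latent perturbation small
    return (z_src + delta).detach()
```

Constraining the perturbation in latent space, rather than clipping pixels, is what lets the decoded image stay semantically plausible and hard to detect; the paper replaces this plain gradient loop with diffusion-based inpainting to the same end.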


Publication date: 19 Dec 2023
Project Page: https://github.com/kopper-xdu/Adv-Diffusion
Paper: https://arxiv.org/pdf/2312.11285