The article presents a novel adversarial defense method that leverages sample distribution shifts and pretrained diffusion models. The method exploits the discrepancy between the distributions of normal and adversarial samples to counter adversarial attacks while balancing model accuracy, robustness, and generalization. Evaluated on the CIFAR10 and ImageNet30 datasets, it achieves high accuracy, outperforms existing defenses, and remains effective even when attackers are aware of the defense, addressing gaps left by traditional approaches to model robustness and generalization.
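The summary describes the mechanism only at a high level. For intuition, below is a minimal sketch of diffusion-based purification, a common way to exploit the distribution gap between clean and adversarial samples: the (possibly adversarial) input is partially noised with the forward diffusion process and then denoised with a pretrained diffusion model, pulling it back toward the clean data distribution before classification. The `pretrained_denoiser`, the DDPM-style noise schedule `betas`, and the diffusion depth `t_star` are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def purify(x, pretrained_denoiser, betas, t_star=100):
    """Diffuse input x to step t_star, then denoise back to step 0.

    x:                    batch of (possibly adversarial) images, shape (B, C, H, W)
    pretrained_denoiser:  callable (x_t, t) -> predicted noise epsilon (assumed interface)
    betas:                1-D tensor with the DDPM noise schedule
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Forward diffusion: sample x_t ~ q(x_t | x_0) at t = t_star.
    noise = torch.randn_like(x)
    x_t = torch.sqrt(alpha_bars[t_star]) * x + torch.sqrt(1.0 - alpha_bars[t_star]) * noise

    # Reverse process: iteratively denoise from t_star down to 0 (ancestral DDPM sampling).
    for t in reversed(range(t_star + 1)):
        eps_hat = pretrained_denoiser(x_t, t)  # model's noise prediction at step t
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x_t - coef * eps_hat) / torch.sqrt(alphas[t])
        if t > 0:
            x_t = mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t  # purified sample, passed to the classifier afterwards
```

Choosing `t_star` trades off how much adversarial perturbation is washed out against how much of the original image content is preserved; the defended classifier then operates on the purified output.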
Publication date: 23 Nov 2023
Project Page: https://arxiv.org/abs/2311.13841v1
Paper: https://arxiv.org/pdf/2311.13841