The article is an extensive survey of the transferability of adversarial examples in deep neural networks (DNNs). It discusses the vulnerability of DNNs to adversarial examples: inputs altered with small, often imperceptible perturbations that cause a model to make wrong predictions, raising safety concerns for security-critical applications. The transferability of these adversarial examples, whereby a perturbation crafted against one model can also deceive another, is an intriguing property that enables ‘black-box’ attacks: an attacker can fool a target model without access to its parameters or gradients by attacking a substitute (surrogate) model instead. The survey categorizes existing methods for enhancing adversarial transferability and discusses the principles guiding each approach. The authors also extend their discussion to other vision tasks and beyond, highlighting the importance of fortifying DNNs against adversarial vulnerabilities.
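To make the transfer setting concrete, below is a minimal sketch of a transfer-based black-box attack in PyTorch: an adversarial example is crafted with the Fast Gradient Sign Method (FGSM) against a white-box surrogate model and then evaluated on a separate target model. The surrogate/target pairing (ResNet-18 attacking VGG-16), the perturbation budget, and the dummy input are illustrative assumptions, not details taken from the paper.

```python
import torch
import torchvision.models as models

# Surrogate (white-box) and target (black-box) models; this pairing is an
# illustrative assumption, not one taken from the survey.
surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an adversarial example with the Fast Gradient Sign Method (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the surrogate's loss,
    # then clamp back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Dummy tensor standing in for a preprocessed image batch.
x = torch.rand(1, 3, 224, 224)
y = surrogate(x).argmax(dim=1)        # treat the surrogate's prediction as the label

# White-box attack on the surrogate only; the target's gradients are never queried.
x_adv = fgsm_attack(surrogate, x, y)

print("target prediction (clean):      ", target(x).argmax(dim=1).item())
print("target prediction (adversarial):", target(x_adv).argmax(dim=1).item())
```

If the target's prediction on `x_adv` differs from its prediction on the clean input, the perturbation has transferred; the methods the survey categorizes are, in essence, techniques for raising exactly this transfer rate.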

Publication date: 26 Oct 2023
arXiv page: https://arxiv.org/abs/2310.17626v1
Paper: https://arxiv.org/pdf/2310.17626