The article discusses the vulnerability of deep neural networks, specifically in semantic segmentation tasks, to adversarial attacks: input perturbations that lead to incorrect predictions. The authors propose uncertainty-based weighting schemes for the loss functions of such attacks, which assign higher weight to pixel classifications that are easy to perturb and zero weight to those already confidently misclassified. These schemes can be integrated into the loss function of various adversarial attacks with minimal computational overhead, leading to significantly improved perturbation performance. This development matters because adversarial attacks pose a risk in safety-related applications such as automated driving; efficient defense strategies against them are therefore of utmost interest.
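
The summary describes the weighting idea only at a high level, so the following PyTorch sketch is an illustrative reconstruction rather than the paper's exact scheme: it uses normalized softmax entropy as the uncertainty proxy for "easily perturbed" pixels, and a hypothetical `confidence_threshold` parameter to zero-weight pixels that are already confidently misclassified. The model, tensor shapes, and demo values are all assumptions for illustration.

```python
import math

import torch
import torch.nn.functional as F


def uncertainty_weighted_loss(logits, targets, confidence_threshold=0.9):
    """Per-pixel attack loss weighted by prediction uncertainty (sketch).

    logits:  (B, C, H, W) raw network outputs
    targets: (B, H, W) ground-truth class indices
    """
    # Per-pixel cross-entropy, kept unreduced so each pixel can be weighted.
    pixel_loss = F.cross_entropy(logits, targets, reduction="none")  # (B, H, W)

    probs = F.softmax(logits, dim=1)                 # (B, C, H, W)
    confidences, predictions = probs.max(dim=1)      # (B, H, W)

    # Zero-weight pixels that are already confidently misclassified:
    # perturbing them further would waste attack budget.
    already_fooled = (predictions != targets) & (confidences > confidence_threshold)

    # Weight the remaining pixels by normalized prediction entropy, so that
    # uncertain (easily perturbed) pixels dominate the attack gradient.
    # (Entropy is one plausible uncertainty measure, assumed here for illustration.)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (B, H, W)
    entropy = entropy / math.log(logits.shape[1])
    weights = entropy * (~already_fooled).float()

    return (weights.detach() * pixel_loss).mean()


if __name__ == "__main__":
    # Tiny synthetic demo: one gradient-ascent (FGSM-style) attack step.
    torch.manual_seed(0)
    model = torch.nn.Conv2d(3, 5, kernel_size=1)  # stand-in "segmentation net"
    x = torch.rand(2, 3, 8, 8, requires_grad=True)
    y = torch.randint(0, 5, (2, 8, 8))
    loss = uncertainty_weighted_loss(model(x), y)
    loss.backward()
    x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1).detach()
```

Detaching the weights is a deliberate choice in this sketch: the attack gradient should flow only through the per-pixel loss terms, not through the weighting itself, which keeps the added computational overhead minimal.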


Publication date: 26 Oct 2023
Project Page: https://arxiv.org/abs/2310.17436v1
Paper: https://arxiv.org/pdf/2310.17436