This paper studies unlearnable example attacks: data poisoning attacks that degrade the accuracy of deep learning models by adding imperceptible perturbations to training samples. The attack is formulated as a Stackelberg game between the poison attacker and the classifier trainer. Building on this game-theoretic formulation, the authors propose the Game Unlearnable Example (GUE) attack, which uses a generative network as the poison attacker and introduces a novel payoff function to evaluate the poison's performance. Experiments show that GUE effectively poisons models across a variety of scenarios.
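
To make the setup concrete, below is a minimal alternating-optimization sketch of the attacker/trainer game in PyTorch. Everything here is an illustrative assumption rather than the paper's exact method: `PerturbationGenerator`, the `EPS` budget, the single-step alternation, and the error-minimizing stand-in payoff (driving the poisoned training loss down so the classifier latches onto the noise) all substitute for GUE's actual generator architecture, game solution, and payoff function.

```python
# Sketch of a Stackelberg-style unlearnable-example loop (an assumption,
# not GUE's exact algorithm): the attacker perturbs the training data,
# and the trainer responds by fitting a classifier on the poisoned data.
import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 8 / 255  # assumed L-infinity budget for the imperceptible perturbation

class PerturbationGenerator(nn.Module):
    """Poison attacker: maps a clean image to a bounded perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the perturbation inside the L-infinity ball of radius EPS
        return EPS * torch.tanh(self.net(x))

def alternating_step(gen, clf, gen_opt, clf_opt, x, y):
    """One round of the two-player game on a batch (x, y)."""
    # Trainer's move: fit the classifier on the poisoned batch. detach()
    # mirrors the sequential structure of the game -- the trainer takes
    # the attacker's perturbation as given and cannot influence it.
    x_poison = (x + gen(x).detach()).clamp(0, 1)
    clf_opt.zero_grad()
    F.cross_entropy(clf(x_poison), y).backward()
    clf_opt.step()

    # Attacker's move: update the generator against the trainer's current
    # response. As a stand-in payoff, minimize the poisoned training loss
    # so the classifier learns the shortcut noise instead of real features
    # (error-minimizing style); GUE's actual payoff function differs.
    gen_opt.zero_grad()
    x_poison = (x + gen(x)).clamp(0, 1)
    F.cross_entropy(clf(x_poison), y).backward()
    gen_opt.step()
```

One structural note: a faithful Stackelberg solution would also require the leader to optimize through the follower's best response, which this simple alternation only approximates by replaying the game one step at a time.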

Publication date: 1 Feb 2024
Project Page: https://github.com/hong-xian/gue
Paper: https://arxiv.org/pdf/2401.17523