Game-Theoretic Unlearnable Example Generator
This paper discusses unlearnable example attacks: data poisoning attacks that degrade the accuracy of deep learning models by adding imperceptible perturbations to training samples. The attack is…
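To make the setup concrete, here is a hypothetical toy sketch (not the paper's generator) of the constraint such attacks work under: each training sample is modified by a perturbation bounded in an L-infinity ball of radius eps, so the poisoned image stays visually indistinguishable from the original. The `perturb` helper and the `EPS` budget are illustrative assumptions, not from the paper.

```python
import numpy as np

# Illustrative imperceptibility budget for images scaled to [0, 1];
# 8/255 is a common choice in the poisoning literature, assumed here.
EPS = 8.0 / 255.0

def perturb(x: np.ndarray, delta: np.ndarray, eps: float = EPS) -> np.ndarray:
    """Apply a candidate perturbation, clipped to the eps ball,
    while keeping every pixel a valid intensity in [0, 1]."""
    delta = np.clip(delta, -eps, eps)    # enforce imperceptibility bound
    return np.clip(x + delta, 0.0, 1.0)  # keep the result a valid image

rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))                  # a dummy "training image"
delta = rng.normal(scale=0.1, size=x.shape)  # any candidate perturbation
x_poisoned = perturb(x, delta)

# The poisoned sample never deviates from the clean one by more than eps.
assert np.max(np.abs(x_poisoned - x)) <= EPS + 1e-9
```

An actual unlearnable-example generator would optimize `delta` (for instance, against a surrogate model) rather than sample it randomly; the sketch only shows the perturbation constraint itself.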