The article discusses the importance of machine unlearning, particularly in the context of data regulations such as the GDPR. Unlearning means removing the influence of specific training data from a trained model, which is a challenging task. The authors address the zero-shot unlearning scenario, in which the algorithm is given only the trained model and the data to be forgotten, with no access to the remaining training data. They present a method based on Lipschitz continuity that smooths the model's output around each forget sample, achieving successful forgetting while preserving overall model performance. Evaluated on a range of benchmarks, the method achieves state-of-the-art results under the strict constraints of zero-shot unlearning.
Publication date: 2 Feb 2024
Project Page: https://github.com/jwf40/Zeroshot-Unlearning-At-Scale
Paper: https://arxiv.org/pdf/2402.01401
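The Lipschitz-smoothing idea can be sketched as follows: for each forget sample, perturb the input with small random noise and penalise the ratio of the output change to the input change, which approximates the model's local Lipschitz constant and pushes the output toward a locally flat response around that sample. Below is a minimal PyTorch sketch of that idea; the function name, noise scale `sigma`, and number of perturbations are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def lipschitz_smoothing_step(model, optimizer, forget_batch,
                             sigma=0.1, n_perturb=5):
    """One unlearning step: penalise an estimate of the local Lipschitz
    constant of the model around each forget sample (illustrative sketch)."""
    model.train()
    optimizer.zero_grad()
    out = model(forget_batch)
    loss = forget_batch.new_zeros(())
    for _ in range(n_perturb):
        noise = sigma * torch.randn_like(forget_batch)
        out_noisy = model(forget_batch + noise)
        # ||f(x) - f(x + eps)|| / ||eps|| estimates the local Lipschitz constant
        num = (out - out_noisy).flatten(1).norm(dim=1)
        den = noise.flatten(1).norm(dim=1)
        loss = loss + (num / den).mean()
    loss = loss / n_perturb
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: smoothing a small linear model around a "forget" batch.
torch.manual_seed(0)
model = torch.nn.Linear(8, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
forget = torch.randn(16, 8)
losses = [lipschitz_smoothing_step(model, opt, forget) for _ in range(50)]
```

Repeated steps drive the output to vary less under input perturbations near the forget samples, which is the smoothing effect the paper exploits for forgetting.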