The paper introduces DPSUR, a privacy-preserving machine learning framework that selectively applies and releases model updates during training. The technique is designed to defend against privacy attacks such as model inversion and membership inference, which have become increasingly problematic as machine learning models are known to memorize training data. DPSUR addresses the slow convergence and severe utility loss that plague existing methods such as DPSGD by evaluating each iteration's gradient update against a validation test and applying only those updates that move the model toward convergence. Experiments reported in the paper show that DPSUR outperforms prior work in both convergence speed and model utility.
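The selective-update idea can be sketched on a toy problem. This is not the paper's actual algorithm (which carefully accounts for the privacy cost of accept/reject decisions); it is a minimal illustration in which a clipped, noised gradient step is applied only when it does not increase a validation loss. The function names, the quadratic objective, and all constants are my own choices for illustration.

```python
import random

random.seed(0)


def clip_and_noise(grad, clip_norm=1.0, noise_scale=0.1):
    # DP-SGD-style step: clip the gradient to bound sensitivity,
    # then add Gaussian noise (the Gaussian mechanism).
    g = max(-clip_norm, min(clip_norm, grad))
    return g + random.gauss(0.0, noise_scale)


def train_selective(steps=200, lr=0.5, threshold=0.0):
    # Toy objective: f(w) = (w - 3)^2, minimized at w = 3.
    w = 0.0
    val_loss = (w - 3.0) ** 2
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)
        candidate = w - lr * clip_and_noise(grad)
        cand_loss = (candidate - 3.0) ** 2
        # Selective update: keep the step only if the validation loss
        # does not increase beyond the threshold; otherwise discard it.
        if cand_loss - val_loss <= threshold:
            w, val_loss = candidate, cand_loss
    return w


w = train_selective()
```

Because harmful noisy steps are rejected rather than applied, the iterate drifts monotonically toward the optimum despite the injected noise, which is the intuition behind the faster convergence claimed for DPSUR.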
Publication date: 27 Nov 2023
Project Page: N/A
Paper: https://arxiv.org/pdf/2311.14056