The paper addresses performance degradation in Differentially Private Stochastic Gradient Descent with gradient clipping (DPSGD-GC), a standard method for training deep learning models with differential privacy. The authors propose an error-feedback (EF) DP algorithm that offers a diminishing utility bound without a constant clipping bias and allows flexible choices of the clipping threshold. The algorithm's privacy guarantee is established via Rényi DP, and under certain conditions it achieves nearly the same utility bound as DPSGD without gradient clipping. Empirical results on the CIFAR-10/100 and E2E datasets show that the proposed algorithm achieves higher accuracy than DPSGD while maintaining the same level of DP guarantee.
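To make the error-feedback idea concrete, here is a minimal sketch of one clipped, noised SGD step with an error-feedback buffer. The function name, hyperparameters, and the exact update rule are illustrative assumptions and do not reproduce the paper's precise algorithm or its privacy accounting; the sketch only shows the general mechanism of re-injecting the clipped-away residual so the clipping bias does not persist.

```python
import numpy as np

def l2_clip(v, c):
    """Scale v so its L2 norm is at most c."""
    norm = np.linalg.norm(v)
    return v if norm <= c else v * (c / norm)

def ef_clipped_dp_step(params, grad, err, lr=0.1, clip_c=1.0,
                       sigma=1.0, rng=np.random.default_rng(0)):
    """One hypothetical step of clipped SGD with error feedback.

    `err` accumulates the part of the gradient removed by clipping and is
    re-injected at the next step, so the clipping bias diminishes over
    iterations instead of staying constant.
    """
    corrected = grad + err                 # re-inject past clipping error
    clipped = l2_clip(corrected, clip_c)   # bounded-sensitivity update
    err = corrected - clipped              # residual carried to next step
    noise = rng.normal(0.0, sigma * clip_c, size=params.shape)  # Gaussian DP noise
    params = params - lr * (clipped + noise)
    return params, err

# Toy usage on a 1-D quadratic f(w) = 0.5 * w**2 (gradient = w).
w, e = np.array([5.0]), np.zeros(1)
for _ in range(100):
    w, e = ef_clipped_dp_step(w, w.copy(), e, lr=0.05, clip_c=0.5, sigma=0.1)
```

In this toy setting the residual buffer lets the iterates make progress even when the clipping threshold is much smaller than the true gradient norm, which is the intuition behind the paper's removal of the constant clipping bias.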

Publication date: 24 Nov 2023
Project Page: https://arxiv.org/abs/2311.14632v1
Paper: https://arxiv.org/pdf/2311.14632