Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach
The paper addresses performance degradation in Differentially Private Stochastic Gradient Descent with gradient clipping (DPSGD-GC), a standard tool for training deep learning models under privacy constraints. Clipping bounds each per-sample gradient's norm so that calibrated noise can be added, but it also biases the update direction. The authors propose an error-feedback mechanism aimed at removing this clipping bias.
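Since only this preview is available, the following is a minimal illustrative sketch, not the authors' algorithm: it combines standard DP-SGD (per-sample clipping plus Gaussian noise) with a generic error-feedback buffer that accumulates the gradient mass removed by clipping and re-injects a bounded portion of it in later steps. All names (`dpsgd_error_feedback_step`, `error_buffer`, `clip_norm`, `noise_mult`) are assumptions for illustration, and the sketch carries no privacy accounting; how a feedback term can be added without breaking the DP guarantee is precisely what the paper analyzes.

```python
import numpy as np


def dpsgd_error_feedback_step(params, per_sample_grads, error_buffer,
                              clip_norm=1.0, noise_mult=1.0, lr=0.1, rng=None):
    """One DP-SGD step with per-sample clipping plus an error-feedback buffer.

    per_sample_grads: (batch, dim) array of per-example gradients.
    error_buffer:     (dim,) running record of gradient mass removed by
                      clipping, re-injected later to reduce clipping bias.
    """
    if rng is None:
        rng = np.random.default_rng()
    batch, dim = per_sample_grads.shape
    clipped = np.empty_like(per_sample_grads)
    residual = np.zeros(dim)
    for i, g in enumerate(per_sample_grads):
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped[i] = g * scale
        residual += g - clipped[i]                 # what clipping discarded
    # Standard Gaussian-mechanism noise on the clipped sum.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=dim)
    noisy_mean = (clipped.sum(axis=0) + noise) / batch
    # Accumulate the clipping error and re-inject a norm-bounded portion of it.
    error_buffer = error_buffer + residual / batch
    fb_scale = min(1.0, clip_norm / (np.linalg.norm(error_buffer) + 1e-12))
    feedback = error_buffer * fb_scale
    new_params = params - lr * (noisy_mean + feedback)
    return new_params, error_buffer - feedback


# Toy usage: a single step on random data with deliberately large gradients,
# so clipping removes mass and the buffer becomes nonzero.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = np.zeros(5)
    grads = rng.normal(size=(8, 5)) * 3.0
    buf = np.zeros(5)
    params, buf = dpsgd_error_feedback_step(params, grads, buf, rng=rng)
    print("updated params:", params)
    print("remaining error buffer:", buf)
```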