The paper discusses Deep Gradient Leakage (DGL), an attack that recovers private training images from gradient vectors and thereby poses significant privacy challenges in distributed learning. To better understand and defend against such attacks, the authors propose a novel Inversion Influence Function (I2F) that establishes a closed-form connection between the recovered images and the private gradients. Compared with directly solving the DGL problem, I2F is a scalable tool for analyzing deep networks, requiring only oracle access to gradients and Jacobian-vector products. The authors demonstrate that I2F effectively approximates DGL across different model architectures, datasets, attack implementations, and noise-based defenses. The tool yields insights into effective gradient perturbation directions, the unfairness of privacy protection, and privacy-preferred model initializations.
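To make the "gradients and Jacobian-vector products only" claim concrete, below is a minimal PyTorch sketch (not the authors' released code) of how an I2F-style sensitivity ||J†δ|| might be estimated, where J = ∂g(x)/∂x is the Jacobian of the parameter gradient with respect to the private input and δ is a perturbation of the shared gradient. The toy model, the damping term, and the conjugate-gradient solver are illustrative assumptions; see the project page for the actual implementation.

```python
# Sketch: estimate an I2F-style quantity ||J^dagger delta|| = ||(J^T J)^{-1} J^T delta||
# using only gradient and Jacobian-vector-product oracles (no explicit Jacobian).
# The model, data, damping, and iteration counts below are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(1, 1, 28, 28)   # private input
y = torch.tensor([3])           # private label

def grad_oracle(inp):
    """g(x): flattened parameter gradient, kept differentiable w.r.t. the input."""
    loss = loss_fn(model(inp), y)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def jvp(v):
    """J v: maps an input-space direction to gradient space (forward-over-reverse)."""
    return torch.autograd.functional.jvp(grad_oracle, x, v)[1]

def vjp(u):
    """J^T u: maps a gradient-space direction back to input space (reverse mode)."""
    return torch.autograd.functional.vjp(grad_oracle, x, u)[1]

def i2f_estimate(delta, iters=50, damping=1e-6):
    """Approximate ||(J^T J)^{-1} J^T delta|| with conjugate gradient,
    touching J only through the jvp/vjp oracles above."""
    b = vjp(delta)                           # J^T delta, lives in input space
    w = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = (r * r).sum()
    for _ in range(iters):
        Ap = vjp(jvp(p)) + damping * p       # (J^T J + damping * I) p
        alpha = rs / (p * Ap).sum()
        w, r = w + alpha * p, r - alpha * Ap
        rs_new = (r * r).sum()
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w.norm()

g = grad_oracle(x).detach()
delta = 1e-3 * torch.randn_like(g)           # e.g. a Gaussian-noise defense
print(f"I2F estimate: {i2f_estimate(delta).item():.4e}")
```

In this reading, a larger value indicates that the gradient perturbation moves the DGL-recovered image further from the private image, i.e. stronger protection; comparing the estimate across perturbation directions is what motivates the paper's findings on effective perturbations and privacy-preferred initializations.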


Publication date: 22 Sep 2023
Project Page: https://github.com/illidanlab/inversion-influence-function
Paper: https://arxiv.org/pdf/2309.13016