This article summarizes a paper on distillation-based federated learning, a collaborative learning approach that reduces the risk of privacy-invasion attacks and supports heterogeneous client models by exchanging logit vectors instead of model parameters. The authors observe that traditional data poisoning strategies target model parameters and therefore do not transfer directly to logit-based exchange. To address this gap, they propose a two-stage logit poisoning attack together with an efficient defense algorithm. The study highlights the significant threat posed by logit poisoning and demonstrates the effectiveness of the proposed defense.
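To make the setting concrete, the following toy sketch illustrates the general idea of logit-based exchange and why it is attackable: clients share logit vectors on public samples, a single malicious client submits inverted logits, and a robust aggregation rule (here, a plain coordinate-wise median, chosen for illustration) resists the skew that naive averaging suffers. All names, values, and the specific attack and defense here are illustrative assumptions, not the paper's actual two-stage scheme or defense algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 honest clients and 1 attacker share logits for 5 public
# samples over 3 classes (sizes and noise levels are arbitrary choices).
n_samples, n_classes = 5, 3
true_class = rng.integers(0, n_classes, size=n_samples)

def honest_logits():
    # Honest clients roughly agree: high logit on the true class plus noise.
    z = rng.normal(0.0, 0.3, size=(n_samples, n_classes))
    z[np.arange(n_samples), true_class] += 4.0
    return z

clients = [honest_logits() for _ in range(4)]

# Crude logit poisoning (illustrative only): the attacker scales and
# inverts its logits so the aggregate is pulled away from the true class.
clients.append(-10.0 * honest_logits())

stacked = np.stack(clients)        # shape: (n_clients, n_samples, n_classes)

# Naive aggregation: the mean is easily skewed by one extreme attacker.
mean_agg = stacked.mean(axis=0)

# Simple robust aggregation: the coordinate-wise median ignores a single
# extreme contributor as long as honest clients form the majority.
median_agg = np.median(stacked, axis=0)

mean_acc = (mean_agg.argmax(axis=1) == true_class).mean()
median_acc = (median_agg.argmax(axis=1) == true_class).mean()
print(f"mean-aggregated accuracy:   {mean_acc:.2f}")
print(f"median-aggregated accuracy: {median_acc:.2f}")
```

Running this, the median-aggregated logits recover the honest consensus while the mean is dominated by the attacker, which is the core vulnerability-and-mitigation pattern the paper studies in a far more refined form.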
Publication date: 1 Feb 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.17746