This paper introduces Brave, a protocol that ensures Byzantine resilience and privacy preservation for Peer-to-Peer (P2P) Federated Learning (FL). In FL, multiple participants collaboratively train a global machine learning model without sharing their private data. P2P FL advances existing centralized FL paradigms by eliminating the need for a central server. However, P2P FL is vulnerable to honest-but-curious participants who try to infer private data from others, and to Byzantine participants who manipulate their local models to disrupt the learning process. Brave is designed to tackle both challenges and is shown to be effective against three state-of-the-art adversaries in P2P FL for image classification tasks on the benchmark datasets CIFAR-10 and MNIST.
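The abstract does not spell out Brave's aggregation rule, so the sketch below is only a rough, hypothetical illustration of how a peer in a P2P round might filter Byzantine model updates. It uses a coordinate-wise trimmed mean, a standard Byzantine-robust aggregation technique, and the function and parameter names are invented here; the protocol in the paper may work quite differently.

```python
import numpy as np

def trimmed_mean_aggregate(peer_updates, num_byzantine):
    """Aggregate peer model updates with a coordinate-wise trimmed mean.

    peer_updates: list of 1-D numpy arrays (flattened local updates), one per
                  peer. This flat-vector representation is an assumption.
    num_byzantine: assumed upper bound f on Byzantine peers; the f largest and
                   f smallest values of each coordinate are discarded.
    """
    stacked = np.stack(peer_updates)        # shape: (n_peers, n_params)
    sorted_vals = np.sort(stacked, axis=0)  # sort each coordinate across peers
    f = num_byzantine
    trimmed = sorted_vals[f:stacked.shape[0] - f]  # drop f extremes per side
    return trimmed.mean(axis=0)             # average the remaining values

# Example: 5 peers, the last of which submits an outlier (Byzantine) update.
updates = [np.array([0.10, 0.20]), np.array([0.12, 0.19]),
           np.array([0.11, 0.21]), np.array([0.09, 0.20]),
           np.array([100.0, -100.0])]
print(trimmed_mean_aggregate(updates, num_byzantine=1))  # close to honest mean
```

In this toy example, trimming one value from each end of every coordinate removes the malicious peer's influence, so the aggregate stays close to the honest peers' average even without a central server performing the filtering.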

 

Publication date: 10 Jan 2024
Project Page: arXiv:2401.05562v1
Paper: https://arxiv.org/pdf/2401.05562