The article introduces WIMA (Window-based Model Averaging), a method designed to improve the robustness and generalization of the global model in Federated Learning (FL). FL aims to learn a shared global model from distributed users while preserving their privacy. However, when data are heterogeneously distributed across clients, the learning process can become noisy and biased. WIMA addresses these issues by averaging the global models produced over a window of recent rounds, which captures knowledge from the many users sampled across those rounds and reduces the bias introduced by the most recently sampled clients. The authors claim that WIMA adds no extra communication or client-side computation overhead and can be combined with other FL algorithms. Its effectiveness is demonstrated on several standard FL benchmarks.
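To make the idea concrete, below is a minimal server-side sketch of window-based model averaging, not the authors' implementation. It assumes the windowed model is a uniform average of the last `W` global models produced by standard FedAvg; the window size `W=10`, the helper names `fedavg_aggregate` and `WindowAverager`, and whether the smoothed model replaces the broadcast model or is only used for evaluation are all illustrative assumptions, not details taken from the paper.

```python
from collections import deque
import numpy as np


def fedavg_aggregate(client_states, client_weights):
    """Standard FedAvg: weighted average of client parameter dicts (illustrative)."""
    total = sum(client_weights)
    return {
        key: sum(w * s[key] for s, w in zip(client_states, client_weights)) / total
        for key in client_states[0]
    }


class WindowAverager:
    """Keeps the last `window` global models and returns their uniform average."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def update(self, global_state):
        # Store a float copy of the newest global model, then average the window.
        self.history.append({k: np.asarray(v, dtype=np.float64)
                             for k, v in global_state.items()})
        return {k: sum(s[k] for s in self.history) / len(self.history)
                for k in self.history[0]}


# Sketch of the server loop: aggregate client updates as usual, then smooth the
# resulting global model with the window average (WIMA-style), at no extra
# communication or client-side cost.
averager = WindowAverager(window=10)          # window size is an assumption
global_state = {"w": np.zeros(3)}             # toy model parameters
for rnd in range(100):
    # In a real system these would come from the sampled clients' local training.
    client_states = [{"w": global_state["w"] + np.random.randn(3)} for _ in range(5)]
    client_weights = [1.0] * len(client_states)
    global_state = fedavg_aggregate(client_states, client_weights)
    smoothed_state = averager.update(global_state)   # windowed global model
```

Because the averaging happens entirely on the server over models it already holds, this kind of smoothing composes naturally with other aggregation rules, which is consistent with the article's claim that WIMA can be integrated with other FL algorithms.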


Publication date: 3 Oct 2023
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2310.01366