The article presents FLex&Chill, a new model training approach for federated learning that exploits the Logit Chilling method. The approach targets the data heterogeneity challenges of federated learning systems, in particular non-IID (not independent and identically distributed) data across clients, and aims to expedite global model convergence and improve inference accuracy. The study reports up to a 6x improvement in global federated learning model convergence time and up to a 3.37% gain in inference accuracy. The paper also discusses the potential of federated learning in distributed environments, its challenges, and ongoing efforts to optimize both local training and server-side aggregation.
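The summary does not spell out the mechanics of Logit Chilling, so the following is only a minimal sketch, assuming the method amounts to dividing the logits by a softmax temperature below 1 during local client training so the output distribution is sharpened before the loss is computed. The function name, model, and temperature value are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a "chilled" local training step (temperature < 1).
import torch
import torch.nn as nn
import torch.nn.functional as F


def local_train_step(model, batch, optimizer, temperature=0.5):
    """One local training step with logits scaled by a temperature < 1.

    Dividing logits by T < 1 magnifies them, producing a sharper
    (lower-entropy) softmax than standard training at T = 1.
    """
    inputs, labels = batch
    optimizer.zero_grad()
    logits = model(inputs)
    loss = F.cross_entropy(logits / temperature, labels)
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage on random data, just to show the call shape.
if __name__ == "__main__":
    model = nn.Linear(10, 3)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    batch = (torch.randn(8, 10), torch.randint(0, 3, (8,)))
    print(local_train_step(model, batch, optimizer, temperature=0.5))
```

In a federated setting, each client would run such chilled local steps before sending its updated weights to the server for aggregation; the aggregation side is unchanged in this sketch.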

Publication date: 19 Jan 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.09986